Xinyuan Wang

Hi! I am Xinyuan Wang (王心远).


3869 Miramar St., Box 1214

La Jolla, California, USA, 92092

I am an MSCS student at the University of California, San Diego (UCSD), expected to graduate in Jun. 2024, and I plan to pursue a Ph.D. in the future. I am fortunate to be mentored by professors at UCSD in both Natural Language Processing and Computer Vision, and to have hands-on experience in both fields. I am currently mentored by Prof. Zhiting Hu and postdoctoral researcher Zhen Wang on Large Language Model (LLM) reasoning, agents, and prompting. I am also mentored by Prof. Zhuowen Tu on generative models (diffusion models). Before UCSD, I graduated from Central South University (CSU) in Hunan, China, where I was mentored by Prof. Ying Zhao.

Research Interests

  • Large Language Models (LLMs) with World Models: Augmenting LLMs with a world model formulation to enable principled decision-making, planning, and simulation. Enhancing the LLM’s abilities in reasoning, planning, and interacting with the world. (LLM Reasoners)
  • Foundation Model Prompting: Employing interpretable prompting to bridge the domain gap between user objectives and the outputs of foundation models. Effectively boosting the performance of foundation models on complex tasks through efficient and effective prompting. (PromptAgent)
  • Semantic Enhancement and Control of Generative Models: Generative models, such as text-to-image models, sometimes exhibit semantic inconsistencies and are difficult to control. My aim is to integrate semantic information into these models during training or inference to enhance their semantic fidelity, reliability, and controllability.

Research Overview

My research interests are LLM Augmentation (Prompting, Reasoning), LLM Agents, Unsupervised Learning (Generative Models), and Multi-modal Models. In Prof. Zhiting Hu’s group, I worked on automatic LLM prompt optimization with Zhen. Recently, our paper PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization was accepted to ICLR 2024. I am also working on LLM reasoning by contributing to the LLM Reasoners library, which brings together recent LLM reasoning methods and models. In Prof. Zhuowen Tu’s group, we are working on how to improve diffusion models’ conceptual performance with an end-to-end loss. During my undergraduate years, I was mentored by Prof. Ying Zhao and worked on the interpretation and visualization of Convolutional Neural Networks. Here is my graduation thesis: The Research on the Interpretability Method of Deep Neural Network Based on Average Image.

How to contact me

Email: xiw136@ucsd.edu (until Jun. 2024) / xywang626@gmail.com

News

Jan 16, 2024 PromptAgent is accepted to ICLR 2024 (The Twelfth International Conference on Learning Representations)!
Nov 17, 2023 PromptAgent’s poster is presented at SoCal NLP 2023 at UCLA, Los Angeles, CA!
Oct 25, 2023 Paper published on arXiv! PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
Sep 1, 2022 Started my Master of Science in Computer Science program at UC San Diego!
Jun 1, 2022 Graduated from Central South University!

Selected Publications

  1. PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
    Xinyuan Wang, Chenxi Li, Zhen Wang, and 6 more authors
    [ICLR 2024] The Twelfth International Conference on Learning Representations, 2024
  2. Reduce the medical burden: An automatic medical triage system using text classification BERT based on Transformer structure
    Xinyuan Wang, Make Tao, Runpu Wang, and 1 more author
    In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), 2021
  3. A Fast Method for Detecting Minority Structures in a Graph
    Fangfang Zhou, Qi’an Chen, Yunlong Cui, and 4 more authors
    In Proceedings of the 13th International Symposium on Visual Information Communication and Interaction, 2020