Zhen Wang / 王震

Hi! I'm Zhen, currently a postdoctoral researcher at UC San Diego, working with Prof. Zhiting Hu and Prof. Eric P. Xing. I earned my PhD from The Ohio State University, advised by Prof. Huan Sun. I have the privilege of working closely with leading research teams at UCSD, CMU, and MBZUAI.

My research focuses on machine learning and natural language processing, developing adaptive and reliable AI systems that enhance human productivity and creativity.

When I’m tired of chatting with ChatGPT, you’ll likely find me outdoors. I love hiking, playing pickleball, taking road trips, and visiting national parks whenever possible. I’m also a passionate sports fan, cheering for the Buckeyes, Dodgers, Lakers, Inter Miami, and Chiefs (at least for now!).

Email  /  GitHub  /  Twitter  /  Google Scholar

profile photo
On a rooftop in Anchorage, Alaska, 2019

Research Overview

My research revolves around the interplay between World Models (WMs) and Language Models (LMs), advancing the frontier of machine reasoning and self-optimization. By enhancing LMs with WMs and enabling them to self-improve, I aim to create AI systems that proactively assist both non-experts and experts in tackling real-world challenges, especially in scientific discovery. Specifically, my research addresses the following questions with systematic and principled approaches.

  • World Models and Language Models (WMxLM): WMs form the cornerstone of next-generation foundation models. How can we enable next-generation reasoning in LMs through a WM formulation? How can we leverage state-of-the-art (SOTA) LMs as the foundation for building more powerful WMs? How can we integrate WMs into LMs to simulate complex (domain-specific) environments and reliably predict the future?
  • Self-Optimized Language Model Systems: How can LMs dynamically optimize and adapt their behavior in real time? How can an LM reflect on its own errors and correct itself on the fly? How can we optimize LMs at inference time, possibly with the help of WMs? How can models self-optimize within a multi-agent system?
  • AI-Accelerated Scientific Discovery: How can AI systems, particularly agentic LMs augmented with world models and self-optimization, accelerate breakthroughs in scientific research? How can we build agentic scientific systems that boost scientists' research productivity? How can we generate and validate highly plausible scientific hypotheses? How can we build more powerful scientific foundation models and world models?

Research Opportunities: I am always looking for highly motivated students, particularly from underrepresented groups, to join me on research projects during both the school year and the summer. If you are interested in LLM augmentation (reasoning, tool use, planning, etc.), LLM agents, or AI4Science research, please email me to express your interest.

News


Source code from Leonid Keselman, design and inspiration from Jon Barron and Dongkuan.