Zhen Wang / ηŽ‹ιœ‡

Hi! I'm a postdoctoral researcher working with Prof. Eric Xing from CMU/MBZUAI and Prof. Zhiting Hu from UCSD. My research interests lie in natural language processing and machine learning. I received my PhD from The Ohio State University, advised by Prof. Huan Sun.

In summer 2022, I interned at the MIT-IBM Watson AI Lab, working with Rameswar, Yoon, Leonid, and Rogerio on efficient adaptation of large language models. In summer 2021, I was a research intern with the NLP group at Microsoft Research, Redmond, working with Nebojsa and Kolya on coherence boosting and prompt calibration for GPT-3. In summer 2020, I was a research intern on the Data Science team at NEC Labs America, working with Bo Zong on commonsense knowledge representation and reasoning.

Email  /  CV (Feb 2023)  /  GitHub  /  Twitter  /  Google Scholar


Research Overview

My research is rooted in human-centered AI and aims to infuse machine learning models, especially foundation models, with a human-like understanding of the world and of knowledge, enabling AI to serve as both a reliable assistant and an insightful collaborator. I believe future AI systems must not only augment human capabilities but also resonate with human values, support human understanding, remain accessible, and solve problems proactively. My ultimate goal is not to build AI that merely replicates or replaces human abilities, but to develop systems that enrich human experiences, amplify human potential, honor human values, and actively collaborate with humans on real-world challenges.

  1. Interpreting and steering ML models towards human values: Transparency is key in human-centered AI. I develop methodologies that open up black-box models to enhance our understanding of their behavior and ensure they align with human values via more robust control and prompting techniques. [ACL 2020] [ACL 2022] [ACL 2023] [New preprint]
  2. Adapting and transferring knowledge for dynamic human needs: Human-centered AI demands systems that can swiftly adapt and learn in response to the changing needs and circumstances of their human users. I develop efficient methods to transfer knowledge between AI systems and adapt them across diverse tasks and domains for greater accessibility. [ICLR 2023] [NAACL SUKI 2022] [EACL 2023]
  3. Augmenting models to proactively solve real-world problems for humans: An active problem-solving drive is a distinctive trait of human intelligence. I aim to elevate AI systems from passive responders into proactive problem solvers that interact with the physical world and novel domains. [NeurIPS 2023] [EMNLP 2023]

Research Opportunities: I am always looking for highly motivated students, particularly from underrepresented groups, to join me on research projects both during the school year and over the summer. If you are eager to strengthen your research skills, please email me to express your interest.



Publications
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization

Xinyuan Wang*, Chenxi Li*, Zhen Wang*, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric Xing, Zhiting Hu
Presented at SoCal NLP 2023
PDF / Code

GPT Is Becoming a Turing Machine: Here Are Some Ways to Program It

Ana Jojic, Zhen Wang, Nebojsa Jojic



Reasoning with Language Model is Planning with World Model

Shibo Hao*, Yi Gu*, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu
[EMNLP 2023] (Oral, Main) The 2023 Conference on Empirical Methods in Natural Language Processing
Also presented at NeurIPS GenPlan'23 Workshop / SoCal NLP 2023
PDF / Code / Slides / Poster / Featured in State of AI Report 2023

LLMs lack internal world models for effective reasoning. Reasoning via Planning (RAP) reformulates LLM reasoning as a planning problem, seamlessly incorporating a world model and principled planning. The framework applies across a wide range of tasks and opens an exciting direction for LLM augmentation research.


ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings

Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu
[NeurIPS 2023] (Oral) Thirty-seventh Conference on Neural Information Processing Systems
Also presented at SoCal NLP 2023, Best Paper Award
PDF / Code / Slides / Poster

ToolkenGPT augments LLMs with massive tools/APIs by representing each tool as a token ("toolken") and invoking tools the same way the model generates regular words. ToolkenGPT is highly efficient for learning massive tool sets, as plugging in a new tool is as easy as learning one embedding.
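The tool-as-token idea can be illustrated with a minimal numpy sketch (all shapes, names, and values here are toy assumptions, not the paper's implementation): the frozen LM head scores the regular vocabulary, while a small trainable matrix of tool embeddings is concatenated into the same softmax, so predicting a tool call is just predicting one more token.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab_size, num_tools = 8, 100, 3  # toy dimensions

# Frozen LM head: maps a hidden state to logits over regular words.
W_vocab = rng.normal(size=(vocab_size, d))
# Trainable tool embeddings ("toolkens"), one row per tool,
# learned while the rest of the LM stays frozen.
W_tool = rng.normal(size=(num_tools, d))

def next_token_logits(hidden):
    """Score regular words and tools in one shared softmax space."""
    word_logits = W_vocab @ hidden
    tool_logits = W_tool @ hidden
    return np.concatenate([word_logits, tool_logits])

h = rng.normal(size=d)           # stand-in for the LM's last hidden state
logits = next_token_logits(h)
best = int(np.argmax(logits))
if best >= vocab_size:
    print(f"call tool #{best - vocab_size}")
else:
    print(f"emit word id {best}")
```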


ThinkSum: Probabilistic Reasoning Over Sets Using Large Language Models

Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic
[ACL 2023] The 61st Annual Meeting of the Association for Computational Linguistics (Main)
PDF / Code / Slides / Poster

We propose a two-stage probabilistic inference paradigm, ThinkSum, to improve LLMs' ability to reason over multiple objects in two steps, Think (e.g., retrieval of associations) and Sum (e.g., aggregation of results), which outperforms chain-of-thought prompting on hard BIG-bench tasks.
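A toy sketch of the two-stage idea, with made-up probabilities (the aggregation operator varies by task in practice; a simple mixture is shown here): the Think stage produces per-association answer distributions, and the Sum stage aggregates them probabilistically instead of committing to a single chain of text.

```python
import numpy as np

# Think stage (hypothetical numbers): P(candidate answer | one retrieved
# association), one row per association, one column per candidate answer.
think_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
])

# Sum stage: aggregate across the set of associations (here an
# equal-weight mixture), then pick the most probable candidate.
sum_probs = think_probs.mean(axis=0)
answer = int(np.argmax(sum_probs))
print(answer)  # candidate 0 wins under these toy numbers
```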


Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning

Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
[ICLR 2023] The Eleventh International Conference on Learning Representations
PDF / Code / Slides / Poster / Huggingface PEFT PR

We propose Multitask Prompt Tuning (MPT) to exploit rich cross-task knowledge for more efficient and generalizable transfer learning. MPT learns a single transferable soft prompt via a novel combination of prompt decomposition and prompt distillation.
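A rough numpy illustration of the prompt decomposition (a toy sketch under my reading of the method, not the released implementation): each task's soft prompt is obtained by modulating one shared prompt elementwise with a low-rank, task-specific component.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 5, 16  # prompt length and embedding dimension (toy values)

# Single shared soft prompt distilled from the source tasks.
P_shared = rng.normal(size=(L, d))

def task_prompt(u_k, v_k):
    # Prompt decomposition: the task-specific prompt is the shared prompt
    # modulated elementwise by a rank-one matrix u_k v_k^T, so each new
    # task only adds L + d parameters instead of L * d.
    return P_shared * np.outer(u_k, v_k)

u_k, v_k = rng.normal(size=L), rng.normal(size=d)
P_k = task_prompt(u_k, v_k)  # ready to prepend to the input embeddings
print(P_k.shape)
```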


Entity Tracking via Effective Use of Multi-Task Learning Models

Janvijay Singh, Fan Bai, Zhen Wang
[EACL 2023] The 17th Conference of the European Chapter of the Association for Computational Linguistics (Main)
PDF / Code / Slides / Poster

How can multi-task knowledge from pre-training be transferred to niche downstream tasks, such as entity tracking on procedural text? We show that SOTA performance can be reached by simply fine-tuning T5 with specialized QA prompts and task-specific decoding.



Toward Knowledge-Centric NLP: Acquisition, Representation, Transfer, and Reasoning

Zhen Wang
The Ohio State University, Ph.D. Dissertation, 2022

Coherence Boosting: When Your Pretrained Language Model is Not Paying Enough Attention

Nikolay Malkin, Zhen Wang, Nebojsa Jojic
[ACL 2022] The 60th Annual Meeting of the Association for Computational Linguistics
PDF / Code / Slides / Poster (Long Paper, Oral Presentation)

We demonstrate that large language models insufficiently learn the effect of distant words on next-token prediction. We present Coherence Boosting, an inference procedure that increases an LM's focus on the long context and greatly improves performance on NLG and NLU tasks.
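The core contrast can be sketched in a few lines of numpy (toy logits and a hypothetical boosting strength Ξ±): the model's prediction from the full context is boosted against its prediction from a truncated, recent-words-only context, amplifying exactly the information the long context contributes.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

# Hypothetical next-token logits from the same LM, conditioned on the
# full context vs. a truncated (recent-words-only) context.
logits_full = np.array([1.2, 1.0, 0.0])
logits_short = np.array([2.0, 0.0, 0.0])

alpha = 0.5  # boosting strength (toy value)
boosted = (1 + alpha) * log_softmax(logits_full) - alpha * log_softmax(logits_short)

# The plain model picks token 0, but boosting the long-context signal
# flips the choice to token 1.
print(int(np.argmax(logits_full)), int(np.argmax(boosted)))  # β†’ 0 1
```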


Knowledge Transfer between Structured and Unstructured Sources for Complex Question Answering

Lingbo Mo*, Zhen Wang*, Jie Zhao, Huan Sun
[SUKI@NAACL 2022] NAACL 2022 Structured and Unstructured Knowledge Integration
PDF / Code / Slides / Poster *Equal contribution

We study knowledge transfer for multi-hop reasoning between structured (knowledge base) and unstructured (text corpus) knowledge. We design SimultQA, which unifies KBQA and TextQA systems, and leverage it to study how reasoning transfers between the two knowledge sources.



Bootstrapping a User-Centered Task-Oriented Dialogue System

Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Lingbo Mo, Samuel Stevens, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
[Alexa Prize TaskBot Challenge] 1st Proceedings of Alexa Prize TaskBot (Alexa Prize 2021)
PDF / Third-place honor in the TaskBot Finals!

We build TacoBot, a task-oriented dialogue system for the inaugural Alexa Prize TaskBot Challenge, to assist users in multi-step cooking and home improvement tasks. We propose several data augmentation methods, such as GPT-3 simulation, to bootstrap neural dialogue systems into new domains and make them more robust to noisy user initiatives.


Modeling Context Pair Interaction for Pairwise Tasks on Graphs

Zhen Wang, Bo Zong, Huan Sun
[WSDM 2021] The 14th ACM International Conference on Web Search and Data Mining
PDF / Code / Slides / Poster (Long Paper, Online Presentation)

We propose to explicitly model context interactions for pairwise prediction tasks on graphs from two perspectives, node-centric and pair-centric. We also propose to pre-train pair embeddings to facilitate the pair-centric model.



Rationalizing Medical Relation Prediction from Corpus-level Statistics

Zhen Wang, Jennifer Lee, Simon Lin, Huan Sun
[ACL 2020] The 58th Annual Meeting of the Association for Computational Linguistics
PDF / Code / Slides / Poster / Video (Long Paper, Online Presentation)

We propose a self-interpretable framework that rationalizes neural relation prediction based on corpus-level statistics. Inspired by human cognitive theories of recall and recognition, the framework provides structured knowledge triplets as rationales.


Graph Embedding on Biomedical Networks: Methods, Applications, and Evaluations

Xiang Yue, Zhen Wang, Jingong Huang, Srinivasan Parthasarathy, Soheil Moosavinasab, Yungui Huang, Simon Lin, Wen Zhang, Ping Zhang, Huan Sun
[Bioinformatics] Volume 36, Issue 4, 15 February 2020, Pages 1241-1251
PDF / Code / Slides / Poster

We benchmark 11 representative graph embedding methods on five important biomedical tasks. We verify the effectiveness of recent graph embedding methods and provide general guidelines for their usage.



SurfCon: Synonym Discovery on Privacy-Aware Clinical Data

Zhen Wang, Xiang Yue, Soheil Moosavinasab, Yungui Huang, Simon Lin, Huan Sun
[KDD 2019] The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
PDF / Code / Slides / Poster (Research Track, Long Paper, Oral Presentation)

We propose to discover structured knowledge (synonyms) from a privacy-aware clinical text corpus and present a novel framework that leverages both surface-form and context information to discover out-of-distribution synonyms.

Before 2019


A Comprehensive Study of StaQC for Deep Code Summarization

Jayavardhan Reddy Peddamail, Ziyu Yao, Zhen Wang, Huan Sun
[KDD 2018 Deep Learning Day] The 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
PDF / Code / Slides / Poster (SPOTLIGHT)

We examine three popular datasets mined from Stack Overflow on the code summarization task and show that StaQC (Stack Overflow Question-Code pairs) helps achieve substantially better results.


Hessian Regularized Sparse Coding for Human Action Recognition

Weifeng Liu, Zhen Wang, Dapeng Tao, Jun Yu
[MMM 2015] The 21st International Conference on Multimedia Modeling
PDF / Code / Slides / Poster / Bibtex

We propose Hessian regularized sparse coding (HessianSC) for action recognition, which preserves local geometry well and steers the sparse codes to vary linearly along the manifold of the data distribution.

Honors and Awards

  • Top Reviewer Award, NeurIPS, 2023
  • Best Paper Award, SoCal NLP, 2023
  • Third-Place Honor, Inaugural Alexa Prize TaskBot Challenge, 2022
  • Graduate Research Award, CSE, OSU, 2022
  • Graduate Student Research Poster Award (Top 5), CSE, OSU, 2021
  • SIGIR Student Travel Grant, 2021
  • Rising Stars in Data Science, Center for Data and Computing (CDAC), University of Chicago, January 2021
  • SIGKDD Student Travel Award, 2019
  • China Scholarship Council (CSC) Scholarship (fully funded visiting program in Polytech Nice Sophia), Nice, France, 2015
  • National Scholarship, China, 2014
  • Soong Ching Ling Foundation (SCLF) Scholarship, China, 2013
  • National Scholarship for Encouragement, China, 2012


Professional Service

    Area Chair/Senior PC Member:
    • NLPCC 2023
    Program Committee Member:
    • ACL ARR (Oct'21, Nov'21, Jan'22, Apr'22, Sep'22, Oct'22, Dec'22, Feb'23)
    • NAACL (2021, 2022; SUKI 2022 Workshop)
    • EMNLP (2021, 2022, 2023)
    • ACL (2021, 2023)
    • ICML 2023
    • NeurIPS 2023
    • ICLR 2024
    • KDD 2023
    • AAAI (2023, 2024)
    • NLPCC (2020, 2021, 2022)
    External Reviewer:
    • KDD (2019, 2020), ACL 2018, ICDM 2018

Source code from Leonid Keselman, design and inspiration from Jon Barron and Dongkuan.