Zhen Wang / 王震

Hi! I'm a computer science researcher working on natural language processing, machine learning, and data mining. I received my PhD from the Department of Computer Science and Engineering at The Ohio State University, where I was advised by Prof. Huan Sun.

In summer 2022, I interned at the MIT-IBM Watson AI Lab, working with Rameswar, Yoon, Leonid, and Rogerio on efficient adaptation of large language models. In summer 2021, I was a research intern in the NLP group at Microsoft Research, Redmond, working with Nebojsa and Kolya from Mila, studying coherence boosting and prompt calibration for GPT-3. In summer 2020, I was a research intern on the Data Science team at NEC Laboratories America, working with Bo Zong, exploring commonsense knowledge representation and reasoning.

Email  /  CV (Nov. 2022)  /  GitHub  /  Twitter  /  Google Scholar  /  LinkedIn  /  Semantic Scholar


Research

I am interested in empowering current AI systems with more explicit, human-understandable knowledge, aiming to make them more generalizable, interpretable, and data-efficient. My research lies at the nexus of natural language processing, deep learning, and data mining, and studies the "full stack" of knowledge-centric AI from the ground up: acquisition, representation, transfer, and reasoning. My long-term goal is to transfer the strengths of human learning (e.g., intuitive physics, commonsense reasoning) to the next generation of AI systems.

  • Knowledge Acquisition: Structured information extraction from text and graphs, knowledge graph construction, knowledge distillation from large language models
  • Knowledge Representation: Word representation learning, graph embedding learning, graph neural networks, commonsense concept learning
  • Knowledge Transfer: Transfer learning, multi-task learning, knowledge distillation, domain adaptation and generalization, few-shot learning
  • Knowledge Reasoning: Multi-hop reasoning over text and graphs (KG reasoning, complex QA), neuro-symbolic reasoning, commonsense reasoning
  • Applications: Natural language interfaces (dialogue systems, question answering), controllable text generation, text summarization, zero-/few-shot language model prompting, knowledge discovery for healthcare/bioinformatics

Feel free to reach out if you'd like to have a chat 🤗

Publications

Preprint

ThinkSum: Probabilistic Reasoning Over Sets Using Large Language Models


Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic
[arXiv] arXiv:2210.01293

2023

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning


Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
[ICLR 2023] The Eleventh International Conference on Learning Representations
PDF / Code / Slides / Poster

We propose Multitask Prompt Tuning (MPT) to exploit rich cross-task knowledge for more efficient and generalizable transfer learning. MPT learns a single transferable soft prompt via a novel combination of prompt decomposition and prompt distillation.
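
For intuition, here is a minimal PyTorch sketch of the prompt decomposition (illustrative code, not our released implementation): each task prompt is the Hadamard product of a shared prompt with a cheap rank-one, task-specific matrix.

    import torch
    import torch.nn as nn

    class DecomposedPrompt(nn.Module):
        """Task prompt P_k = P* ∘ (u_k v_k^T): cross-task knowledge lives in the
        shared prompt P*; task-specific knowledge in a rank-one matrix."""
        def __init__(self, num_tasks: int, prompt_len: int = 100, dim: int = 768):
            super().__init__()
            self.shared = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)  # P*
            self.u = nn.Parameter(torch.ones(num_tasks, prompt_len))  # per-task column factor
            self.v = nn.Parameter(torch.ones(num_tasks, dim))         # per-task row factor

        def forward(self, task_id: int) -> torch.Tensor:
            w = torch.outer(self.u[task_id], self.v[task_id])  # rank-one task matrix
            return self.shared * w  # Hadamard product: P_k = P* ∘ W_k

    prompts = DecomposedPrompt(num_tasks=8)
    soft_prompt = prompts(task_id=3)  # prepended to the input embeddings of task 3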

Entity Tracking via Effective Use of Multi-Task Learning Models


Janvijay Singh, Fan Bai, Zhen Wang
[EACL 2023] The 17th Conference of the European Chapter of the Association for Computational Linguistics (Main)
PDF / Code / Slides / Poster

How can multi-task knowledge from pre-training be transferred to niche downstream tasks, such as entity tracking in procedural text? We show that simply fine-tuning T5 with a specialized QA prompt and task-specific decoding reaches state-of-the-art performance.
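
As a rough illustration of the recipe (the template below is hypothetical, not the exact prompt from the paper), entity tracking can be cast as text-to-text QA for T5:

    # Hypothetical QA-style input for T5; the real template and decoding
    # constraints differ in detail.
    question = "Where is the butter at step 2?"
    context = "Step 1: melt the butter in a pan. Step 2: pour it over the popcorn."
    t5_input = f"question: {question} context: {context}"
    print(t5_input)
    # Task-specific decoding then restricts generation to candidate
    # locations/states that actually appear in the procedure.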

2022

Coherence Boosting: When Your Pretrained Language Model is Not Paying Enough Attention


Nikolay Malkin, Zhen Wang, Nebojsa Jojic
[ACL 2022] The 60th Annual Meeting of the Association for Computational Linguistics
PDF / Code / Slides / Poster (Long Paper, Oral Presentation)

We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present Coherence Boosting, an inference procedure that increases an LM's focus on the long context, yielding large improvements on NLG and NLU tasks.
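
At decoding time, Coherence Boosting is a simple log-linear contrast between two predictions; here is a minimal sketch (the lm_logits callable is a hypothetical stand-in for a language model's next-token logits):

    def coherence_boosted_logits(lm_logits, full_ids, short_ids, alpha=0.5):
        """Boost the full context against a premature (truncated) context:
        (1 + alpha) * logits(full) - alpha * logits(short), with alpha > 0."""
        return (1 + alpha) * lm_logits(full_ids) - alpha * lm_logits(short_ids)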

Knowledge Transfer between Structured and Unstructured Sources for Complex Question Answering


Lingbo Mo*, Zhen Wang*, Jie Zhao, Huan Sun
[SUKI@NAACL 2022] NAACL 2022 Workshop on Structured and Unstructured Knowledge Integration
PDF / Code / Slides / Poster (*Equal contribution)

We study knowledge transfer for multi-hop reasoning between structured (knowledge base) and unstructured (text corpus) knowledge sources. We design SimultQA, which unifies KBQA and TextQA systems, and leverage it to study how reasoning transfers between the two knowledge sources.

2021

Bootstrapping a User-Centered Task-Oriented Dialogue System


Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Lingbo Mo, Samuel Stevens, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
[Alexa Prize TaskBot Challenge] Proceedings of the 1st Alexa Prize TaskBot Challenge (Alexa Prize 2021)
PDF / Third-place honor in the TaskBot Finals!

We build TacoBot, a task-oriented dialogue system for the inaugural Alexa Prize TaskBot Challenge that assists users in multi-step cooking and home improvement tasks. We propose several data augmentation methods, such as GPT-3 simulation, to bootstrap neural dialogue systems into new domains and make them more robust to noisy user initiatives.
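
For a flavor of the GPT-3 simulation idea (the prompt wording and model choice below are illustrative, not our exact setup):

    import openai  # assumes OPENAI_API_KEY is set; legacy completions API

    prompt = (
        "A user is working through a recipe with a voice cooking assistant.\n"
        "Intent: go to the next step.\n"
        "One natural user utterance expressing this intent:"
    )
    resp = openai.Completion.create(model="text-davinci-002", prompt=prompt,
                                    max_tokens=24, temperature=0.9, n=5)
    for choice in resp.choices:
        print(choice.text.strip())  # paraphrases to augment NLU training data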

Modeling Context Pair Interaction for Pairwise Tasks on Graphs


Zhen Wang, Bo Zong, Huan Sun
[WSDM 2021] The 14th ACM International Conference on Web Search and Data Mining
PDF / Code / Slides / Poster (Long Paper, Online Presentation)

We propose to explicitly model context interactions for pairwise prediction tasks on graphs from two complementary perspectives: node-centric and pair-centric. We also propose to pre-train pair embeddings to facilitate the pair-centric model.

2020

Rationalizing Medical Relation Prediction from Corpus-level Statistics


Zhen Wang, Jennifer Lee, Simon Lin, Huan Sun
[ACL 2020] The 58th Annual Meeting of the Association for Computational Linguistics
PDF / Code / Slides / Poster / Video (Long Paper, Online Presentation)

We propose a self-interpretable framework that rationalizes neural relation predictions based on corpus-level statistics. The framework is inspired by cognitive theories of recall and recognition and provides structured knowledge triplets as rationales.

Graph Embedding on Biomedical Networks: Methods, Applications, and Evaluations


Xiang Yue, Zhen Wang, Jingong Huang, Srinivasan Parthasarathy, Soheil Moosavinasab, Yungui Huang, Simon Lin, Wen Zhang, Ping Zhang, Huan Sun
[Bioinformatics] Volume 36, Issue 4, 15 February 2020, Pages 1241-1251
PDF / Code / Slides / Poster

We benchmark 11 representative graph embedding methods on 5 important biomedical tasks. We verify the effectiveness of recent graph embedding methods and provide general guidelines for their usage.

2019

SurfCon: Synonym Discovery on Privacy-Aware Clinical Data


Zhen Wang, Xiang Yue, Soheil Moosavinasab, Yungui Huang, Simon Lin, Huan Sun
[KDD 2019] The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
PDF / Code / Slides / Poster (Research Track, Long Paper, Oral Presentation)

We propose to discover structured knowledge, namely synonyms, from a privacy-aware clinical corpus and present a novel framework that leverages both surface form and context information to find out-of-distribution synonyms.
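
Schematically (placeholder encoders below, not SurfCon's exact networks), a candidate pair is scored from two complementary views:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def synonym_score(term_a, term_b, surface_enc, context_enc, w=0.5):
        """surface_enc / context_enc are placeholder embedding functions: one
        encodes a term's character n-grams, the other its corpus contexts."""
        s_surface = cosine(surface_enc(term_a), surface_enc(term_b))
        s_context = cosine(context_enc(term_a), context_enc(term_b))
        return w * s_surface + (1 - w) * s_context  # blend the two views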

Before 2019

A Comprehensive Study of StaQC for Deep Code Summarization


Jayavardhan Reddy Peddamail, Ziyu Yao, Zhen Wang, Huan Sun
[KDD 2018 Deep Learning Day] The 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
PDF / Code / Slides / Poster (Spotlight)

We examine three popular datasets mined from Stack Overflow on the code summarization task and show that StaQC (Stack Overflow Question-Code pairs) helps achieve substantially better results.

Hessian Regularized Sparse Coding for Human Action Recognition


Weifeng Liu, Zhen Wang, Dapeng Tao, Jun Yu
[MMM 2015] The 21st International Conference on Multimedia Modeling
PDF / Code / Slides / Poster / BibTeX

We propose Hessian regularized sparse coding (HessianSC) for action recognition. HessianSC preserves the local geometry of the data and steers the sparse codes to vary linearly along the manifold of the data distribution.
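
In the spirit of manifold-regularized sparse coding (generic notation, not necessarily the paper's exact formulation), the objective augments the usual reconstruction and sparsity terms with a Hessian energy penalty:

    \min_{D,Z}\ \|X - DZ\|_F^2 + \lambda \sum_i \|z_i\|_1 + \gamma\, \mathrm{tr}\!\left(Z B Z^\top\right)

where B is the Hessian energy matrix estimated from the data. Unlike a graph-Laplacian penalty, the Hessian penalty vanishes on functions that vary linearly along the manifold, which is what allows the codes to extrapolate.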

Honors and Awards

  • Third-Place Honor, Inaugural Alexa Prize TaskBot Challenge, 2022
  • Graduate Research Award, CSE, OSU, 2022
  • Graduate Student Research Poster Award (Top 5), CSE, OSU, 2021
  • SIGIR Student Travel Grant, 2021
  • Rising Stars in Data Science, Center for Data and Computing (CDAC), University of Chicago, January 2021
  • SIGKDD Student Travel Award, 2019
  • China Scholarship Council (CSC) Scholarship for a fully funded visiting program at Polytech Nice Sophia, Nice, France, 2015
  • National Scholarship, China, 2014
  • Soong Ching Ling Foundation (SCLF) Scholarship, China, 2013
  • National Scholarship for Encouragement, China, 2012

Services

    Program Committee Member:
    • ACL ARR (Oct'21, Nov'21, Jan'22, Apr'22, Sep'22, Oct'22, Dec'22)
    • SUKI 2022 Workshop at NAACL 2022
    • EMNLP 2021, 2022
    • ACL 2021, 2023
    • NAACL 2021
    • KDD 2023
    • AAAI 2023
    • NLPCC (2020, 2021, 2022)
    External Reviewer:
    • KDD (2019, 2020), ACL 2018, ICDM 2018


Source code from Leonid Keselman, design and inspiration from Jon Barron and Dongkuan.