Angelica Chen

angelica[dot]chen[at]nyu.edu

View My GitHub Profile

Selected Papers | Invited Talks | Google Scholar | Twitter

Hi! I’m a PhD student at the NYU Center for Data Science in the Machine Learning for Language group, advised by Kyunghyun Cho. I’m primarily interested in deep learning models for language, especially learning from rich feedback (e.g., natural language feedback and interactions with the environment) and understanding and evaluating how large language models (LLMs) learn. Recently, I’ve also become interested in the use of LLMs for biological applications.

I have worked as a student researcher at Google Research on streaming models and at Google Brain on evolution with LLMs. Prior to NYU, I worked as a software engineer (SWE) at Google and graduated with high honors in Computer Science from Princeton. Sebastian Seung advised my senior thesis at Princeton, for which I received an Outstanding Computer Science Thesis award.

Outside of my research, I enjoy running and baking more pastries than I can feasibly eat. I also volunteer as a rape and domestic violence crisis counselor/victim advocate for the NYC Crime Victims Treatment Center and as a crisis counselor for Crisis Text Line.

Selected Papers


My work falls largely into three directions: understanding LLM training, improving how LLMs learn from feedback, and evaluating LLMs. For a more complete list of my papers, please see Semantic Scholar.

Understanding LLM Training

Preference Learning Algorithms Do Not Learn Preference Rankings
Preprint
Oral at ICML 2024 Workshop on Models of Human Feedback for AI Alignment (MHFAIA)
Angelica Chen, Sadhika Malladi, Lily H. Zhang, Xinyi Chen, Qiuyi Zhang, Rajesh Ranganath, Kyunghyun Cho.
[Arxiv] [GitHub]

Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
ICLR 2024 (Spotlight)
Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, Naomi Saphra.
[OpenReview] [Arxiv]

Latent State Models of Training Dynamics
Transactions on Machine Learning Research
Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho.
[Arxiv] [OpenReview]

Improving How LLMs Learn From Feedback

Playing Large Games with Oracles and AI Debate
Preprint
Xinyi Chen, Angelica Chen, Dean Foster, Elad Hazan.
[Arxiv] [GitHub]

EvoPrompting: Language Models for Code-Level Neural Architecture Search
NeurIPS 2023 (poster)
Angelica Chen, David M. Dohan, David R. So.
[OpenReview] [Arxiv]

Learning from Natural Language Feedback
Transactions on Machine Learning Research
Angelica Chen*, Jérémy Scheurer*, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez.
[OpenReview] [GitHub]

Pretraining Language Models with Human Preferences
ICML 2023 (oral)
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Sam Bowman, Ethan Perez.
[Arxiv]

Teaching BERT to Wait: Balancing Accuracy and Latency for Streaming Disfluency Detection
NAACL 2022 (oral)
Angelica Chen, Victoria Zayats, Daniel David Walker, Dirk Ryan Padfield.
[ACL Anthology]

Evaluating LLMs

Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs
Transactions on Machine Learning Research
Angelica Chen, Jason Phang, Alicia Parrish, Vishakh Padmakumar, Chen Zhao, Samuel R. Bowman, Kyunghyun Cho.
[Arxiv] [OpenReview]

QuALITY: Question Answering with Long Input Texts, Yes!
NAACL 2022
Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, Samuel Bowman.
[ACL Anthology]

BBQ: A hand-built bias benchmark for question answering
ACL Findings 2022
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman.
[ACL Anthology]

Invited Talks