Angelica Chen

angelica[dot]chen[at]nyu.edu | Semantic Scholar profile

Hi! I’m a PhD student at the NYU Center for Data Science in the Machine Learning for Language group, advised by Kyunghyun Cho. I’m primarily interested in deep learning for natural language processing, especially learning from rich feedback (e.g., natural language feedback and interactions with the environment) and understanding and evaluating how language models (LMs) learn. Recently, I’ve also become interested in the use of LLMs for biological applications.

I previously worked as a student researcher at Google Research (Jun.–Dec. 2021) on streaming models for disfluency detection and at Google Brain (Jun. 2022–Mar. 2023) on LMs for neural architecture search (NAS). Prior to NYU, I worked as a software engineer at Google and graduated with high honors in Computer Science from Princeton University. Sebastian Seung advised my senior thesis at Princeton, for which I received an Outstanding Computer Science Thesis award.

Outside of my research, I enjoy running, baking more pastries than I can feasibly eat, and cooking. I also volunteer as a rape and domestic violence crisis counselor/victim advocate for the NYC Crime Victims Treatment Center (at Lenox Health Greenwich Village and Brookdale Hospital) and as a crisis counselor for Crisis Text Line.

Selected Papers

For a more complete list, see my Semantic Scholar profile. (* denotes equal contribution.)

Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
ICLR 2024 (Spotlight)
Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, and Naomi Saphra
[OpenReview] [arXiv]

Learning from Natural Language Feedback
Transactions on Machine Learning Research
Angelica Chen*, Jérémy Scheurer*, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, and Ethan Perez
[OpenReview] [GitHub]

EvoPrompting: Language Models for Code-Level Neural Architecture Search
NeurIPS 2023 (poster)
Angelica Chen, David M. Dohan, and David R. So
[OpenReview] [arXiv]

Latent State Models of Training Dynamics
Transactions on Machine Learning Research
Michael Y. Hu, Angelica Chen, Naomi Saphra, and Kyunghyun Cho
[OpenReview] [arXiv]

Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs
Transactions on Machine Learning Research
Angelica Chen, Jason Phang, Alicia Parrish, Vishakh Padmakumar, Chen Zhao, Samuel R. Bowman, and Kyunghyun Cho
[OpenReview] [arXiv]

Pretraining Language Models with Human Preferences
ICML 2023 (oral)
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez
[arXiv]

Teaching BERT to Wait: Balancing Accuracy and Latency for Streaming Disfluency Detection
NAACL 2022 (oral)
Angelica Chen, Victoria Zayats, Daniel David Walker, and Dirk Ryan Padfield
[ACL Anthology]

Training Language Models with Natural Language Feedback
ACL 2022 Workshop on Learning with Natural Language Supervision
Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez
[arXiv]

BBQ: A hand-built bias benchmark for question answering
ACL Findings 2022
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman
[arXiv]

Seasonal dynamics of bacterial meningitis: a time-series analysis
The Lancet Global Health
Juliette Paireau*, Angelica Chen*, Hélène Broutin, Bryan T. Grenfell, and Nicole E. Basta
[The Lancet]