angelica[dot]chen[at]nyu.edu | Semantic Scholar profile
Hi! I’m a PhD student at the NYU Center for Data Science in the Machine Learning for Language group, advised by Sam Bowman and Kyunghyun Cho. I’m broadly interested in deep learning for natural language understanding (NLU), code generation, model robustness, and improved evaluation metrics for NLU models. I have also previously worked as a student researcher at Google Research (Jun.–Dec. 2021) on streaming models for disfluency detection and at Google Brain (Jun. 2022–Mar. 2023) on LMs for neural architecture search (NAS).
Prior to NYU, I worked as a software engineer at Google and graduated with high honors in Computer Science from Princeton, where Sebastian Seung advised my senior thesis, for which I received an Outstanding Computer Science Thesis award.
Outside of my research, I enjoy running, baking more pastries than I can feasibly eat, and cooking. I also volunteer as a rape and domestic violence crisis counselor and victim advocate for the NYC Crime Victims Treatment Center (at Lenox Health Greenwich Village and Brookdale Hospital) and as a crisis counselor for Crisis Text Line.
For a more complete list, see my Semantic Scholar profile.
Chen, Angelica, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho and Ethan Perez. “Improving Code Generation by Training with Natural Language Feedback.” (2023). arXiv.
Chen, Angelica, David M. Dohan and David R. So. “EvoPrompting: Language Models for Code-Level Neural Architecture Search.” (2023). arXiv.
Korbak, Tomasz, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Sam Bowman and Ethan Perez. “Pretraining Language Models with Human Preferences.” (2023). arXiv.
Chen, Angelica, Victoria Zayats, Daniel David Walker and Dirk Ryan Padfield. “Teaching BERT to Wait: Balancing Accuracy and Latency for Streaming Disfluency Detection.” Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022). ACL Anthology.
Phang, Jason, Angelica Chen, William Huang and Samuel R. Bowman. “Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair.” Dynamic Adversarial Data Collection (DADC) Workshop at NAACL 2022. arXiv.
Scheurer, Jérémy, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho and Ethan Perez. “Training Language Models with Natural Language Feedback.” Association for Computational Linguistics Workshop on Learning with Natural Language Supervision (2022). arXiv.
Parrish, Alicia, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut and Sam Bowman. “BBQ: A hand-built bias benchmark for question answering.” Findings of the Association for Computational Linguistics (2022). arXiv.
Shaw, Peter, Philip Massey, Angelica Chen, Francesco Piccinno and Yasemin Altun. “Generating Logical Forms from Graph Representations of Text and Entities.” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019). ACL Anthology.
Paireau, Juliette*, Angelica Chen*, Hélène Broutin, Bryan T. Grenfell and Nicole E. Basta. “Seasonal dynamics of bacterial meningitis: a time-series analysis.” The Lancet Global Health (2016). Full Text. (*equal contribution)