I’m a Data Science PhD student at New York University and a member of the ML² Group at CILVR, co-advised by Sam Bowman and Kyunghyun Cho. I’m broadly interested in deep learning and natural language understanding. These days, I mainly work on applying transfer learning and multi-task learning methods to NLP problems, and on analyzing these methods to understand why and when they work or fail.
Previously, I interned at Facebook AI Research and Grammarly.
Prior to NYU, I developed information retrieval systems at the Institute of High Performance Computing in Singapore. Before that, I earned my bachelor’s degree in Computer Science at Nanyang Technological University.
* equal contribution
Online Hyperparameter Tuning for Multi-Task Learning
Phu Mon Htut*, Owen Marschall*, Samuel R. Bowman, Douwe Kiela, Edward Grefenstette, Cristina Savin, Kyunghyun Cho.
The Workshop on Continual Learning, ICML. 2020. (Extended abstract)
[Paper] [Slides] (This work is preliminary)
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Yada Pruksachatkun*, Jason Phang*, Haokun Liu*, Phu Mon Htut*, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman.
ACL. 2020.
English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too
Jason Phang*, Iacer Calixto*, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Samuel R. Bowman.
AACL-IJCNLP. 2020.
jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models
Yada Pruksachatkun*, Phil Yeres*, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, Samuel R. Bowman.
ACL. 2020. (Demo track)
Do Attention Heads in BERT Track Syntactic Dependencies?
Phu Mon Htut*, Jason Phang*, Shikha Bordia*, Samuel R. Bowman.
Natural Language, Dialog and Speech (NDS) Symposium, The New York Academy of Sciences. 2019. (Extended abstract)
[Paper] [Poster] [Blog]
Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs
Alex Warstadt*, Yu Cao*, Ioana Grosu*, Wei Peng*, Hagen Blix*, Yining Nie*, Anna Alsop*, Shikha Bordia*, Haokun Liu*, Alicia Parrish*, Sheng-Fu Wang*, Jason Phang*, Anhad Mohananey*, Phu Mon Htut*, Paloma Jeretic*, Samuel R. Bowman.
EMNLP. 2019.
The Unbearable Weight of Generating Artificial Errors for Grammatical Error Correction
Phu Mon Htut, Joel Tetreault.
The Workshop on Innovative Use of NLP for Building Educational Applications (BEA), ACL. 2019.
Generalized Inner Loop Meta-Learning
Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala.
arXiv preprint. 2019.
Grammar Induction with Neural Language Models: An Unusual Replication
Phu Mon Htut, Kyunghyun Cho, Samuel R. Bowman.
The Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), EMNLP. 2018. (Extended abstract)
[Paper] [arXiv] [Code/Output-Parses]
- DS-GA 1012: Natural Language Understanding and Computational Semantics, NYU (Spring 2020)
- Sequence-to-Sequence Learning Tutorial, African Master’s Program in Machine Intelligence (AMMI) (2020)
- DS-GA 1011: Natural Language Processing with Representation Learning, NYU (Fall 2018)