Seunghak Yu

Short Bio

I am a postdoctoral associate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). I received my B.S. and M.S. degrees in electrical engineering and my Ph.D. degree from Korea University, South Korea. I previously held research positions at Seoul National University and Samsung Research, South Korea. My research interests include machine learning and its application to natural language processing. Since joining MIT, I have been working on fake news detection and natural language understanding for multimodal systems.

News

Jan.2021: I joined Amazon Alexa AI as an applied scientist.
Jul.2020: Our paper has been selected as Best Demo Paper (Honorable Mention) at ACL 2020.
Apr.2020: Our paper has been accepted to IJCAI 2020.
Apr.2020: Our paper has been accepted to ACL 2020.
Sep.2019: One paper has been accepted to NeurIPS 2019 Joint Workshop on AI for Social Good.
Aug.2019: One paper has been accepted to EMNLP 2019.
Feb.2019: Our paper has been accepted to CVPR 2019.
Jan.2019: I joined MIT CSAIL as a postdoctoral associate.
Oct.2018: I received Gold (1/179), Silver (2/179), and Bronze (4/179) prizes at Samsung Best Paper Award 2018.
Aug.2018: Three papers have been accepted to EMNLP 2018.
Jun.2018: Our ConZNet is at the top of the MS MARCO leaderboard.
Jun.2018: One paper has been accepted to COLING 2018.
May.2018: One paper has been accepted to ACL 2018 workshop on MRQA.
Jun.2017: One paper has been accepted to EMNLP 2017 workshop on SCLeM.
May.2017: One paper has been accepted to Interspeech 2017.
Oct.2016: One paper has been accepted to NIPS 2016 workshop on DRL.

Publications

(see also Google Scholar page)

A Survey on Computational Propaganda Detection
Giovanni Da San Martino, Stefano Cresci, Alberto Barrón-Cedeño, Seunghak Yu, Roberto Di Pietro, and Preslav Nakov
in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20) Survey Track, Yokohama, Japan, January 2021.

Prta: A System to Support the Analysis of Propaganda Techniques in the News - [Best Demo Paper (Honorable Mention)]
Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño, and Preslav Nakov
in Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Online, July 2020.

Fine-Grained Analysis of Propaganda in News Articles
Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov and Preslav Nakov
in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Hong Kong, November 2019.

Factor Graph Attention
Idan Schwartz, Seunghak Yu, Tamir Hazan and Alexander Schwing
in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, June 2019.

MemoReader: Large-Scale Reading Comprehension through Neural Memory Controller
Seohyun Back, Seunghak Yu*, Sathish Indurthi, Jihie Kim and Jaegul Choo*
(*: co-corresponding authors)
in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium, November 2018.

Cut to the Chase: A Context Zoom-in Network for Reading Comprehension
Sathish Indurthi, Seunghak Yu*, Seohyun Back and Heriberto Cuayáhuitl
(*: corresponding author)
in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium, November 2018.

Supervised Clustering of Questions into Intents for Dialog System Applications
Iryna Haponchyk, Antonio Uva, Seunghak Yu, Olga Uryupina and Alessandro Moschitti
in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium, November 2018.

On-Device Neural Language Model based Word Prediction
Seunghak Yu*, Nilesh Kulkarni*, Haejun Lee and Jihie Kim
in Proceedings of the International Conference on Computational Linguistics (COLING): System Demonstrations, Santa Fe, USA, August 2018.

A Multi-Stage Memory Augmented Neural Network for Machine Reading Comprehension
Seunghak Yu, Sathish Indurthi, Seohyun Back and Haejun Lee
in Proceedings of the ACL Workshop on Machine Reading for Question Answering, Melbourne, Australia, July 2018.

Syllable-level Neural Language Model for Agglutinative Language
Seunghak Yu*, Nilesh Kulkarni*, Haejun Lee and Jihie Kim
in Proceedings of the EMNLP Workshop on Subword and Character LEvel Models in NLP, Copenhagen, Denmark, September 2017.

Deep Reinforcement Learning of Dialogue Policies with Less Weight Updates
Heriberto Cuayáhuitl and Seunghak Yu
in Proceedings of the Interspeech, Stockholm, Sweden, August 2017.

Scaling Up Deep Reinforcement Learning for Multi-Domain Dialogue Systems
Heriberto Cuayáhuitl, Seunghak Yu, Ashley Williamson and Jacob Carse
in Proceedings of the International Joint Conference on Neural Networks (IJCNN), Anchorage, Alaska, May 2017.

Deep Reinforcement Learning for Multi-Domain Dialogue Systems
Heriberto Cuayáhuitl, Seunghak Yu, Ashley Williamson and Jacob Carse
Thirtieth Annual Conference on Neural Information Processing Systems (NIPS) Workshop on Deep Reinforcement Learning, Barcelona, Spain, December 2016.

Ensemble learning can significantly improve human microRNA target prediction
Seunghak Yu, Juho Kim, Hyeyoung Min and Sungroh Yoon
Methods, 69(3), 220-229, 2014.