Christopher Manning


Fireside Chat with Christopher Manning

Aug 8, 2019

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a fellow of ACM, AAAI, and the Association for Computational Linguistics. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008). His recent work has concentrated on probabilistic approaches to natural language processing (NLP) problems and computational semantics, including topics such as statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.
 

[SAIF2020] Day2: Natural Language Processing - Christopher Manning | Samsung

Nov 16, 2020

Session 1. Natural Language Processing

“Natural Language Understanding and Conversational AI”

Natural language processing (NLP) has made dramatic advances over the last three years, ranging from deep generative models for text-to-speech, such as WaveNet, to the extensive deployment of deep contextual language models, such as BERT. Pre-training with models like BERT has significantly raised performance on almost all NLP tasks, allowed much better domain adaptation, and brought us human-level performance on tasks like answering straightforward factual questions. New neural language models have also brought much more fluent language generation. On the one hand, we should not be too impressed by these linguistic savants: things like understanding the consequences of events in a story or performing common-sense reasoning remain out of reach. On the other hand, I will discuss how we now live in an era with many good commercial uses of NLP, where much of the heavy lifting has already been done in the construction of large but downloadable models. I will present some of our work on understanding how these models learn to be so proficient, and on how we can build new types of pre-trained models that are much more compute-efficient. Finally, I will turn to conversational agents, where neural models can produce accurate task-based dialog agents and more effective open-domain social bots.
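To illustrate the point about large but downloadable pre-trained models doing the heavy lifting, here is a minimal sketch (not from the talk) of answering a straightforward factual question with an off-the-shelf extractive QA model. It assumes the Hugging Face transformers library is installed and that the default pipeline model can be downloaded; the question and context strings are illustrative.

```python
# Minimal sketch: using a downloadable pre-trained model for factual question answering.
# Assumes the Hugging Face `transformers` library and network access to fetch model weights.
from transformers import pipeline

# Loads a default pre-trained extractive QA model; no task-specific training is needed here.
qa = pipeline("question-answering")

context = (
    "Christopher Manning is a Professor of Computer Science and Linguistics "
    "at Stanford University, where he works on natural language processing."
)
result = qa(question="Where is Christopher Manning a professor?", context=context)
print(result["answer"])  # e.g. "Stanford University"
```

The pipeline wraps tokenization, the pre-trained encoder, and span prediction, which is the sense in which the heavy lifting is already done before any application-specific work begins.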
 