Christopher D. Manning
Biography
Christopher Manning is the leading researcher in the increasingly impactful area of natural language processing (NLP) and computational linguistics. He co-authored the definitive textbook on statistical NLP, and his role in the inception of that field cannot be overstated. Manning has made a remarkably diverse set of contributions to nearly every aspect of NLP, from syntactic parsing and part-of-speech tagging to information extraction, semantic analysis, machine translation, sentiment analysis, text summarization, and natural language inference. He was first a leader in statistical NLP and then became an early adopter and promoter of, and contributor to, the neural “deep learning” methods that revolutionized the field once more and led to the current wave of achievements in language understanding.

Early on, Manning, together with his graduate student Dan Klein, made important contributions to syntactic problems such as part-of-speech tagging and parsing; this work is widely acknowledged as a breakthrough in unsupervised language learning. Manning and his students also produced the Stanford CoreNLP toolkit, which became the standard NLP software package: for years, almost everyone in the field relied on the Stanford parser for basic syntactic analysis.

Manning then became one of the earliest proponents of deep learning for NLP. Following the development of the popular word2vec technique, Manning and his students developed an improved word embedding model, GloVe, which became a standard approach to lexical semantics. His work on neural attention contributed to the eventual development of the transformer self-attention framework that revolutionized the field and led to today’s large-scale language models, such as BERT and GPT, which are now changing the world.
Manning is the Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science, Stanford University, Stanford, California, USA.