CHE Wanxiang

Professor
School of Computer Science and Technology
Harbin Institute of Technology, P. R. China

Dr. Wanxiang Che is a professor at the School of Computer Science and Technology, Harbin Institute of Technology (HIT), and was a visiting scholar at the Stanford University NLP Group in 2012. His main research area is Natural Language Processing (NLP). He currently leads a National Natural Science Foundation of China project, a national 973 project, and a number of other research projects. He has published more than 50 papers in high-level journals and conferences; among them, a paper published at AAAI 2013 received an Outstanding Paper Honorable Mention Award. According to Google Scholar, his papers have been cited more than 2,500 times and his h-index is 27. He and his team have achieved strong results in a number of international shared-task evaluations, including first place at CoNLL 2018 and 2019. The Language Technology Platform (LTP), an open-source Chinese NLP system developed under his leadership, has been licensed to more than 600 institutions and individuals, including Baidu and Tencent. He received the first prize of the Technological Progress Award of Heilongjiang Province in 2016, Google Focused Research Awards in 2015 and 2016, the first prize of the Hanwang Youth Innovation Award, and the first prize of the Qian Weichang Chinese Information Processing Science and Technology Award in 2010.

Keynote Speech: A New Paradigm for Natural Language Processing—A Pre-trained Language Model Approach

Abstract

Language is the fundamental symbol system that distinguishes humans from animals. It exhibits unbounded semantic composition, high ambiguity, and continuous evolution. Accurate processing of natural language remains a formidable challenge for machines; it has become one of the main bottlenecks preventing greater breakthroughs in artificial intelligence and is known as "the jewel in the crown of artificial intelligence". In recent years, pre-trained language models built on ultra-large-scale raw corpora, represented by BERT and GPT, have flourished. By making full use of large models, big data, and large-scale computing, they have significantly improved performance on almost all natural language processing tasks. On some datasets, pre-trained language models are reported to have reached or exceeded human-level performance, making them a new paradigm for natural language processing. This talk will first review the evolution of pre-trained models, then introduce the latest research progress on pre-trained language models, especially our work on Chinese and multilingual pre-training, and finally offer an outlook on future development trends in the field of natural language processing.

The 6th International Symposium on Chinese Language and Discourse (6th ISCLD)

Conference Organizers

Department of Chinese Language and Literature, University of Macau
Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University