
Data Justice Lab Seminar Series: 

Towards Reliable and Inclusive Natural Language Processing

Mar 30, 2022


by Dr. Kai-Wei Chang
Assistant Professor, Department of Computer Science
University of California, Los Angeles

Bio

Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles. His research interests include designing robust machine learning methods for large and complex data and building fair and accountable language processing technologies for social-good applications. Kai-Wei has published broadly in natural language processing, machine learning, and artificial intelligence. His research has been widely cited and covered by news media such as Wired, NPR, and MIT Technology Review. His awards include a Sloan Research Fellowship (2021), the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), and the Okawa Research Grant Award (2018). Kai-Wei obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2015 and was a postdoctoral researcher at Microsoft Research in 2016.

Abstract

Natural Language Processing technologies have advanced dramatically in recent years and are now deployed in real-world applications that touch our daily lives. Despite their remarkable performance, recent studies have shown that these technologies are vulnerable to adversarial examples, make predictions based on spurious correlations, and risk aggravating the societal biases present in their training data. Without properly quantifying and reducing a model's reliance on such correlations, the broad adoption of these models may have the undesirable effect of magnifying prejudice or harmful implicit biases tied to sensitive demographic attributes. In this talk, I will take governing societal bias as a running example to demonstrate a collection of results that give us greater control over NLP systems, making them socially responsible and accountable. First, I will present our studies on examining and mitigating societal biases across a wide spectrum of language tasks. Then, I will describe how we can identify biased predictions made by a model. I will conclude the talk by discussing my other interdisciplinary research projects and future research plans.
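As a concrete illustration of the kind of bias probing the abstract alludes to (this sketch is not taken from the talk itself), one can ask a masked language model to fill in a pronoun for different occupations and compare the scores it assigns to "he" versus "she". The minimal Python example below assumes the Hugging Face transformers library and the bert-base-uncased model; the prompt template and occupation list are illustrative choices, not the speaker's method.

    # Minimal sketch: probe a masked language model for gender-occupation
    # associations using the Hugging Face fill-mask pipeline.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    for occupation in ["nurse", "engineer", "housekeeper", "surgeon"]:
        # Restrict candidate fills to the two pronouns and compare scores.
        predictions = fill_mask(f"[MASK] is a {occupation}.", targets=["he", "she"])
        scores = {p["token_str"]: p["score"] for p in predictions}
        print(f"{occupation:12s} he={scores.get('he', 0.0):.3f} she={scores.get('she', 0.0):.3f}")

A large, systematic gap between the pronoun scores across occupations is one simple symptom of the spurious demographic correlations the talk addresses.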
