Date/Time
Date(s) - 10/17/2016
3:00 pm - 4:30 pm

Location
E14-240, MIT



Abstract:
While many scan the foreseeable time horizon looking for killer robots with glowing eyes, or for amorphous superintelligent puppet masters, humanity has been quietly augmenting itself with artificial intelligence for decades if not centuries. To understand the long-term consequences of AI, we need first to better examine our present. Here, in joint work with Aylin Caliskan-Islam and Arvind Narayanan, I show that human-like semantic biases are present in the standard NLP tools GloVe and word2vec when applied to their standard Web-sourced corpora. We have replicated a spectrum of well-documented human biases as exposed by the Implicit Association Test and other well-known psychological studies. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral (as towards insects or flowers), problematic (as towards race or gender), or simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. I will describe these results, as well as their implications for both human and AI ethics, which I will argue should be considered as one.
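For readers unfamiliar with the method, below is a minimal sketch of the kind of embedding-association measurement the abstract refers to: a differential association score over word vectors, in the style of the Caliskan-Islam, Bryson, and Narayanan work. This is an illustration, not the authors' code; the random placeholder vectors and the example word-list labels are assumptions, and in practice each vector would be looked up in a pretrained GloVe or word2vec model.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much more similar word vector w is to attribute set A than to set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def differential_association(X, Y, A, B):
    # Differential association of two target sets (e.g. flower vs. insect
    # words) with two attribute sets (e.g. pleasant vs. unpleasant words).
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Placeholder vectors stand in for rows of a pretrained GloVe/word2vec model.
rng = np.random.default_rng(0)
X = [rng.standard_normal(50) for _ in range(8)]  # e.g. flower words
Y = [rng.standard_normal(50) for _ in range(8)]  # e.g. insect words
A = [rng.standard_normal(50) for _ in range(8)]  # e.g. pleasant words
B = [rng.standard_normal(50) for _ in range(8)]  # e.g. unpleasant words
print(differential_association(X, Y, A, B))

On real embeddings, a large positive score would indicate that the first target set is associated with the first attribute set in the same direction the Implicit Association Test finds in humans.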
Bio:
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, currently visiting Princeton’s Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. She is best known for her work in systems AI and AI ethics, both of which she began during her PhD in the 1990s, but she and her colleagues publish broadly, across biology, anthropology, sociology, philosophy, cognitive science, and politics. She is currently collaborating on a project funded by Princeton’s University Center for Human Values, “Public Goods and Artificial Intelligence”, with Alin Coman of Princeton Psychology and Mark Riedl of Georgia Tech. This project includes both basic research in human sociality and experiments in technological interventions. Other current research includes work on understanding the causality behind the link between wealth inequality and political polarization, work on transparency in AI systems, and work on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence group.

Sponsor(s):
MIT Media Lab Scalable Cooperation Group