The pace at which artificial intelligence (AI) is advancing is remarkable. As we look to the next few years, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. For AI to benefit everyone, it is crucial that it be developed and used in ways that warrant people’s trust. Conversations around topics like fairness and privacy are important in the context of intelligent systems that increasingly make decisions that until now were made only by humans. These conversations are complex and context-dependent, and they require the engagement of societies globally.
After offering the “Why”, this talk will outline a set of principles for defining ethics and responsibility in the age of AI: the “What”. While principles are necessary, they alone are not enough. The hard and essential work begins when you endeavor to turn those principles into practices: the “How”. The building blocks for operationalizing responsible AI include tools and techniques, but also practices, governance mechanisms, and standards, as well as policy and regulatory interventions. AI ethics and responsibility is an evolving field, with people across industry, academia, government, and policy actively engaged, and the talk will summarize some of the recent developments in this space.

Key Takeaways: