Can Google’s new Explainable AI make it easier to understand artificial intelligence?

As researchers and insight professionals, we worry about unknown bias and whether AI algorithms reflect sufficiently diverse groups. Can Google’s new Explainable AI tools help build confidence in the outcomes and reduce risk when basing decisions on AI?

We talk to Tracy Frey, Director of Product Strategy and Operations, Cloud AI at Google, about Explainable AI and how it could help clarify what’s going on.

Tracy Frey, Director of Product Strategy and Operations, Cloud AI at Google

Tell us about the background to Explainable AI. What does it actually mean?

There’s huge potential for businesses that incorporate AI: it can make processes more efficient and create new opportunities for customers. However, it can be challenging to bring machine-learning (ML) models into production because many teams don’t fully understand the technology. When teams can’t explain a model’s behavior, or how and why it reached a particular conclusion, the result is often distrust and confusion. Explainable AI helps machine-learning teams ship high-quality, safe models that people can trust.

In addition, thinking through issues of fairness and unfair bias is a critical aspect of building great machine learning. It can be difficult, however, to know exactly what to look for and account for, and how to correct a model when needed. This is an exciting and active area of research, and Explainable AI, alongside other efforts, seeks to provide tools and information to help customers think through these challenging and important aspects of the technology.

How will it work and who will be able to use it?

Explainable AI is designed to help builders develop safe and inclusive machine learning models and deploy them with confidence. The biggest benefit is that now data scientists can not only understand the AI models they’re deploying but also know exactly where to make any necessary adjustments. 

Here are some customer examples from our blog:

Sky: “Understanding how models arrive at their decisions is critical for the use of AI in our industry. We are excited to see the progress made by Google Cloud to solve this industry challenge. With tools like What-If, and feature attributions in AI Platform, our data scientists can build models with confidence, and provide human-understandable explanations.” —Stefan Hoejmose, Head of Data Journeys, Sky

Wellio: “Introspection of models is essential for both model development and deployment. Oftentimes we tend to focus too much on predictive skill when in reality it’s the more explainable model that is usually the most useful, and more importantly, the most trusted. We are excited to see these new tools made by Google Cloud, supporting both our data scientists and also our models’ customers.” —Erik Andrejko, CTO, Wellio 

Explainable AI consists of tools and frameworks for deploying interpretable and inclusive ML models. AI Explanations is available for models hosted on AutoML Tables and Cloud AI Platform Prediction. You can pair AI Explanations with our What-If Tool to get a complete picture of your model’s behavior; check out this blog post for more information. To start making your own AI deployments more understandable, please visit: https://cloud.google.com/explainable-ai
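To make that pairing concrete, here is a minimal sketch of pointing the What-If Tool at a model served on AI Platform from a notebook. It assumes the witwidget and tensorflow packages are installed and that a tabular churn model has already been deployed; the project, model, version, and feature names (my-project, my-model, churned, and so on) are placeholders for this illustration, not details from the interview.

```python
# Minimal sketch: load a few tabular examples into the What-If Tool and point it
# at a model served on Cloud AI Platform. All names below are placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(feature_dict):
    """Build a tf.Example proto from a flat dict of numeric features."""
    features = {
        name: tf.train.Feature(float_list=tf.train.FloatList(value=[float(value)]))
        for name, value in feature_dict.items()
    }
    return tf.train.Example(features=tf.train.Features(feature=features))

examples = [
    make_example({"age": 34, "income": 48000, "tenure_months": 12, "churned": 0}),
    make_example({"age": 51, "income": 72000, "tenure_months": 40, "churned": 1}),
]

config_builder = (
    WitConfigBuilder(examples)
    # Predictions (and, when enabled on the deployed version, feature
    # attributions) come from the hosted model rather than a local one.
    .set_ai_platform_model("my-project", "my-model", "v1")
    .set_target_feature("churned")
)
WitWidget(config_builder, height=600)  # renders inside a Jupyter or Colab notebook
```

From there the widget lets you edit feature values and watch the prediction change, which is the "complete picture" of model behavior referred to above.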

Will it help companies test and customize their own machine learning models?

Yes. Explainable AI includes AI Explanations within Google Cloud AI solutions such as AutoML Tables and AI Platform, which help organizations understand the feature attributions of deployed models. Overall, it further simplifies model governance and optimization through continuous evaluation of models managed on AI Platform, and allows organizations to create and implement trustworthy ML algorithms.
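As a rough illustration of what "feature attributions of deployed models" can look like in practice, the sketch below requests explanations from an AI Platform model version deployed with AI Explanations enabled, using the generic Google API Python client against the service's explain method. The project, model, version, and instance fields are placeholders, and the exact response layout depends on the model's explanation metadata, so treat this as a sketch rather than a definitive recipe.

```python
# Rough sketch: fetch a prediction plus feature attributions from a model
# version deployed on AI Platform with an explanation method configured.
# Project, model, version, and feature names are placeholders.
import json
from googleapiclient import discovery

PROJECT = "my-project"
MODEL = "my-model"
VERSION = "v1"

service = discovery.build("ml", "v1")
name = f"projects/{PROJECT}/models/{MODEL}/versions/{VERSION}"

# Instances must match the schema declared in the version's explanation metadata.
body = {"instances": [{"age": 42, "income": 55000, "tenure_months": 18}]}

response = service.projects().explain(name=name, body=body).execute()

# Each returned explanation pairs a prediction with per-feature attribution
# scores; the precise field names depend on the deployed model, so dump it raw.
print(json.dumps(response.get("explanations", response), indent=2))
```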

What kinds of decisions will it impact and how will it help?

While there is no silver bullet for the problem of unfair bias, Explainable AI aims to provide tools that builders can use to apply responsible practices throughout the ML lifecycle. It’s important for organizations to understand why their AI and ML models are suggesting decisions or making predictions, and whether bias is present.

For example, Google developed the What-If Tool to include a Fairness & Performance tab that allows customers to slice datasets and see how the model behaves on different subsets. It also offers built-in optimization strategies, such as demographic parity and equal opportunity, that address many types of known bias. Our goal is to help data scientists implement AI that reduces the risk of bias.
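To show what this kind of slicing means in plain code, the sketch below computes two of the quantities mentioned above per subgroup: the positive prediction rate (what demographic parity compares) and the true positive rate (what equal opportunity compares). This is an illustration of the underlying idea, not the What-If Tool's own implementation, and the records, labels, and group attribute are made up.

```python
# Illustrative sketch of subgroup slicing: compare the positive prediction rate
# (demographic parity) and true positive rate (equal opportunity) across groups.
# The records below are made-up placeholders.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

by_group = defaultdict(list)
for group, y_true, y_pred in records:
    by_group[group].append((y_true, y_pred))

for group, rows in sorted(by_group.items()):
    # Demographic parity compares how often each group receives a positive prediction.
    positive_rate = sum(pred for _, pred in rows) / len(rows)
    # Equal opportunity compares true positive rates among the truly positive cases.
    positives = [(y, p) for y, p in rows if y == 1]
    tpr = sum(p for _, p in positives) / len(positives) if positives else float("nan")
    print(f"group {group}: positive prediction rate = {positive_rate:.2f}, TPR = {tpr:.2f}")
```

Large gaps between groups on either number are the kind of signal the Fairness & Performance tab is designed to surface interactively.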

We’re seeing customers across industries use Explainable AI to improve and understand their ML algorithms. For example, for Wellio, a meal-planning app, introspection of models is essential for both model development and deployment. The company can focus on why its models make certain decisions, rather than on predictive skill alone, which supports both its data scientists and its customers.

Ethical and privacy experts are now discussing good governance principles for applying AI. Who should be managing this?

Sundar Pichai, Google CEO, recently explained how Google is thinking about AI regulations and why the responsibility falls on companies. As he notes, Google, and other large companies, have a responsibility to make sure the technology they build is used for good and available to everyone. 

At Google, we’re committed to the responsible development of AI. The AI Principles are concrete standards and practices we’ve built that govern our research and product development and impact our business decisions.

Our principles hold us, and our customers, accountable for being socially beneficial, avoiding the creation or reinforcement of unfair bias, and upholding high standards of scientific excellence. In Cloud, we’ve created a set of internal governance processes that have helped us align our work with our AI Principles, and we’ve learned a tremendous amount that has helped us refine and grow those processes over time.

We believe this work is intimately tied to the long-term success of AI as it is deployed in enterprises and is a core part of how we develop our products and solutions. 

Responsible AI is a shared responsibility, and Google is committed to sharing knowledge, research, tools, datasets, and other resources with the larger community. We continue to work with a range of stakeholders, drawing on multidisciplinary approaches, and we have used external human rights impact assessments where it makes sense. We share some of our current work and recommended practices at: https://ai.google/responsibilities/responsible-ai-practices/
