Fujitsu & Hokkaido University Develop “Explainable AI” Technology

Fujitsu Laboratories Ltd. and Hokkaido University announced the development of a new technology based on the principle of “explainable AI” that, given an AI’s judgment about data such as medical checkup results, automatically presents users with the steps needed to achieve a desired outcome.

“Explainable AI” represents an area of increasing interest in the field of artificial intelligence and machine learning. While AI technologies can automatically make decisions from data, “explainable AI” also provides individual reasons for these decisions – this helps avoid the so-called “black box” phenomenon, in which AI reaches conclusions through unclear and potentially problematic means.

While certain existing techniques can suggest hypothetical changes that would lead to a better outcome when an individual item receives an undesirable result, they do not present concrete, actionable steps for achieving it.

For example, when an AI that judges a subject’s health status determines that a person is unhealthy, the new technology can first explain the reason for that outcome from health-examination data such as height, weight, and blood pressure. It can then offer the user targeted suggestions on the best way to become healthy, identifying interactions among a large number of complicated checkup items from past data and presenting specific steps to improvement that take feasibility and difficulty of implementation into account.
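
As a rough illustration of the general idea behind such counterfactual suggestions (a sketch, not the actual algorithm developed by Fujitsu and Hokkaido University), the example below trains a toy classifier on made-up checkup-style data, enumerates candidate attribute changes, and keeps those the model scores as low-risk, ranked by an assumed effort cost. All feature names, thresholds, and effort weights are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical checkup features: [weight (kg), muscle mass (kg), blood pressure (mmHg)]
rng = np.random.default_rng(0)
X = rng.normal([70, 25, 125], [10, 5, 15], size=(500, 3))
# Toy label: "high risk" (1) when blood pressure is high and muscle mass is low
y = ((X[:, 2] > 130) & (X[:, 1] < 25)).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactuals(x, model, deltas, effort_weights):
    """Enumerate candidate changes and keep those the model scores as low-risk,
    ranked by a (hypothetical) weighted effort cost."""
    results = []
    for delta in deltas:
        x_new = x + delta
        if model.predict([x_new])[0] == 0:          # 0 = low-risk outcome
            cost = np.abs(delta) @ effort_weights   # assumed effort of this change
            results.append((cost, delta, x_new))
    return sorted(results, key=lambda r: r[0])

x_unhealthy = np.array([80.0, 20.0, 145.0])
candidate_deltas = [np.array([dw, dm, dbp])          # a coarse grid of candidate changes
                    for dw in (-5, 0, 5) for dm in (0, 3, 6) for dbp in (-20, -10, 0)]
effort = np.array([1.0, 2.0, 0.5])                   # assumed per-unit effort per attribute

for cost, delta, x_new in counterfactuals(x_unhealthy, model, candidate_deltas, effort)[:3]:
    print(f"change {delta} -> predicted low risk (effort {cost:.1f})")
```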

Ultimately, this new technology offers the potential to improve the transparency and reliability of decisions made by AI, allowing more people in the future to interact with AI-based technologies with a sense of trust and peace of mind. Further details will be presented at AAAI-21, the Thirty-Fifth AAAI Conference on Artificial Intelligence, which opens on Tuesday, February 2.

Developmental Background

Deep learning technologies are now widely used in AI systems that perform advanced tasks such as face recognition and autonomous driving, automatically making various decisions from large amounts of data with what is essentially a black-box predictive model. Going forward, however, ensuring the transparency and reliability of AI systems will be essential if AI is to make important decisions and proposals for society. This need has led to increased interest in and research into “explainable AI” technologies.

For example, in medical checkups, AI can determine the risk of illness based on data like weight and muscle mass. Increasing attention is being paid to “explainable AI” that presents, in addition to the risk judgment itself, the attributes that served as the basis for that judgment.

Because the AI judges health risk to be high based on the attributes of the input data, changing the values of those attributes can, in principle, produce the desired result of a low health risk.
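
As a minimal sketch of this kind of attribute-level explanation (again an illustration of the general idea, not the specific technology announced here), the snippet below fits a linear model to made-up data and reports each attribute’s contribution as its coefficient times its deviation from the mean, a standard attribution for linear models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical checkup data: [weight (kg), muscle mass (kg)]
rng = np.random.default_rng(1)
X = rng.normal([70, 25], [10, 5], size=(400, 2))
y = (X[:, 0] - 1.5 * X[:, 1] > 35).astype(int)    # toy "high risk" rule
model = LogisticRegression().fit(X, y)

# Per-attribute contribution for one person: coefficient * deviation from the mean.
x = np.array([85.0, 22.0])
contributions = model.coef_[0] * (x - X.mean(axis=0))
for name, c in zip(["weight", "muscle mass"], contributions):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name}: {direction} the predicted risk (contribution {c:+.2f})")
```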

Issues

To achieve the desired result from an AI’s automated decision, it is necessary to present not only which attributes need to be changed, but also how those attributes can be changed with as little effort as possible.

In the case of medical checkups, suppose one wants to change the outcome of the AI’s decision from high risk to low risk. Increasing muscle mass alone may appear to require the least effort, but it is unrealistic to increase muscle mass without also changing one’s weight, so increasing weight and muscle mass together is the more realistic plan. In addition, there are many interactions between attributes such as weight and muscle mass, including causal relationships in which weight increases as muscle grows, and the total effort required depends on the order in which the attributes are changed. It is therefore necessary to present an appropriate order of changes. Because it is not obvious whether weight or muscle mass should be changed first to reach the desired state from the current one, finding an appropriate plan of change that accounts for the feasibility and order of changes among a large number of potential candidates remains challenging.
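
The toy calculation below, with made-up numbers, illustrates why the order matters when attributes interact causally. It assumes that each kilogram of muscle gained also adds a kilogram of body weight, so changing muscle mass first reduces the deliberate weight change still needed, while changing weight first leads to an overshoot that must later be corrected. The causal effect size and effort weights are assumptions for illustration only.

```python
# Hypothetical desired change and per-kg effort of changing each attribute.
TARGET = {"weight": 3.0, "muscle": 2.0}
EFFORT = {"weight": 1.0, "muscle": 2.0}
MUSCLE_TO_WEIGHT = 1.0   # assumed causal effect: each kg of muscle adds 1 kg of weight

def total_effort(order):
    state = {"weight": 0.0, "muscle": 0.0}
    effort = 0.0
    for attr in order:
        remaining = TARGET[attr] - state[attr]     # change still needed for this attribute
        effort += abs(remaining) * EFFORT[attr]
        state[attr] += remaining
        if attr == "muscle":                       # side effect of muscle gain on weight
            state["weight"] += remaining * MUSCLE_TO_WEIGHT
    # Any remaining gap or overshoot must be corrected, which also costs effort.
    for attr in TARGET:
        effort += abs(TARGET[attr] - state[attr]) * EFFORT[attr]
    return effort

print("muscle first:", total_effort(["muscle", "weight"]))  # 5.0
print("weight first:", total_effort(["weight", "muscle"]))  # 9.0
```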