Why EXPLAIN?
In industrial applications, Artificial Intelligence (AI) is expected to improve efficiency and sustainability. Yet, today many AI projects fail after the first phase of implementation: only about 15% of the projects launched are actually implemented or generate value. The opaque nature of AI systems and the lack of transparency into their reasoning are a major obstacle.
Even if a project produces high-quality AI models, process experts often do not trust them. As a result, they rely on their own experience and ignore the AI recommendations. This is often the case even when the AI has proven to be more efficient in most situations. To increase trust, a model’s decisions should be transparent and comprehensible.
The goal of EXPLAIN is a new AI lifecycle for Industry 4.0 that is explainable and transparent at every step. Furthermore, subject matter experts and end-users will be involved in all steps of the process.
The consortium brings together companies from different industries, with different roles in the AI supply chain, and academic experts from Germany, the Netherlands, and Sweden.
Explain Use Cases
Our project’s use cases highlight collaborative efforts and innovative solutions in the field of explainable artificial intelligence. Each case tackles specific challenges within this domain, aiming to enhance transparency and understanding in AI systems. Through partnerships and advanced technologies, we are paving the way for practical and transparent AI solutions. Explore our use cases to see how we are shaping the future of explainable artificial intelligence; more details can be found here.
The EXPLAIN Life Cycle
Within the scope of the EXPLAIN project, we consider three categories of stakeholders that interact with the model:
- The Data Scientist or ML Expert is responsible for preparing the data for the machine learning process and for creating, tuning, and testing the machine learning model. This person has deep knowledge of the different types of machine learning models and of the plethora of possible ML metrics.
- Chemical engineers, reliability engineers, or lab personnel are examples of Domain Experts. They possess deep knowledge about industrial processes or assets and can judge whether a machine learning model’s predictions and the provided explanations are in line with the first principles of the modelled problem.
- Plant operators, maintenance managers, or operators of quality stations are examples of End-Users. Ultimately, this is every person who receives the output of a machine learning model and has the responsibility to act, or not to act, on that output.
Four new steps will be integrated into the lifecycle of AI projects:
- In the explanatory training phase, the subject matter experts, together with the ML experts, interact directly with the ML model as part of the training process, receive explanations of the model output, and can provide feedback.
- In an explanation review, the solutions are validated; here the experts gain insight into the inner logic of the model to ensure that the concepts learned by the model are in line with the experts’ domain knowledge.
- The end user is also integrated into the process: for each prediction, the AI system can provide, or the end user can request, an explanation of the model output.
- This feedback, in turn, can serve to optimize the model through incremental explanatory training, as sketched below.
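The following minimal Python sketch shows how these four steps could fit together. The helper functions, the random-forest model, and the permutation-importance explanation are illustrative assumptions, not the project’s actual tooling.

```python
"""Minimal sketch of the four EXPLAIN lifecycle steps (illustrative only)."""
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance


def explanatory_training(X, y, feature_names):
    # Step 1: train a model and expose a global explanation to ML and domain experts.
    model = RandomForestClassifier(random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    explanation = dict(zip(feature_names, imp.importances_mean))
    return model, explanation


def explanation_review(explanation, plausible_features):
    # Step 2: domain experts check that the most influential feature is physically plausible.
    return max(explanation, key=explanation.get) in plausible_features


def predict_with_explanation(model, X_batch, feature_names):
    # Step 3: every output handed to the end user is paired with an explanation
    # (here simply the global importances; a per-sample attribution would be used in practice).
    return model.predict(X_batch), dict(zip(feature_names, model.feature_importances_))


def incremental_explanatory_training(model, X_all, y_corrected):
    # Step 4: end-user feedback (e.g. corrected labels) flows back into retraining.
    return model.fit(X_all, y_corrected)
```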
Research Challenges
MMI Challenges
- A good understanding of end users’ mental models is necessary to design explanatory mechanisms that are specific to the respective phase of the ML lifecycle, the user role, and the application context.
- How can end users interact effectively with explanations, given the high dimensionality of industrial data?
- Generating high-quality, consistent, and machine-usable feedback requires a comprehensive interaction design that takes workflow perception, effort, incentives, and end-user confidence into account.
Algorithmic Challenges
- Robust explanatory methods are needed that reliably and reproducibly deliver meaningful explanations that meet the standards used in industrial applications.
- The explanatory methods must be able to deal with the high dimensionality and the sequential nature of the data found in many industrial use cases. The training data often consists of multivariate time series or signal data, for which the direct application of feature attribution methods such as LIME or SHAP is unsuitable, because individual values in the raw data (single points of a multivariate time series) are not meaningful units of interpretation (see the sketch after this list).
- The ML training must be able to take into account the feedback that professionals provide on the basis of the explanations.
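One common workaround, shown as a hedged sketch below, is to attribute not to raw time-series points but to interpretable window-level statistics. The synthetic data, the random-forest model, and the choice of SHAP’s TreeExplainer are assumptions for illustration, not the methods chosen in EXPLAIN.

```python
"""Attribution on window-level features of a multivariate time series (illustrative sketch)."""
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_windows, window_len, n_sensors = 200, 50, 3
raw = rng.normal(size=(n_windows, window_len, n_sensors))  # raw multivariate time-series windows
labels = rng.integers(0, 2, size=n_windows)                # synthetic quality/anomaly labels

# Aggregate each sensor's window into interpretable statistics (mean, std, max) ...
features = np.concatenate([raw.mean(axis=1), raw.std(axis=1), raw.max(axis=1)], axis=1)
feature_names = [f"sensor{s}_{stat}" for stat in ("mean", "std", "max") for s in range(n_sensors)]

# ... so that attributions refer to quantities a domain expert can actually read.
model = RandomForestClassifier(random_state=0).fit(features, labels)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(features[:10])  # per-window, per-statistic attributions
```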
ML Life Cycle Challenges in the EXPLAIN Life Cycle
- The end-to-end MLOps framework must enable the management of complex and changing data that forms the basis for the ML models it manages.
- The data dependencies of ML models must be manageable within MLOps, both for new ML models and explanation components under development and for ML models and explanation components already in operation. Some explanation types require access to the training data at operation time.
- The MLOps framework must provide tools for versioning data, functions, and ML models, similar to the versioning capabilities of software artifacts in today’s traditional DevOps environments. With the introduction of incremental explanation-based training of models, this process becomes more difficult.
- The MLOps framework must be able to monitor the performance and accuracy of ML models in operation. This type of model monitoring is required to address issues such as model or concept drift, similar to how software artifacts are monitored today in traditional DevOps environments. Here, the explanations of the model results become an additional signal that should be used for monitoring the deployed models, as sketched below.
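As a hedged illustration of that last point, one could track how the distribution of feature attributions shifts between a reference window and live operation. The total-variation measure, the alert threshold, and the logged attribution arrays are assumptions, not part of an existing EXPLAIN toolchain.

```python
"""Sketch: using explanation (attribution) drift as an additional monitoring signal."""
import numpy as np


def attribution_profile(shap_values: np.ndarray) -> np.ndarray:
    """Normalized mean |attribution| per feature for a batch of predictions."""
    profile = np.abs(shap_values).mean(axis=0)
    return profile / profile.sum()


def explanation_drift(reference: np.ndarray, current: np.ndarray) -> float:
    """Total-variation distance between two attribution profiles (0 = identical)."""
    return 0.5 * np.abs(attribution_profile(reference) - attribution_profile(current)).sum()


# Example: alert when the features driving predictions change noticeably.
rng = np.random.default_rng(0)
reference_shap = rng.normal(size=(500, 9))                               # logged at deployment time
current_shap = rng.normal(size=(500, 9)) * [3, 1, 1, 1, 1, 1, 1, 1, 1]   # feature 0 now dominates

if explanation_drift(reference_shap, current_shap) > 0.1:  # threshold is an assumption
    print("Explanation drift detected: trigger expert review or incremental retraining")
```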