
About EXPLAIN

Why EXPLAIN

The use of artificial intelligence in industry is very promising and holds out the prospect of significant improvements in production processes in many areas. Currently, however, many AI projects fail after the first phase of implementation: only about 15% of the projects launched go into productive use or actually generate added value for the company (ROI). One major reason is that the decisions and recommendations of AI systems are not transparent to the user.

Even when high-quality AI models can be created, business experts often find it difficult to trust the decisions made by the models as long as they have no insight into how those decisions are reached. As a result, the models are often not used in practice. Instead, experts prefer to rely on proven experience-based strategies, even where the models have been shown to be far more efficient in the majority of cases.

The EXPLAIN consortium aims to create a complete AI life cycle for Industry 4.0 that is not only interactive but also explainable and transparent at every step. A key point is to involve subject matter experts and users in the process throughout future developments of AI systems and platforms.

The EXPLAIN Life Cycle

Within the scope of the EXPLAIN project, we consider three categories of stakeholders who interact with the model:

  • The Data Scientist or ML Expert is responsible for preparing the data for the machine learning process and for creating, tuning, and testing the machine learning models. This person has deep knowledge of the different types of machine learning models and the plethora of possible ML metrics.
  • Chemical engineers, reliability engineers, or lab personnel are examples of Domain Experts. They possess deep knowledge about industrial processes or assets and can judge whether a machine learning model’s prediction and the provided explanations are in line with the first principles of the modelled problem.
  • Plant operators, maintenance managers, or operators of quality stations are examples of End-Users. These are the people who receive the output of a machine learning model and have the responsibility to act on it – or not.

Four new steps will be integrated into the lifecycle of AI projects:

  1. In the Explanatory Training phase, the subject matter experts, together with the ML experts, interact directly with the ML model as part of the training process, receive Explanations of the model output, and can provide feedback.
  2. In an Explanation Review, the solutions are validated; here the experts gain insight into the inner logic of the model to ensure that relevant concepts of the domain have been captured by the model in a way comparable to the theoretical ideal.
  3. The end user is also integrated into the process, and the AI system can Explain the Model Output for each prediction.
  4. This, in turn, can serve to optimize the model by integrating the end user’s feedback in Incremental Explanatory Training (illustrated in the sketch below).
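The following sketch is a minimal illustration of how these four steps could be wired together in code. It assumes scikit-learn and SHAP on synthetic sensor data; the expert review is stubbed with a placeholder list (flagged_by_expert), and none of the names represent the actual EXPLAIN tooling.

    # Minimal sketch of the four life-cycle steps above. Assumes scikit-learn
    # and shap; the expert feedback is a hypothetical placeholder.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                                   # six process sensors (synthetic)
    y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=500)   # e.g. a quality target
    features = [f"sensor_{i}" for i in range(X.shape[1])]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1) Explanatory Training: fit the model and compute explanations alongside it.
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X_test)                    # shape (n_samples, n_features)

    # 2) Explanation Review: the domain expert inspects global importances and
    #    flags features whose influence contradicts first principles (stubbed here).
    global_importance = dict(zip(features, np.abs(attributions).mean(axis=0).round(3)))
    flagged_by_expert = []                                          # e.g. ["sensor_5"]

    # 3) Explain the Model Output: attach per-prediction attributions for the end user.
    print("prediction:", round(float(model.predict(X_test[:1])[0]), 3))
    print("local attribution:", dict(zip(features, attributions[0].round(3))))

    # 4) Incremental Explanatory Training: retrain with the expert feedback applied,
    #    here simply by dropping the flagged features.
    if flagged_by_expert:
        keep = [i for i, f in enumerate(features) if f not in flagged_by_expert]
        model = RandomForestRegressor(random_state=0).fit(X_train[:, keep], y_train)

In the project itself, steps 2 and 3 would run through interactive tooling for domain experts and end users rather than a single script, and the feedback would cover more than dropping features.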

Research Challenges

MMI Challenges

  1. A good understanding of end users’ mental models is necessary to design explanatory mechanisms that are relevant and specific to the respective phase of the ML life cycle, the user role, and the application context.
  2. How can end users interact effectively with explanations given the high dimensionality of industrial data?
  3. Generating high-quality, consistent, and machine-usable feedback requires a comprehensive interaction design that takes into account how end users perceive the workflows, the effort involved, incentives, and end-user confidence.

Algorithmic Challenges

  1. Robust explanation methods are needed that reliably and reproducibly provide meaningful explanations for the data types commonly used in industrial applications.
  2. The explanation methods must be able to deal with the high dimensionality and the sequential nature of the data that occur in many industrial use cases. The training data often consists of multivariate time series or signal data, which complicates the direct application of common feature-attribution methods such as LIME [RSG16a] or SHAP [LL17], since individual features in the raw data (points in a multivariate time series) are not suitable for interpretation [SAEA+19]. One possible workaround is sketched after this list.
  3. The ML training must be able to incorporate the feedback that domain experts provide on the basis of the explanations.
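One commonly used workaround for the second challenge is to compute attributions on aggregated, human-readable window statistics rather than on raw time points. The following sketch illustrates that idea with scikit-learn and SHAP on synthetic signals; the windowing scheme, feature names, and target are invented for the example and do not represent the EXPLAIN methods.

    # Attribute importance to per-window statistics instead of raw time points.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n_series, n_steps, n_channels = 300, 120, 3
    series = rng.normal(size=(n_series, n_steps, n_channels))       # multivariate signals
    target = series[:, :30, 0].mean(axis=1) + 0.1 * rng.normal(size=n_series)  # e.g. a quality score

    def window_features(ts, window=30):
        """Aggregate each channel over fixed windows into interpretable
        features (mean and standard deviation per window and channel)."""
        n_windows = ts.shape[0] // window
        chunks = ts[: n_windows * window].reshape(n_windows, window, ts.shape[1])
        return np.concatenate([chunks.mean(axis=1).ravel(), chunks.std(axis=1).ravel()])

    X = np.stack([window_features(s) for s in series])
    names = [f"{stat}_w{w}_ch{c}"
             for stat in ("mean", "std")
             for w in range(n_steps // 30)
             for c in range(n_channels)]

    model = RandomForestRegressor(random_state=0).fit(X, target)
    attributions = shap.TreeExplainer(model).shap_values(X[:5])     # shape (5, n_features)
    top = np.argsort(-np.abs(attributions).mean(axis=0))[:5]
    print("most influential window features:", [names[i] for i in top])

The attributions then refer to window statistics a domain expert can read directly, which eases the Explanation Review, at the price of tying the explanation to the chosen feature engineering.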

ML Life Cycle Challenges in the EXPLAIN Life Cycle

  1. The end-to-end MLOps framework must enable the management of complex and changing data that forms the basis for the ML models it manages.
  2. The data dependencies of ML models must be manageable in the context of MLOps for both new ML models and explanation components being developed and for ML models and explanation components in operation. Some explanation types require access to training data at the time of operation.
  3. The MLOps framework must provide tools for versioning data, features, and ML models, similar to the versioning capabilities for software artifacts in today’s traditional DevOps environments. With the introduction of incremental explanation-based training of models, this process becomes more difficult.
  4. The MLOps framework must be able to monitor the performance and accuracy of ML models in operation. This type of ML model monitoring is required to address issues such as model or concept drift, similar to how the performance of software artifacts is monitored today in traditional DevOps environments. Here, the explanations of the model results should be captured as an additional data point and used for monitoring the deployed models.
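As a rough illustration of the last point, explanation statistics can be tracked alongside the usual accuracy metrics: the sketch below compares the distribution of feature attributions on live scoring batches against a reference profile captured at deployment time and raises an alert when it shifts. The drift measure, the threshold, and the function names are hypothetical and not part of the EXPLAIN MLOps framework.

    # Use explanation statistics as an additional monitoring signal.
    import numpy as np

    def attribution_profile(attributions):
        """Mean absolute attribution per feature, normalised to a distribution."""
        profile = np.abs(attributions).mean(axis=0)
        return profile / profile.sum()

    def explanation_drift(reference, current):
        """Total variation distance between two attribution profiles."""
        return 0.5 * np.abs(reference - current).sum()

    # 'reference' is captured at deployment time (e.g. SHAP values on a validation
    # batch); 'current' comes from the live scoring batches. Random data stands in
    # for real attributions here.
    reference = attribution_profile(np.random.rand(200, 8))
    current = attribution_profile(np.random.rand(50, 8))

    ALERT_THRESHOLD = 0.2   # tuning this threshold is itself part of the monitoring design
    if explanation_drift(reference, current) > ALERT_THRESHOLD:
        print("explanation drift detected: trigger a review or retraining")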