The EXPLAIN Life Cycle

EXPLAIN seeks to develop an end-to-end machine learning lifecycle that is interactive and explainable for industrial domain experts. The entire process, starting with data preparation and ending with model deployment and observation, will be made accessible to individuals who may not have a technical background in machine learning. The scope includes three types of stakeholders: data scientists or ML experts, domain experts, and end-users.

Four new steps will be integrated into the lifecycle of AI projects:

(1) In an explanatory training phase, domain experts and ML experts interact directly with the ML model during the training process. They can receive explanations of the model's results and provide feedback. The data scientist or ML expert is in charge of creating the ML models; this includes preparing the data and developing, refining, and testing the ML model. This individual has deep knowledge of machine learning.

(2) In the explanatory validation phase, ML solutions are validated by providing domain experts with insights into the internal reasoning of the trained model, to ensure that relevant concepts were learned from the provided data. Examples of domain experts are chemical engineers, reliability engineers, or lab personnel. They know the industrial process or the assets very well, and can judge whether the ML model's predictions and explanations are in line with the domain's first principles.
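The validation idea can be sketched in a few lines: a domain expert encodes first-principles expectations about which features should matter and in which direction, and the model's explanations are checked against them. This is a minimal illustrative sketch, not the EXPLAIN platform's implementation; the feature names, attribution values, and the `validate` helper are all hypothetical.

```python
# Hypothetical global feature attributions produced by an
# explainability tool for a trained degradation model.
attributions = {
    "temperature": +0.72,   # model: higher temperature -> more degradation
    "vibration":   +0.31,
    "paint_color": +0.40,   # suspicious: should be physically irrelevant
}

# First-principles expectations encoded by the domain expert:
# expected sign of the effect, or None if the feature should not matter.
expected_sign = {"temperature": +1, "vibration": +1, "paint_color": None}

def validate(attributions, expected_sign, tol=0.05):
    """Flag features whose attribution contradicts domain knowledge."""
    issues = []
    for feat, attr in attributions.items():
        sign = expected_sign.get(feat)
        if sign is None and abs(attr) > tol:
            issues.append(f"{feat}: should be irrelevant but has weight {attr}")
        elif sign is not None and attr * sign < 0:
            issues.append(f"{feat}: sign contradicts first principles")
    return issues

print(validate(attributions, expected_sign))
```

Here the check would flag `paint_color` as an unexpectedly important feature, prompting the expert to reject the model or to ask for retraining on cleaner data.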

(3) ML model results are explained to end-users, who can give feedback and trigger (4) incremental explanatory training, which utilizes the user's feedback to optimize the model. An end-user is any person who receives the output of a machine learning model and has to react to it. Examples are plant operators, maintenance managers, or operators of quality stations.
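Step (4) can be sketched as an online model that is updated sample-by-sample whenever an end-user corrects a prediction. This is a hedged illustration only: the `OnlinePerceptron` class and the `feedback_queue` are hypothetical stand-ins for the platform's model and feedback component.

```python
class OnlinePerceptron:
    """Tiny online linear classifier that can be updated one sample at a time."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else 0

    def partial_fit(self, x, y):
        """One incremental update; called whenever feedback arrives."""
        error = y - self.predict(x)          # 0 if the prediction was correct
        self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * error

# End-user feedback: (input, corrected label) pairs collected by the
# feedback component after users disagreed with the model's output.
feedback_queue = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1)]

model = OnlinePerceptron(n_features=2)
for x, y in feedback_queue * 20:             # replay the feedback a few times
    model.partial_fit(x, y)

print(model.predict([1.0, 0.0]))             # now matches the user's correction
```

In a production setting the same loop would run against the deployed model via the model management and serving components, with the explanations regenerated after each update so users can see the effect of their feedback.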

To provide an MLOps platform with these capabilities, we developed an initial architecture implementation based on the companies’ experiences and project requirements, partly described in An Analysis of MLOps Practices. The project pursues the following objectives:
  1. Develop an end-to-end data and MLOps infrastructure that supports seamless explanation methods and leverages explanations for model testing, monitoring, improvement, and auditing.
  2. Provide an infrastructure template that describes containerized components (Data Management, Data Monitoring, Model Training, ML IDE, Model Management, Explainability Tools, Model Serving, Model Monitoring, Feedback Component) and their interactions, and that can be deployed in the cloud or on premises.