The EXPLAIN Life Cycle
The EXPLAIN project addresses three types of stakeholders:
- The data scientist or ML expert is in charge of creating the ML models. This includes preparing the data and developing, refining, and testing ML models. This individual has deep knowledge of machine learning.
- Chemical engineers, reliability engineers, or lab personnel are examples of Domain Experts. They know the industrial process or the assets very well. Domain Experts can judge whether the ML model's predictions and explanations are in line with the domain's first principles.
- Every person who receives the output of a machine learning model and has to react to it is an End-User. Examples are plant operators, maintenance managers, or operators of quality stations.
Four new steps will be integrated into the lifecycle of AI projects:
- In the explanatory training phase, the domain experts teach the ML model. They receive model predictions and explanations and can provide feedback.
- The explanation review validates the ML models. The domain experts gain insight into the inner logic of the model. This helps to ensure that the concepts learned by the model are in line with the experts' domain knowledge.
- The end user can receive or request an explanation of the model output for each prediction.
- End-user interaction can serve to optimize the model: incremental explanatory training utilizes the user's feedback.