- The explanation mechanism and methods must match the current user and the user’s context. This requires a good understanding of the mental models of end users.
- End users need effective ways to interact with explanations and with high-dimensional industrial data.
- Generating high-quality, consistent, and machine-usable feedback is a challenge. It requires a comprehensive interaction design that takes workflows, effort, incentives, and end-user confidence into account.
- Industrial applications need explanation methods that produce meaningful explanations reliably.
- Industrial data is often high-dimensional and sequential. For instance, training data may consist of multivariate time series or signal data. The direct application of feature attribution methods such as LIME or SHAP is therefore not suitable: the resulting weights on individual features (e.g., single points of a multivariate time series) are difficult to interpret.
- The ML training pipeline must be enabled to process domain experts’ feedback.
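The interpretability problem for sequential data can be illustrated with a minimal sketch: instead of assigning one attribution weight per point, an occlusion-based attribution can be aggregated per time window and channel, which is far easier for an end user to read. The toy model, data, and windowing below are hypothetical stand-ins for illustration, not taken from the source.

```python
import numpy as np

# Hypothetical stand-in for a trained industrial model: it scores a
# multivariate time series (T timesteps x C channels) by the mean of
# channel 0 over the last window. The rule is an assumption chosen so
# the expected attribution pattern is obvious.
def score(x: np.ndarray) -> float:
    return float(x[-10:, 0].mean())

def occlusion_attribution(x: np.ndarray, window: int = 10) -> np.ndarray:
    """Occlusion attribution aggregated per (time window, channel).

    Rather than T*C per-point weights, report one weight per window
    and channel: occlude each block with the channel mean and record
    the change in the model score.
    """
    T, C = x.shape
    base = score(x)
    n_win = T // window
    attr = np.zeros((n_win, C))
    for w in range(n_win):
        for c in range(C):
            x_pert = x.copy()
            x_pert[w * window:(w + 1) * window, c] = x[:, c].mean()
            attr[w, c] = base - score(x_pert)
    return attr

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
attr = occlusion_attribution(x)
# Only the last window of channel 0 receives non-zero attribution,
# matching the toy model's logic.
```

The same aggregation idea applies to LIME- or SHAP-style attributions: summarizing weights over domain-meaningful windows and channels is one way to make them interpretable for end users.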
ML Life Cycle Challenges in the EXPLAIN Life Cycle
- The end-to-end MLOps framework must manage the complex and changing training data.
- Some explanation types must access training data at the time of operation. For instance, similar past situations are valuable information for end users. Hence, the MLOps system must manage the data dependencies of ML models and explainers.
- The MLOps system must version the data, code, and ML models. Adding explainers and explanatory training to the system makes the versioning more complex.
- MLOps includes monitoring ML models in operation. This covers monitoring model performance as well as data and concept drift. Introducing explanations adds new information for tracking the health of ML models.
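One way the new information from explanations can feed into monitoring is to track the distribution of attribution scores over time, exactly as raw features are tracked for data drift. The sketch below uses the population stability index (PSI); the threshold convention and the simulated attribution data are assumptions for illustration, not from the source.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.

    Applied to attribution scores, a rising PSI signals "explanation
    drift" alongside the usual data and concept drift monitors. A common
    (assumed) convention reads PSI > 0.2 as a significant shift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6  # guard against empty bins in the log ratio
    e_frac, a_frac = e_frac + eps, a_frac + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
ref_attr = rng.normal(0.0, 1.0, 5000)      # attributions at deployment time
live_same = rng.normal(0.0, 1.0, 5000)     # live attributions, no drift
live_shifted = rng.normal(1.0, 1.0, 5000)  # live attributions after drift

# psi(ref_attr, live_same) stays near zero, while
# psi(ref_attr, live_shifted) exceeds the alerting threshold.
```

Versioning the reference attribution distribution together with the model and its explainer ties this monitor back into the data-dependency management described above.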