The Five Stakeholders of Explainability

Two of the focus questions in our SemSys group are how to make information systems auditable and how to make complex systems, such as smart grids, explainable. Explainability can provide stakeholders with insights into a system's behavior, which is particularly valuable for complex systems such as cyber-physical systems. We adapt the stakeholder categories identified by Bhatt et al. for explainable machine learning to the more general context of auditable systems. The stakeholders are:

  1. Executives

Explainability supports the ethical principles that enhance the trustworthiness and reliability of a system. The role of executives is to ensure these principles are upheld across the systems within their organizations.

  2. Engineers

Engineers use explainability in both the development and operational phases of the system life cycle. In the development phase, they use it to evaluate and improve system performance. In the operational phase, they use explanations to diagnose and resolve failures and to keep the system running smoothly.

  3. End-users

End-users interact with the system and are directly affected when it misbehaves. They are therefore the consumers who require the most intuitive explanations: explanations must be presented in a form that end-users can readily understand.

  4. Regulators

Regulators need to know what degree of explainability can reasonably be expected of a system and whether systems can actually provide it. They need this information to craft regulations that require deployed systems to provide explanations to the users they affect.

  5. Auditors

Auditors are domain experts tasked with evaluating a system's explanation output using their expert knowledge and intuition. This role is needed to check whether the explainability component and the explanations it produces comply with the regulations set by the regulators.

References

Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., Ghosh, J., Puri, R., Moura, J.M. and Eckersley, P., 2020, January. Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 648-657).

The Royal Society, 2019. Explainable AI: the basics. Policy briefing. https://royalsociety.org/topics-policy/projects/explainable-ai/ (last accessed: 21 July 2020).