Maximising performance by understanding potential weaknesses. The better the understanding of what models are doing and why they sometimes fail, the easier it is to improve them.
By gaining an intuitive understanding of a model's behaviour, the individuals responsible for the model can spot when it is likely to fail and take appropriate action. XAI also helps to build trust by strengthening the stability, predictability and repeatability of interpretable models.
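One common way to gain such an intuition is permutation feature importance, a model-agnostic XAI technique: shuffle one feature at a time and measure how much the model's error grows. The toy model and synthetic data below are hypothetical illustrations, not drawn from any system described here; a minimal sketch follows.

```python
import random

random.seed(0)

def model(x):
    # Toy "trained" model: relies strongly on feature 0, ignores feature 2.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

# Synthetic dataset: 200 rows, 3 features, target generated by the same rule.
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * x[0] + 0.5 * x[1] for x in X]

def mse(Xs, ys):
    return sum((model(x) - t) ** 2 for x, t in zip(Xs, ys)) / len(Xs)

baseline = mse(X, y)

importances = []
for j in range(3):
    col = [x[j] for x in X]
    random.shuffle(col)  # break the link between feature j and the target
    X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    # Error increase after shuffling = importance of feature j.
    importances.append(mse(X_perm, y) - baseline)

print(importances)
```

Running this shows feature 0 dominating while feature 2 contributes nothing, which is exactly the kind of sanity check that lets those responsible for a model spot when it is leaning on the wrong inputs.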
It is important to be clear who is accountable for an AI system's decisions. This in turn demands a clear XAI-enabled understanding of how the system operates, how it makes decisions or recommendations, and how to ensure it functions as intended.