Explainable AI: What You Need to Know
The next quest in the AI world is to make the machine learning techniques it employs more transparent: to open up the not-so-proverbial "black box".
The so-called "hidden layers" in artificial neural networks that give AI the ability to learn are sometimes so well hidden that humans have no way of understanding how a machine arrives at its conclusions, not even the individuals involved in developing such programs. The idea now is to create AI whose actions humans can readily understand.
"Machine learning models are opaque, non-intuitive, and difficult for people to understand."
-David Gunning, DARPA
Every day, more AI is employed to aid or replace human decision making, from business to healthcare to policing. The use of AI is no longer restricted to automating mindless, repetitive tasks. In high-stakes situations, a biased AI decision affects real people's lives.
The problem is not restricted to AI generating unintuitive results; it can also produce results that are simply undesirable. The data sets an AI learns from play a huge role in its effectiveness: AI perfects exactly what it is asked to perfect and, in the process, loses the nuances that matter in real-world situations. Machine learning techniques are advancing at a rapid pace, and we will soon have far more sophisticated algorithms than we do now, which makes it imperative to address accountability for their results.
In recent news,
Amazon shut down its recruiting AI that demonstrated a bias against women.
Now that is a good example of a skewed data set yielding undesirable results. Essentially, the AI was tasked with finding the best candidates for a particular position. While it is easy to pinpoint why the AI may have arrived at a gender bias mirroring the disparity that pervades the tech space, the path to rectifying the problem is not that straightforward.
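To make the failure mode concrete, here is a minimal, deliberately naive sketch of how a skewed training set can encode bias. All data, tokens, and the frequency-based scoring rule are invented for illustration; this is not Amazon's actual system.

```python
from collections import Counter

# Toy "resume scorer": learns token weights from historical hiring data.
# The training set is skewed: past hires were mostly men, so tokens that
# correlate with female applicants appear mainly in rejected resumes.
hired = [
    "chess club captain python".split(),
    "football team lead java".split(),
    "python java hackathon".split(),
]
rejected = [
    "women's chess club python".split(),
    "women's coding society java".split(),
]

def train(hired, rejected):
    pos, neg = Counter(), Counter()
    for doc in hired:
        pos.update(doc)
    for doc in rejected:
        neg.update(doc)
    # naive weight: hired frequency minus rejected frequency
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

def score(weights, resume):
    return sum(weights.get(tok, 0) for tok in resume)

weights = train(hired, rejected)

# Two otherwise identical candidates; one resume contains "women's".
a = "chess club python".split()
b = "women's chess club python".split()
print(score(weights, a), score(weights, b))  # the model ranks b lower
```

The model never sees a gender label, yet it penalizes a proxy token, which is exactly why the bias is hard to spot and harder to remove.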
The goal of explainable AI, or XAI, is not only to understand how an AI arrives at a certain outcome but also to understand when it will succeed or fail to generate correct results, and when to trust or disregard its verdicts. In areas like clinical decision support systems (CDSS), which are designed to aid physicians with clinical decision making (sometimes in real time) for improved health care, an AI with well-formulated reasoning behind its decisions makes a world of difference. Entrusting health-care decisions to artificial intelligence requires a great deal of trust, which cannot be built if AI continues to be opaque and unintuitive.
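One simple way to operationalize "knowing when to trust" a model is an abstention rule: act on the prediction only when the model is confident, otherwise defer to a human. The classifier, feature name, and threshold below are hypothetical stand-ins, not part of any real CDSS.

```python
def predict_proba(features):
    # Stand-in for a real model: returns P(positive) for a patient record.
    # A trivial rule on one hypothetical feature, just to make the sketch run.
    return min(max(features["risk_score"] / 100.0, 0.0), 1.0)

def decide(features, threshold=0.8):
    p = predict_proba(features)
    confidence = max(p, 1.0 - p)      # confidence in the predicted class
    if confidence < threshold:
        return "refer to physician"    # disregard the verdict, defer to a human
    return "flag for treatment" if p >= 0.5 else "routine care"

print(decide({"risk_score": 95}))  # confident positive: act on it
print(decide({"risk_score": 55}))  # low confidence: defer
```

The threshold itself then becomes an auditable, human-set policy knob rather than something buried inside the model.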
A challenging aspect of introducing interpretability to AI is maintaining its efficiency and accuracy at the same time. AI can be made explainable for simpler tasks and decisions, but the real challenge is to create explainable AI that continues to grow in complexity and nuance, yielding results directly applicable to the multifaceted world we live in.
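One common way researchers navigate this accuracy-versus-interpretability trade-off is a global surrogate: fit a simple, explainable model to mimic a complex one's predictions. The "black box" below is just a stand-in function, and the surrogate is a one-feature least-squares line; both are assumptions made for the sketch.

```python
def black_box(x):
    # Pretend this is an opaque, high-accuracy model we cannot inspect.
    return 3.0 * x + 0.5 * x * x

# Probe the black box on a grid of inputs.
xs = [i / 10 for i in range(-10, 11)]
ys = [black_box(x) for x in xs]

# Fit an interpretable surrogate y ~ a*x + b by ordinary least squares.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

print(f"surrogate: y ~ {a:.2f}*x + {b:.2f}")
```

The surrogate recovers the dominant linear effect and can be explained to a stakeholder in one sentence, at the cost of missing the black box's curvature; measuring that gap tells you how far the simple explanation can be trusted.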