“Intelligent” but WHY

Key Takeaways
  • Without answers to the “Why” questions behind the decisions of ‘Intelligent Bots’ or AI solutions, end-user teams, regulators, and customers/consumers are not ready to take these outputs at face value.
  • Explainability in AI is a multi-dimensional technical problem in its own right.
  • Initiatives from Indian industry and policy bodies, e.g. NASSCOM and NITI Aayog, are also focusing on XAI and trusted, unbiased AI.

The new focus for skilled professionals, and for the organizations that hold and support pools of such talent, is professional endurance. The tool they have traditionally relied on for this, a tool that lets us turn the abstract or a challenge into innovation, is technology, and technology is ultimately operated by people. The advent of AI is changing that: organizations are rethinking their approaches and swiftly building AI-powered ways to gauge the efficacy of skills and expertise learned for the known world. Many factors influence these decisions, including:

  • COVID-19 demanding new dexterity and adding a fresh set of challenges on top of existing ones, as organizations operate in a new environment.
  • Shifting social and industrial mindsets around predefined skills (from agriculture to pharma).
  • The vast gap between the practical professional skills industry needs, e.g. AI solution design, and academic pedagogy on AI technologies (innovation taught as a subject, without hands-on experience).
  • Change leadership, and change in leadership (management, approach, and new paradigms).


While AI is fast becoming a multi-trillion-dollar opportunity in itself, across industries, government, and public-service agencies, our 2020 surveys show that 90% of AI projects have remained confined to the pilot/proof-of-concept stage and have not delivered the targeted outcomes at scale in production, primarily due to a lack of trust and, hence, unenthusiastic adoption by users.

Explainability of the decisions and actions taken by autonomous/AI systems is a key factor in building user trust, and thereby in improving adoption. Hence, explainable AI has become one of the most heavily researched areas in AI technology, from the deep-tech space, e.g. explainable algorithms and AI platforms (such as Clarify from AWS), to explainable AI applications and use cases in IT service providers’ AI solutions and in AI implementations in end-user industries.

Explainability in AI is not new. Over the last 30-40 years, ‘expert systems’, particularly in the late ’90s, tried to explain the decisions and recommendations of dynamic rule-based inference engines. Since then, however, the problems targeted by AI have moved much closer to the real world, away from theoretical math exercises and lab experiments. Practical enterprise problems have grown more complex in terms of the three V’s of Volume, Velocity, and Variety. Consequently, knowledge representation schemas and machine learning algorithms have become multi-layered, multi-dimensional, and complicated. With the rapid proliferation of neural networks and deep learning, the hidden layers have become black boxes where it is practically impossible, even for large teams of data scientists, to trace and manually check dynamic model updates and the adjustments of attributes and weights at every network layer. Explanations of why a model behaves in a certain way are needed: e.g. why it classifies a loan application as ‘approved’ vs. ‘not approved’, or recommends a specific product to a specific customer.

Without answers to these “Why” questions from the ‘Intelligent Bots’ (AI-powered automations) or AI solutions, end-user teams, regulators, and/or customers and consumers are not ready to take these outputs at face value. This has much larger implications, beyond the technical aspects, in terms of auditability, transparency, fairness, and legal liability, for enterprise AI applications as well as for governance- and citizen-facing AI use cases.

These are the key AI application and adoption challenges addressed by explainable AI (XAI, as coined by DARPA) algorithms and techniques. Depending on the nature of the input data and the output models, there are simple to highly complex techniques for explaining the modelling process and the model outputs of different types of AI algorithms. For example, one of the most commonly adopted AI solutions in banking is autonomous loan application processing. XAI is the most critical requirement for this use case because loan applicants demand a plausible, reasonable, and interpretable explanation of why their applications have been deemed incomplete or have not been accepted. This is why a best practice well established in some of the future-proof banks in the US is to integrate the ML models with an XAI algorithm that generates the basis of the explanation: the classification attributes and their relevance.
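To make that pattern concrete, here is a minimal sketch of pairing a loan-approval classifier with a post-hoc explainer. The data, feature names, and model are assumptions made purely for illustration, and SHAP is used as one representative XAI technique for attributing a single decision to its input features; this is not a description of any bank’s actual pipeline.

```python
# Minimal sketch: explaining a loan-approval model with SHAP (assumed setup).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical applicant features; a real bank would use its own feature set.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "credit_score": rng.integers(300, 850, 1_000),
    "debt_to_income": rng.uniform(0.05, 0.6, 1_000),
    "years_employed": rng.integers(0, 30, 1_000),
})
# Synthetic approval labels, purely for illustration.
y = ((X["credit_score"] > 650) & (X["debt_to_income"] < 0.4)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
shap_values = explainer.shap_values(applicant)

# Depending on the SHAP version, shap_values is either a list with one
# array per class or a single 3-D array; take the "approved" class slice.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Per-feature contributions toward the "approved" class for this applicant;
# these attributions form the basis of a human-readable explanation.
contributions = dict(zip(X.columns, np.ravel(vals)))
print(model.predict(applicant), contributions)
```

The per-feature contributions can then be translated into the plain-language reasons, e.g. “debt-to-income ratio above threshold”, that an applicant or a regulator expects to see.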

Similarly, AI-powered claims processing and claims fraud detection are common AI use cases deployed in global insurance companies, some of which operate in India and APAC along with strong regional partners. For claims processing and fraud detection, XAI is crucial for answering the ‘Why’ questions, e.g. why a claim has not been approved or has been flagged as potentially fraudulent, based on certain features.

Explainability in AI is also a multi-dimensional technical problem in itself. For example, it needs to address: 1) model transparency, i.e. visibility into the internal modelling process and not just the output; 2) proof of fairness of the output, e.g. keeping data and model biases within acceptable and measurable limits; 3) local vs. global explainability; and 4) the interpretability, quality, support, and confidence of the explanations produced by the algorithms. Transparency in the modelling process is also a key input to data security and privacy protection regulations and practices.
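As a concrete illustration of the fairness dimension, the sketch below computes a simple demographic-parity check, the gap in approval rates between groups, over a model’s decisions. The column names, data, and the 10% threshold are assumptions for illustration; production systems would typically track richer metrics (equalized odds, calibration) alongside this one.

```python
# Minimal sketch of one fairness check: demographic parity difference,
# i.e. the gap in positive-decision rates between groups (assumed columns).
import pandas as pd

def demographic_parity_difference(decisions: pd.Series, group: pd.Series) -> float:
    """Absolute gap in positive-decision rates across group values."""
    rates = decisions.groupby(group).mean()
    return float(abs(rates.max() - rates.min()))

# Hypothetical model outputs: 1 = approved, 0 = rejected, plus a sensitive attribute.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"],
})

gap = demographic_parity_difference(df["approved"], df["group"])
# An illustrative acceptance threshold; the actual limit is a policy decision.
print(f"approval-rate gap = {gap:.2f}",
      "within limit" if gap <= 0.10 else "needs review")
```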

Explainability is already being included as a mandatory consideration and requirement in several advanced governments’ AI policies and in enterprise AI strategies. Emerging international standards and subsequent regulations on AI, e.g. the IEEE P7000 series, are making explainability and transparency integral elements of AI applications. Initiatives from Indian industry and policy bodies such as NASSCOM and NITI Aayog are also focusing on XAI and trusted, unbiased AI. These efforts will not only improve trust in and adoption of AI but will also make AI-powered/autonomous systems more transparent, secure, fair, and auditable, irrespective of the platform or scale, e.g. on quantum computing platforms, in the cloud, or at the edge. High-tech and sunrise sectors like telecom, retail, BFSI, and healthcare are the early movers on this journey towards building and adopting enterprise-grade, well-governed, responsible, and ethical AI applications.

