

Entering the Age of Explainable AI

With the increasing importance being placed on ethical AI, organizations are turning more to “explainable AI” -- a set of processes and methods that allow them to understand the output of machine learning algorithms.

Across every industry, businesses are facing rapidly changing market conditions and stronger competition, making it more critical than ever to make decisions and innovate with data. Embracing artificial intelligence and machine learning technologies offers the potential for organizations to transform all aspects of their business for competitive advantage. By leveraging AI-driven insights, businesses can streamline operations, boost efficiency, cut costs, and provide higher-quality products and services to their customers.

However, while AI promises many business benefits, it also presents challenges for organizations looking to capitalize on the technology. These include:

  • Data quality and availability issues
  • A lack of technical expertise (e.g., programming, database modeling)
  • Not having the right infrastructure and operational framework in place
  • The absence of leadership support needed to push these initiatives through

Furthermore, there are regulatory and ethical concerns that go beyond the confines of the organization, such as the White House’s recent executive order on AI, developed to safeguard consumer data privacy and protect users from AI’s potential to perpetuate bias and discrimination. Noncompliance with these regulations can lead to legal issues and reputational damage for businesses.

As a result, more emphasis is being placed on “explainable AI” -- a set of processes and methods that enables users to understand and interpret the outputs of their machine learning and AI algorithms. Explainable AI (XAI) helps build users’ trust in the accuracy of AI’s results and predictions. The use of XAI is becoming more critical for businesses as users seek greater transparency into their interactions with AI and assurance that the models they’re using adhere to governance and compliance regulations.
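
To make this concrete, here is a minimal sketch of one common, model-agnostic explainability technique -- permutation importance -- using scikit-learn. The dataset and model are illustrative stand-ins, not a reference to any particular product or method named in this article.

```python
# Minimal sketch of permutation importance, one common XAI technique.
# Assumes scikit-learn is installed; the dataset and model are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much test accuracy drops; a large
# drop means the model relies heavily on that feature to predict.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Output like this tells a business user, in plain terms, which inputs a model’s predictions actually depend on.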

Why Is Explainable AI Important?

As AI becomes more widely adopted across industries, it’s essential that the models businesses use are trustworthy and accurate. This is especially true for mission-critical applications such as online banking, traffic control, medical diagnosis and treatment, and military drones, where a model gone wrong could have drastic consequences.

XAI decreases the risks that arise when businesses don’t understand what their models are doing, as with black box models trained on sensitive proprietary data (e.g., in healthcare and financial services). Explainability techniques clarify model behavior and help build trust, transparency, accountability, and regulatory compliance. XAI also enables users to detect any bias a machine learning model may have developed. Having these methods in place helps organizations minimize risk, have confidence in their results, and achieve greater business success.
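
One simple bias check compares a model’s positive-prediction rates across groups. The sketch below shows the idea with pandas; the “group” and “prediction” columns are hypothetical stand-ins for whatever sensitive attribute and model output apply in a given case.

```python
# A hedged sketch of one basic fairness check: demographic parity.
# The data and column names are hypothetical stand-ins.
import pandas as pd

scored = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   0,   0,   1],
})

# Positive-prediction rate per group.
rates = scored.groupby("group")["prediction"].mean()
print(rates)

# A gap near zero suggests the groups are treated similarly on this metric;
# a large gap is a signal to investigate the model and its training data.
print("parity gap:", abs(rates["A"] - rates["B"]))
```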

Establishing Explainable AI

Creating an enterprise AI strategy can be an overwhelming prospect in itself. According to a global AI study by Altair (the company I work for), AI and data analytics projects fail 36 to 50 percent of the time due to three main friction points: people, technology, and investment. Knowing this, establishing transparent and explainable AI calls for even greater collaboration between teams and a commitment from leadership to invest in the technology infrastructure and tools needed for success. Breaking this down further, here are three best practices for achieving XAI.

Best Practice #1: Ensure data transparency

Having access to good, clean data is always a crucial first step for businesses thinking about AI transformation because it ensures the accuracy of the predictions made by AI models. If the data being fed into the models is flawed or contains errors, the output will be unreliable and subject to bias. Investing in a self-service data analytics platform that includes sophisticated data cleansing and prep tools, along with data governance, gives business users the trust and confidence they need to move forward with their AI initiatives.
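
To illustrate the kind of checks such tools automate, here is a small pandas sketch covering duplicate detection, missing-value counts, and type coercion. The toy data and column names are illustrative only.

```python
# Illustrative basic data quality checks; data and columns are toy examples.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "signup_date": ["2024-01-05", "2024-02-10", "2024-02-10", "not a date"],
})

print("duplicate rows:", int(df.duplicated().sum()))
print("missing by column:", df.isna().sum().to_dict())

# Typical cleansing steps before the data reaches a model.
df = df.drop_duplicates()
df = df.dropna(subset=["customer_id"])  # require a key field
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
print(df)
```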

These tools also help with accountability and -- consequently -- data quality. When a model is built in code, it can be difficult to track who made changes and why, leading to problems later when someone else needs to take over the project or when there is a bug in the code. Low-code platforms, by contrast, are self-documenting: the visual workflows they produce are accompanied by documentation explaining what each workflow does. This makes the work easier to understand and maintain and provides full transparency to the team accessing the results.

Best Practice #2: Bolster AI literacy among users

Equally important to the technology is ensuring that data analytics methodologies are both accessible and scalable, which can be accomplished through training. Data scientists are hard to come by, and you need people who understand the business problems, whether or not they can code. No-code/low-code data analytics platforms make it possible for people with limited programming experience to build and deploy data science models. This helps democratize data science, enabling multiple people to work on data projects simultaneously while also contributing to accountability and, ultimately, to data quality and accuracy.

To succeed in AI today, which includes driving innovation and achieving ROI while meeting government regulations and customer expectations, businesses need people throughout their organization who are continuously analyzing the data, building and revisiting models, and looking for new opportunities to create change. This can only be achieved through training.

Best Practice #3: Continuously audit AI models to identify risks

Building AI models is not a one-and-done project. Models require continuous monitoring, and effective model management helps proactively identify potential risks and ensure ongoing reliability. Having a challenger model prepared to take over should the current model’s performance erode adds an extra layer of protection. This approach to model oversight helps safeguard against unforeseen challenges and bolsters the overall reliability of the models.
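
As a rough sketch of the champion/challenger pattern, the example below scores both models on the most recent labeled data each monitoring cycle and promotes the challenger only when it clearly outperforms. The models, data, and 0.02 margin are illustrative assumptions, not a prescribed setup.

```python
# Hedged champion/challenger sketch; models, data, and threshold are toy
# stand-ins for a real monitoring pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_recent, y_train, y_recent = train_test_split(X, y, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Re-score both models on the latest labeled data.
champ_acc = accuracy_score(y_recent, champion.predict(X_recent))
chall_acc = accuracy_score(y_recent, challenger.predict(X_recent))

# Promote only on a clear win so the swap reflects real erosion, not noise.
if chall_acc > champ_acc + 0.02:
    print(f"promote challenger ({chall_acc:.3f} vs {champ_acc:.3f})")
else:
    print(f"keep champion ({champ_acc:.3f} vs {chall_acc:.3f})")
```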

Conclusion

Creating business transformation through AI can be a rewarding, yet overwhelming, prospect. Many companies struggle to succeed, which means wasted investments, misallocated time, and a failure to deliver on commitments made to shareholders. AI transparency is crucial for accountability, trust, ethical considerations, and regulatory compliance. XAI helps to ensure systems and applications are developed and deployed in a responsible and beneficial way.

About the Author

Mark Do Couto is senior vice president, data analytics at Altair, where he is responsible for the global strategy for the data analytics business unit. You can reach the author via email or on LinkedIn.

