
Handling the Challenges of AI: Real World Experience Sharing

Biases, hallucinations and risk profiling.
Walt Carter
Contributing CIO

The rise of artificial intelligence (AI) is a topic on everyone’s mind right now. Recently, I had the opportunity to sit down with a friend and colleague, Rajendra Gangavarapu, whom I consider my “go-to” expert for anything related to AI. Rajendra has been building, training, and perfecting AI models for more than fifteen years in the highly regulated financial services and banking industry.

We talked about the reality of his work in AI and its implications, hoping to share insights that will benefit those who are concerned about the path forward in AI.

How do you and your teams deal with the biases and hallucination problems we’re hearing so much about?

Effectively training and refining models to reduce biases and hallucinations involves a multi-faceted approach that integrates technical measures with organizational practices.

The first approach I have found effective for ensuring Data Quality and Representativeness is to be intentional about Diverse and Inclusive Data Sets. Collect data from a wide range of sources and ensure it accurately represents the diversity of the population or use case scenario. This includes considering gender, race, age, socioeconomic status, and other relevant demographics to prevent biases.   

Second, implement rigorous data cleaning and preprocessing techniques to identify and correct errors, inconsistencies, and irrelevant data. This step is crucial for reducing noise that can lead to inaccuracies or hallucinations. 
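To make the data-quality and cleaning steps concrete, here is a minimal Python (pandas) sketch. It assumes a hypothetical loan-application extract; the file name, column names, reference proportions, and thresholds are illustrative only and do not describe any production pipeline.

```python
import pandas as pd

# Hypothetical loan-application extract; the file and column names are
# illustrative only, not a real production schema.
df = pd.read_csv("loan_applications.csv")

# Remove exact duplicates and records missing critical fields.
df = df.drop_duplicates()
df = df.dropna(subset=["applicant_id", "income", "loan_amount"])

# Correct obvious inconsistencies: negative incomes, impossible ages.
df = df[(df["income"] >= 0) & (df["age"].between(18, 120))]

# Normalize free-text categories that often hide the same value twice.
df["employment_status"] = df["employment_status"].str.strip().str.lower()

# Representativeness check: compare each demographic group's share of the
# training data with an assumed reference-population estimate.
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # assumed
observed = df["demographic_group"].value_counts(normalize=True)
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    if abs(share - expected) > 0.05:  # flag gaps larger than 5 points
        print(f"Representation gap for {group}: {share:.2%} vs {expected:.2%}")
```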

Next, we regularly conduct AI Impact Assessments for two distinct purposes: 

  • Risk Assessment: Before deployment, we conduct comprehensive AI impact assessments to identify potential biases and areas where hallucinations could occur. This involves evaluating the data, model, and application context (a simple example of such a check follows this list).
  • Mitigation Strategies: We develop strategies to mitigate identified risks, such as adjusting data sets, altering model parameters, or implementing additional safeguards.
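As one illustration of the pre-deployment bias check mentioned above, the following Python sketch applies the “four-fifths” disparate-impact rule of thumb to approval rates by group. The data and threshold are illustrative, not a statement of any particular regulatory requirement.

```python
import pandas as pd

# Hypothetical pre-deployment check: compare model approval rates across a
# protected attribute using the "four-fifths" disparate-impact rule of thumb.
results = pd.DataFrame({
    "group":    ["group_a"] * 100 + ["group_b"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 42 + [0] * 58,
})

rates = results.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # common four-fifths threshold
    print("Potential adverse impact - review data and model before deployment.")
```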

Testing is even more important in the AI ecosystem than in traditional software, so we focus our testing efforts in these areas:

  • Performance Benchmarking: We test AI models against real-world scenarios and benchmarks to ensure they perform as expected across various conditions and demographics (see the sketch after this list).
  • Continuous Learning: Understanding that we’ve created a continuous learning machine, we implement mechanisms to test and periodically update the model with new data, helping it to adapt and improve over time.
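Here is a hedged sketch of what per-segment benchmarking can look like in Python with scikit-learn; the labels, predictions, and segment names are made up for illustration only.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical evaluation frame: true labels, model predictions, and the
# demographic segment each record belongs to (all values are made up).
eval_df = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred":  [1, 0, 0, 1, 0, 1, 1, 0],
    "segment": ["a", "a", "a", "a", "b", "b", "b", "b"],
})

# Benchmark the same model separately for each segment so performance gaps
# between demographics are visible rather than averaged away.
for segment, part in eval_df.groupby("segment"):
    acc = accuracy_score(part["y_true"], part["y_pred"])
    rec = recall_score(part["y_true"], part["y_pred"], zero_division=0)
    print(f"segment={segment}: accuracy={acc:.2f}, recall={rec:.2f}")
```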

Are there other significant practices or protocols that we should be implementing to reduce the risk of deploying these tools within our companies?

Yes, there are a number of additional practices we follow to make deployment safer and better for our people. One is Independent Evaluation and Ongoing Monitoring, where we engage independent evaluators to audit AI systems and provide an unbiased assessment of performance, biases, and potential hallucinations.

We also establish ongoing monitoring and evaluation processes to continually assess the AI system’s performance and impact. This includes tracking any emerging biases or inaccuracies. 

There’s quite a bit of fear about our ability to govern these learning machines. Have you found things that lessen this concern for your group?

Great question, yes. We have implemented a governance framework that includes regular human review and feedback loops. We use a “Human-in-the-Loop (HITL)” approach, in which humans regularly review AI outputs to identify and correct errors, biases, or hallucinations; human reviewers provide nuanced judgment that AI currently lacks. Additionally, our feedback loops allow users to report inaccuracies or biases, giving them a voice and letting them actively contribute to model refinement and learning.
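To show the shape of such a loop, here is a minimal Python sketch of a human-in-the-loop review queue. The confidence threshold, field names, and example prompt are hypothetical; a real system would persist the queue and feed reviewer corrections into retraining.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReviewItem:
    prompt: str
    model_output: str
    confidence: float
    human_label: Optional[str] = None  # filled in by the reviewer

@dataclass
class ReviewQueue:
    threshold: float = 0.7  # hypothetical confidence cutoff
    items: List[ReviewItem] = field(default_factory=list)

    def route(self, item: ReviewItem) -> bool:
        """Send low-confidence outputs to a human reviewer."""
        if item.confidence < self.threshold:
            self.items.append(item)
            return True
        return False

    def corrected_examples(self) -> List[ReviewItem]:
        """Reviewer-labeled items feed the next round of model refinement."""
        return [i for i in self.items if i.human_label is not None]

queue = ReviewQueue()
queue.route(ReviewItem("Summarize policy 12-B", "draft summary...", confidence=0.55))
queue.items[0].human_label = "Reviewer-corrected summary"
print(len(queue.corrected_examples()))  # -> 1
```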

Our governance framework also incorporates an intense focus on Ethical Guidelines and Transparency. We are intentional about adopting ethical guidelines for AI development and use, including principles of fairness, accountability, and transparency. We also practice “explainability,” developing models and tools whose decisions can be explained. Understanding the “why” behind AI decisions is crucial for identifying and correcting biases.
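As one illustration of explainability tooling, the sketch below uses scikit-learn’s permutation importance on a synthetic dataset to rank which inputs drive a model’s decisions. It is an example technique, not the specific tooling Rajendra’s teams use.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative only: fit a simple classifier on synthetic data, then rank
# which input features most influence its decisions.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance drop = {result.importances_mean[idx]:.3f}")
```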

For long-term success, our governance framework includes pillars for Training and Awareness. We aim to build AI literacy within our organization and train developers, data scientists, and relevant stakeholders on understanding biases, ethical AI use, and methods to reduce hallucinations. The Awareness pillar is key to preventing issues, and we encourage diversity among teams developing and managing AI systems to bring a broad range of perspectives, reducing unconscious biases in AI development. 

Implementing these practices requires a commitment from all levels of an organization, from leadership endorsing ethical AI use to technical teams applying best practices in model training and evaluation. By adopting a comprehensive and proactive approach, organizations can significantly reduce biases and hallucinations in AI systems, leading to more equitable and effective applications. 

In banking and insurance, we have been algorithmically defining risk for a long time. How does the current crop of AI tools affect risk profiling and decision processes for companies, and for customers?

You’re right. The use of AI tools has significantly impacted risk profiling and decision processes in banking and insurance in several ways. First, AI-powered tools can quickly analyze large amounts of structured and unstructured data to detect patterns and anomalies, enabling more accurate risk assessment. This helps companies identify risks more effectively, especially emerging risks. 
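As a toy example of that kind of pattern-and-anomaly detection, the following Python sketch flags unusual transactions with scikit-learn’s IsolationForest; the features and contamination rate are illustrative only, and real pipelines use far richer structured and unstructured inputs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features (amount, hour of day); values are synthetic.
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(80, 20, 500), rng.integers(8, 20, 500)])
suspicious = np.array([[5000, 3], [7500, 2]])  # large amounts at odd hours
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 marks likely anomalies
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions")
```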

Second, AI models can reduce human biases in risk profiling by basing decisions on data-driven insights rather than subjective factors. However, AI models can also introduce new forms of algorithmic bias, which companies need to carefully manage.

Third, we see that AI can aggregate diverse data sources to create more comprehensive customer risk profiles, including factors like adverse media, sanctions, and politically exposed persons (PEPs). This provides a holistic view to support faster, more informed decisions. 
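A minimal sketch of that kind of aggregation, with hypothetical source tables, weights, and scores rather than a production risk scale, might look like this in pandas:

```python
import pandas as pd

# Hypothetical screening sources merged into one customer risk view; the
# tables, weights, and scores are illustrative, not a production scale.
customers = pd.DataFrame({"customer_id": [101, 102, 103]})
sanctions = pd.DataFrame({"customer_id": [102], "sanctions_hit": [1]})
pep = pd.DataFrame({"customer_id": [103], "pep_flag": [1]})
adverse_media = pd.DataFrame({"customer_id": [102, 103], "media_mentions": [4, 1]})

profile = (customers
           .merge(sanctions, on="customer_id", how="left")
           .merge(pep, on="customer_id", how="left")
           .merge(adverse_media, on="customer_id", how="left")
           .fillna(0))

# A weighted composite score gives reviewers a single, explainable number.
profile["risk_score"] = (profile["sanctions_hit"] * 50
                         + profile["pep_flag"] * 30
                         + profile["media_mentions"].clip(upper=5) * 4)
print(profile.sort_values("risk_score", ascending=False))
```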

In the area of predictive analytics, AI enables the use of predictive models to forecast risks and potential future scenarios, allowing companies to be more proactive in their risk management. AI has been widely adopted in areas such as credit underwriting, fraud detection, customer behavior analysis, and marketing campaigns.
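For a flavor of predictive risk modeling, here is a small scikit-learn sketch that fits a gradient-boosted classifier on synthetic data standing in for borrower features. Real underwriting models involve far more rigor, fairness testing, and regulatory documentation than this illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for borrower features; weights=[0.9, 0.1] makes
# defaults the rare class, as they usually are.
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
default_probability = model.predict_proba(X_test)[:, 1]  # forward-looking risk score
print(f"Holdout AUC: {roc_auc_score(y_test, default_probability):.3f}")
```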

Finally, AI can automate repetitive risk assessment tasks, freeing up human analysts to focus on more complex, high-value work. This can significantly improve productivity and compliance. However, the use of AI also introduces new challenges, such as ensuring transparency, explainability, and accountability of AI-driven decisions, especially when they impact customers. Companies must carefully manage these risks to realize the full benefits of AI in risk management. In the banking and financial services sector, we are continuously challenged by regulators to demonstrate and ensure transparency in our AI practices.
