Responsible AI Governance: Managing Hallucinations and Risk in 2026
In this episode of the Piranha Risk Management Podcast, host Alejandro sits down with Rabihah Butler, Enterprise Content Manager at the Thomson Reuters Institute, to deconstruct the complexities of responsible AI use. The conversation centers on the phenomenon of AI hallucinations—where generative tools produce incorrect or fictitious information with high confidence—and why these are not mere "bugs" but structural characteristics of Large Language Models (LLMs). Rabihah emphasizes that managing these risks requires shifting from a "tech-first" approach to a problem-solving mindset, ensuring that AI is treated strictly as a professional tool rather than a replacement for human judgment.
Understanding AI Hallucinations in Professional Workflows
The Risk of Overconfidence in LLMs
A hallucination occurs when a generative AI tool produces information and presents it with apparent confidence, even when the output is incoherent or entirely fictitious—such as fabricated legal citations or incorrect historical events. Because LLMs are designed to predict the next word based on patterns rather than any concept of "right or wrong," their output can appear so polished that even experienced professionals may treat errors as fact if they aren't paying close attention.
Why AI "Glitches" Won't Simply Disappear
Many leaders are waiting for AI to "fix itself," but the podcast highlights that hallucinations are a structural characteristic of how these models work. Since the technology is trained to generate new information, waiting for it to stop having "glitches" is essentially waiting for it to no longer be a large language model. Instead of waiting for a technical fix, organizations must focus on responsible use and manual verification protocols.
Best Practices for AI Risk Management and Governance
The "Human-in-the-Loop" Necessity
A "best-in-class" AI governance program must maintain a "human-in-the-loop" at multiple strategic points. This includes humans writing prompts, reviewing generated information, and verifying the accuracy of citations before a final sign-off. Accountability remains a human responsibility; misusing a tool by blindly trusting its output is considered a person issue, not just a technical failure.
Choosing Fiduciary-Grade AI and Trusted Vendors
For high-stakes environments like law firms, courts, or banks, the standard must be fiduciary-grade AI. Unlike public tools, fiduciary-grade systems are built on appropriate databases to limit bias and ensure that cited regulations or cases actually exist. Rabihah’s primary call to action for leaders is to be extraordinarily careful with vendor selection, prioritizing those with the proper backing to protect personally identifiable information (PII) and ensure data veracity.
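As a rough illustration of the verification requirement, the sketch below checks each citation in an AI-generated draft against a trusted reference set before the draft can advance. Both `extract_citations` and the `trusted` set are hypothetical stand-ins for a vendor's verified database and a firm's own parsing rules, not real APIs.

```python
import re

def extract_citations(text: str) -> list[str]:
    """Very rough citation extractor (hypothetical pattern like 'Name v. Name, 123 F.3d 456')."""
    return re.findall(r"[A-Z][\w.]+ v\. [A-Z][\w.]+, \d+ [A-Za-z.0-9]+ \d+", text)

def unverified_citations(draft: str, trusted_citations: set[str]) -> list[str]:
    """Return citations in the draft that do not appear in the trusted reference set."""
    return [c for c in extract_citations(draft) if c not in trusted_citations]

# Hypothetical usage: flag the draft for human follow-up if anything fails verification.
trusted = {"Smith v. Jones, 123 F.3d 456"}
draft = "As held in Smith v. Jones, 123 F.3d 456, and in Doe v. Roe, 999 F.2d 111, ..."
missing = unverified_citations(draft, trusted)
if missing:
    print("Needs human follow-up; citations not found in trusted database:", missing)
```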
Key Takeaways: Leading with Responsible AI
- Define Hallucination: Incorrect or fictitious info presented with apparent confidence.
- Human-Centric Issue: AI failure is often a misuse of the tool by the person.
- Fiduciary Grade: Use professional-tier AI for high-stakes, regulated environments.
- The "Right Loop": Keep humans involved at multiple stages, from prompts to sign-off.
- Vendor Diligence: Choose partners with verified databases and PII protection.
- Evolving Metrics: Measure success by problem resolution, not just usage volume.