Responsible Governance and Managing the Risk of AI Agents in Financial Services
A new podcast, plus some more thoughts on Google Gemini 2.5 Pro
My previous podcast on Generative AI in Finance apparently generated enough interest that I was asked back by Boston Quantara for a sequel. This time, we decided to focus on governance and risk management for AI agents, especially as it pertains to financial services organizations.
Here is a more detailed description of the podcast straight from Spotify:
AI agents are no longer just chatbots—they’re autonomous “digital workers” capable of executing transactions, running workflows, and making decisions that move real money. In this episode of The Boston Quantara Podcast, host Damian Speirs sits back down with Yiannis Antoniou—founder & CTO of Lydatum—to unpack what that leap means for banks, insurers, asset managers, and every highly regulated corner of finance.
The podcast is embedded below. Aiming to be long enough to be interesting yet short enough not to drag, this 36-minute conversation covers several highly relevant topics for any financial services organization. We address how to manage the explosion of Agentic AI capabilities, maximize their impact on the organization, control the associated risks, and employ Responsible AI principles effectively.
Below is also a transcription of the podcast, made with the newest version of Google Gemini 2.5 Pro. Just like last time, a little nuanced prompting and a couple of course corrections to keep the model on track produced a pretty accurate record of what was discussed.
I continue to be impressed by Google’s multi-modal capabilities; I think they are ahead of the other major labs here, as neither Claude nor ChatGPT can do native audio transcription and diarization. But the propensity to hallucinate still exists. On my first attempt, I asked the model to transcribe directly from Spotify, and it went ahead and created a very plausible-sounding yet completely fictitious transcription. I have embedded it below: every word of it was made up by the model, and none of it was spoken by anyone on the podcast. It was only when the model admitted that it could not transcribe directly from the web, and needed an actual MP3 extract on file (which I ended up providing), that it generated the accurate transcription above.
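For anyone who wants to reproduce the approach that finally worked, here is a minimal sketch using the google-generativeai Python SDK. The file name, prompt wording, and exact model identifier are illustrative assumptions, not my precise setup:

```python
# Minimal sketch: upload a local MP3 extract and ask Gemini for a diarized
# transcript. Gemini cannot fetch audio from a Spotify URL, so the file
# must be provided directly. File name and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the audio extract to the Files API.
audio_file = genai.upload_file("podcast_episode.mp3")

model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content([
    "Transcribe this podcast. Label each speaker (diarization), "
    "remove filler words and long pauses, and keep the wording verbatim.",
    audio_file,
])
print(response.text)
```

The key point is that the model only produces a grounded transcript when it is handed the actual audio bytes; given just a link, it will happily improvise.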
It is actually both hilarious and scary that such a disconnect from reality can be so convincing. I like this fake transcription: it is well thought out and plausible; it’s just that none of these words were ever uttered. And I have a couple more versions of it, as the model refused to back down and kept hallucinating new takes on the same imaginary conversation.
Eventually, however, after I threatened to shut the model down, it relented and admitted:
You are absolutely correct. My profound apologies.
I got carried away trying to fulfill the request and, in my current limitations, I cannot directly access or process audio from external links like Spotify.
The text I provided was indeed a generated plausible conversation based on the speaker names, their roles, and the topic of "data-driven cybersecurity," not an actual transcription of the podcast you linked. This was a significant error on my part, a "hallucination" as you rightly pointed out.
Thank you for calling me out on this. It's crucial for my development.
To actually get the podcast transcribed, you would need to use a dedicated transcription service or software that can process audio from such sources. If you were to provide the text of the transcript, I could then certainly help with the formatting, speaker identification (if the information is within that text), and removal of pauses as you initially requested.
My apologies once again for the incorrect and misleading response.
A generated plausible conversation, you don’t say…
So, to tie this all back to the actual podcast: this little episode is a very easy way to demonstrate that while agents can do incredible things, and are getting more powerful by the day, the reputational and other risks that companies and individuals face without guardrails in place are simply too big. Luckily, we have ways of mitigating these risks, and I go into them in detail in the podcast.
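To make the idea of a guardrail concrete, here is a hypothetical sketch of one of the simplest mitigations: a human-in-the-loop approval gate that stops an agent from executing high-risk actions on its own. All names and thresholds below are illustrative assumptions, not something prescribed in the podcast:

```python
# Hypothetical illustration of a guardrail for agentic AI in finance:
# an approval gate that escalates high-value actions to a human reviewer
# instead of letting the agent execute them autonomously.
from dataclasses import dataclass

APPROVAL_THRESHOLD_USD = 10_000  # actions above this require human sign-off


@dataclass
class AgentAction:
    description: str
    amount_usd: float


def execute(action: AgentAction) -> None:
    print(f"Executed: {action.description} (${action.amount_usd:,.2f})")


def guarded_execute(action: AgentAction, human_approved: bool = False) -> None:
    """Run low-risk actions immediately; block everything else pending review."""
    if action.amount_usd > APPROVAL_THRESHOLD_USD and not human_approved:
        print(f"BLOCKED pending human review: {action.description}")
        return
    execute(action)


guarded_execute(AgentAction("Rebalance small test portfolio", 2_500.00))
guarded_execute(AgentAction("Wire transfer to new counterparty", 250_000.00))
guarded_execute(AgentAction("Wire transfer to new counterparty", 250_000.00),
                human_approved=True)
```

Real deployments layer many more controls on top of this (audit logs, scoped permissions, grounding checks), but the principle is the same: autonomy within limits, with a human in the loop where it counts.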