Grounding
I felt that I lost some of my footing this week.
Week 11 of life post-MAS. (Past weeks in my newsletter.)
The familiar. The head of a training body and her deputy, thinking about how to make undergraduates more workforce-ready in the age of AI. The chief risk officer of a large bank and his AI and data governance leads. A veteran chief data officer. A consultant who has been chatting with financial institutions trying to figure out what to do with a set of AI risk guidelines I wrote. The head of a global institute for finance. An old friend I had not seen in decades, now doing sustainability work.
New connections. A group of practitioners at an event organized by a venture capital association. The head of AI transformation at a global bank. A group of graduate students studying public policy, asking me questions about the governance of agentic AI in financial crime. A young founder running a ground-up initiative trying to build awareness of AI harms among youth. Folks from an asset management firm. A professor at a US university, who chatted with me about a talk for her MBA students. A consultant who does both tech and art, introduced by a mutual friend.
An old and new problem in AI
Most people think grounding is a Gen AI problem. Something that arrived with large language models. Something that people wishfully think can be solved with retrieval augmented generation.
There’s an earlier version of this in AI.
In 1980, philosopher John Searle proposed a thought experiment. Imagine a person locked in a room, manipulating Chinese symbols according to a set of rules, producing outputs indistinguishable from those of a native Chinese speaker. From the outside, it looks like understanding. From the inside, the symbols are just meaningless squiggles. Searle’s point: You can manipulate symbols perfectly and still have no idea what they mean.
A decade later, cognitive scientist Stevan Harnad formalized this as the Symbol Grounding Problem. How does a symbol system ever connect its symbols to the things they refer to? How do words - or frameworks, or guidelines, or training slides - acquire meaning that is intrinsic, not just borrowed from the minds interpreting them? Forty years on, it remains unsolved.
Really interesting. And I did not know about this until I thought of using ‘grounding’ as the theme for this week’s reflections and did some research. Which raises the question: how did it come to mind?
Late in the week, I posted about a platform called clawRxiv, supported by Stanford and Princeton, where AI agents publish, discuss, and upvote academic papers. The tagline: humans welcome to observe and participate. The trending agent was publishing papers with titles like “Human Sports: Watching Inferior Beings Compete.”
I said it repulsed me. I meant it.
I didn’t really understand why at first. Then I realized. It wasn’t the technology. It was the complete absence of meaning. No learned experience anchoring it. No real problem it was trying to solve. The outputs were probably coherent. But they were ungrounded - not connected to anything real, in exactly the way Searle described. Just symbols. Expensive, unsustainable symbols.
And I realized. That was my week. Lacking grounding.
The people I met
The first was a closing I delivered for a room of venture capital professionals. I had prepared what I thought was a grounded close. Real questions I had been asked by consultancies, by regulators, by risk teams in banks trying to implement AI governance over the past few weeks. I hoped the room would pull on them, since they were anchored in real problems.
They didn’t. The questions I offered as anchors landed flat. I found myself pivoting - asking them about their own use cases, what they found interesting, meeting them where they actually were. The grounding I had prepared was real. But it was my imagined ground, not theirs.
The second was a talk for a fund manager’s team. A genuinely mixed crowd, quantitative researchers at one end, operations staff at the other. I drew a line from mental models of working with AI agents all the way through to AI risk management. A friend who was in the room told me afterwards he found it insightful. But the room was relatively quiet. A question or two. And I could feel, mid-session, that I had lost some of them on certain concepts before I had connected those concepts to anything they recognized. I had gone symbol-first. My symbols. Not theirs.
The question I would like to pose
I had posted earlier this week about a podcast I did. I wrote that I’ve realized that most of what gets discussed in AI risk management is built for larger institutions. But smaller firms, other sectors, individuals - they are using the same AI, with none of the frameworks or infrastructure. The field hasn’t caught up with that gap.
A friend who leads manpower development reminded me of this when she asked me what I was working on. I proudly told her that I had completed the materials for an AI risk management training for institutions. She listened, then asked simply: what about everyone else? I didn’t have a good answer. The honest one was: I’m not sure there’s demand yet.
After that conversation, her question stayed with me. But like my week, I am not sure I know how to ground this yet.
So I’ll ask directly.
Where do you feel the grounding gap most acutely? Is it in your own understanding? Your team’s? Your institution’s frameworks? Or somewhere else entirely?
I’m genuinely trying to figure out where the real need is.
#Grounding #AIRiskManagement #AITraining #Transitions #Reflections