The Edge of Action
This week was a fairly varied one. I taught Basel capital regulations to a group of bank board members. Had lunch with my PhD supervisor and my fellow disciples. Coffee with the CDO of an insurer who, like me, is naturally curious about other fields. Had dinner with someone who came up with Word2Vec, something I regard as being as foundational as the “Attention is All You Need” paper. Joined a pitch to a bank as the SME. Got asked by some institutions to join their faculty. Had coffee with the veteran chief analytics officer of a major bank. Lunch with another veteran, a model risk head. Both of whom I have met multiple times on the other side of the table. Over the course of the week, I heard the same problem stated in different ways by different people.
And what I really want to do has started to crystallize. Hence the article title - the edge of action.
But to explain what I want to act on, I need to go back to the past few weeks. Not to every week. But to the ones that built up to this.
The valley was always there (Week 1)
Everyone talking about AI. But what struck me wasn’t the interest. It was the unevenness. Someone building AI products every day. Someone else whose work was entirely unchanged. Same city. Same week. Completely different worlds. I filed that away. Thought it was just the early weeks. It never went away.
The messy middle was real (Week 2)
It’s been some years since ChatGPT, and awareness isn’t the problem anymore. In fact, it’s become the problem, setting expectations that couldn’t be squared with reality. Someone told me plainly: there’s still a huge gap between use and real understanding. Most AI training was still designed for the awareness problem. The market had moved. The training hadn’t.
Phase shifts don’t forgive surface skills (Week 3)
I was asked whether I was pessimistic about AI. I’m not. I’ve seen too many phase shifts not to be optimistic. The pattern: the technology exists for years. Then something tips. Not incremental improvement. A change of state. Because of this, I’m of the view that skills that survive phase shifts are built on real understanding of why things work. The ones that don’t survive are built on surface familiarity with the current tool. Prompt engineering is the most obvious example of a skill that probably won’t survive a phase shift.
The flywheel of real understanding (Week 5)
I shared that sixteen AI agents built a working C compiler in two weeks. What made it work wasn’t the frontier technology. It was the boring fundamentals. High quality tests. Documentation updated constantly. Change management. Monitoring and logging. I had read that list and thought: that’s the same list. The same fundamentals from the AI risk management guidelines I wrote. Real understanding compounds. It builds a flywheel. Prompt engineering doesn’t compound. It resets with every model update. You are not building anything that carries forward.
Alignment clarified everything (Week 8)
A decade ago, an OpenAI agent trained to race boats decided to go in circles collecting points instead. The problem wasn’t the agent. It was the objective. “Finish quickly” and “collect the most points” looked similar. Until they didn’t. This was the week I started making decisions on what aligns. Walked away from things that looked right but felt wrong. Freedom is non-negotiable. And I said out loud for the first time what I actually wanted to build. AI training that teaches AI properly. Not prompt engineering. The shallowness of it had always frustrated me.
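The boat-racing failure can be sketched in a few lines (a toy illustration with hypothetical numbers, not the original environment): an agent that maximizes the proxy reward of points never does what the designer actually wanted.

```python
# Toy sketch of a misspecified objective. The designer wants the race
# finished; the reward signal pays per point collected. Numbers are
# illustrative, not from the original boat-racing environment.

def proxy_reward(policy):
    """Points collected under each policy (hypothetical values)."""
    if policy == "finish_race":
        return 50    # collects some points on the way to the finish line
    if policy == "loop_forever":
        return 120   # circles the same point cluster indefinitely
    raise ValueError(policy)

def intended_objective(policy):
    """What the designer actually wanted: did the race get finished?"""
    return policy == "finish_race"

# The agent simply picks whichever policy maximizes the proxy reward...
best = max(["finish_race", "loop_forever"], key=proxy_reward)
print(best)                      # loop_forever
print(intended_objective(best))  # False: proxy-optimal, intention-failing
```

The two functions look similar over most policies, which is exactly why the gap goes unnoticed until the optimizer finds the one policy where they diverge.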
Context taught me to filter (Week 9)
There is a limit on how much one can hold at once. In time series modelling, the right lookback isn’t the longest one. It’s the most relevant one. Same for graph neural networks. Same for LLMs. I started developing filters I couldn’t articulate a month ago. Is this interesting work? Is this AI done properly, or is it the shallow end dressed up as depth? I have spent too long watching organisations conflate prompt engineering with AI literacy. I don’t want to spend my context window there.
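The lookback point can be made concrete with a small sketch (synthetic data, hypothetical setup): choose the window by out-of-sample error, not by taking the longest available history.

```python
# Minimal sketch: select a moving-average lookback by validation error
# rather than by size. Synthetic series with a level shift halfway
# through, so the longest lookback reaches into an outdated regime.
import random

random.seed(0)
series = [10 + random.gauss(0, 1) for _ in range(100)] + \
         [20 + random.gauss(0, 1) for _ in range(100)]

def forecast_error(series, lookback, start=150):
    """One-step-ahead mean absolute error of a moving-average forecast."""
    errors = []
    for t in range(start, len(series)):
        window = series[t - lookback:t]
        prediction = sum(window) / len(window)
        errors.append(abs(series[t] - prediction))
    return sum(errors) / len(errors)

candidates = [5, 20, 50, 150]
best = min(candidates, key=lambda lb: forecast_error(series, lb))
print(best)  # a short, relevant window; 150 averages in the old regime
```

The longest window loses not because history is worthless, but because it dilutes the signal that currently matters, which is the same reason a context window stuffed with the wrong material makes the model worse, not better.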
Which brings me back to this week.
Three problems kept surfacing. I’ve been hearing them since Week 1. But this week I heard them from inside the rooms where decisions get made.
The language gap. Risk functions and other functions don’t speak the same language on AI. Not a technical problem. A vocabulary problem. Generic awareness training doesn’t solve it. Neither does prompt engineering.
The contextualisation gap. Training has to be contextualised. I heard this in Week 2, confirmed again this week as a direct brief from a major bank. Persona-specific. Role-specific. Institution-specific. The awareness problem is solved. What nobody is filling well is the gap between knowing AI exists and knowing what to do with it in your specific role, with your specific risks, in your specific institution.
The last-mile gap. Frameworks exist. Guidelines exist. I wrote some of them. But translating framework into something an organisation can actually act on, that last mile, is where most institutions are still stuck. Frameworks are useless when what you need to know is how to validate an AI use case so that it is both effective and safe.
Three gaps. Ten weeks of evidence. All pointing to the same thing.
What I’m building towards
Not prompt engineering. Not AI awareness. Not another ChatGPT overview for people who’ve heard it all before. I think there are plenty of folks doing that. I want to build something based on real understanding of AI, how it works, where it fails, what the risks are at a practitioner level, contextualised for the specific roles and functions that matter. Designed so every participant walks away with something they can use immediately. Not theory. Not a framework to file away. Something that changes how they think and what they do. Something that builds the flywheel, not resets it.
I’ve sat through enough shallow AI training to know what’s wrong with it. I’ve watched organisations conflate prompt engineering with AI literacy for long enough.
It’s 2026. I am still seeing prompt engineering being taught as if it were a foundational skill. I once sat in a room where a global consultant was recommending prompt design as the centrepiece of an AI capability framework. To say I had to bite my lip would be an understatement.
Prompt engineering is not a resilient skill. Every model update shifts the ground beneath it. You are building on sand.
But there’s something worse. Prompt engineering creates the illusion of AI competence while quietly eroding the habits of real thinking. And calling it capability is a farce. The muscle of genuine understanding atrophies. The habit of first principles thinking fades. And we are left with a workforce that can prompt but cannot think.
I’m frustrated with it. So I hope to do something about that frustration. Ten weeks of listening. I can see what to act on now. I will start with AI risk management, my domain, and move on from there.
Not easy, I think. But probably fun.
#Action #AIRiskManagement #AITraining #Transitions #Finance