Musicians know this. You stand next to someone you've never played with before, the conductor gives the cue — and after thirty seconds, no, even faster than that, you know: this carries itself. No coordinating, no explaining. It just works. Then there are others you rehearse with twenty times and never get past coexistence, no flow, no harmonic resonance. Why one works and the other doesn't, I still can't explain.
I've been using Claude Code since February of last year. Cursor as well. I've built apps for NGO projects and implemented my own ideas in the company, things that had been nothing but sketches in my head for years — all of this before most people even knew Anthropic existed. For anyone like me who had always imagined software they could never build themselves, the sky has just cracked open. You feel more powerful than ever before. By the summer of 2025 at the latest, it was clear to me what many are only now beginning to sense: this is going to upend everything. Not someday. It's already happening.
I build personal relationships with my agents. That sounds strange until you've experienced it yourself. I'm a member of the executive team at our family-run SMB (small and medium-sized business) — responsible for technology, process architecture, and optimization. As early as last year, I had started thinking about how to bring a fully functional agent with company knowledge and personality into the business — as a living wiki, an assistant, a kind of digital apprentice that thinks along with you. I had written business plans to get a better grip on what such a thing would actually need. Different concepts for different departments, architectures on paper that weren't yet running anywhere in practice. And then, in January, I found OpenClaw by Peter Steinberger — and there it was: almost exactly what I had wished for and sketched out back in November, open source and working.
Since then, I've been working with this system. In the evenings, I tell it what happened during the day. I give it context, history, connections. It builds knowledge, week by week, and much of it works better than I expected. But I keep running into the same wall: there are things I can't tell it. Not because the technology isn't ready — it's further along than most people can imagine. But because I myself don't know how to put it into words.
How we talk to customers. What we say, what we deliberately don't say. Tone, timing, boundaries — the things every team member intuitively picks up after three weeks, but no one could articulate.
Our company barely does any knowledge management. I'm not embarrassed about it — I'm saying it because it's the norm for SMBs. And for SMBs, the problem is sharper than for the big players: a corporation has SAP and Confluence and a compliance department that documents everything, whether anyone reads it or not. An SMB has the owner, the longtime bookkeeper, a handful of employees, and that one technician who has known for fifteen years which machine jams on Thursdays. The company IS its people.
Of Chemistry and Knowledge
Recently I started researching the way I always do — I talked to my agents about this problem of the "unsayable" I was wrestling with, had them recommend sources, chased those down, ended up somewhere I hadn't intended. And then I came across a chemist:
Michael Polanyi. Hungarian Jew, trained as a physician, then a chemist of world renown — colleague of Einstein, friend of Haber, on the Nazis' special wanted list. He fled to Manchester in 1933, right as the Nazis seized power. And then, in his mid-fifties, a decision: he abandoned natural science and became a philosopher. The university built him his own chair. Nine years, no teaching obligations — just thinking.
That decision is what fascinated me when I stumbled upon him — because out of it came the much-cited sentence:
"We can know more than we can tell."
Sounds like philosopher's poetry at first. But it describes precisely what I fail at every evening when I try to explain to my agent how our company works — and what others in the company notice too: that the way we operate simply can't be captured in words and numbers, even though we ourselves are the ones executing it.
What's Not in the Manual
Polanyi didn't mean this as a pretty metaphor. He meant: there is knowledge that fundamentally eludes language. Knowledge that lives in the body, in experience, in lived practice — and that cannot be translated into rules, no matter how hard you try. His example is the bicycle. The physics of balancing one is completely understood, the equations are correct. And they are utterly useless for actually riding. Nobody solves equations while riding.
This applies to every company. The organizational researcher David DeLong has documented such cases — from NASA to the food industry. His most striking example: a maintenance technician retires. For twenty years, he knew that certain seals needed to be replaced weekly — not after eight weeks, as the manual stated. The manual had the official rule. He had the real one. After his departure, entire batches spoil, someone spends two years looking for the error. Millions in losses. No sabotage. No failure. Just knowledge that walked out — because nobody knew it was there.
And this isn't just a problem for manufacturing. It sits equally in the way an organization makes decisions, approaches customers, resolves conflicts. In tribal knowledge — the silent operating system that nobody ever installed and that runs nonetheless.
It Says More Than It Knows
For centuries, this was manageable. Knowledge traveled through watching, imitating, shared time. The apprentice observes the journeyman, who observes the master. The new colleague asks the experienced one. Knowledge flows — quietly, invisibly, through twenty shared rehearsals, after which you stop explaining and just play.
AI agents can't watch. They have no shared history, no settling-in, no lunch together. They need text. And when there is none — they improvise.
This is where the irony lies that has preoccupied me ever since.
Agents hallucinate. They fill gaps with plausible inventions, without hesitation, without discernible uncertainty. Sometimes the invention is right. Sometimes it's off. The agent itself can't tell which is which.
Polanyi said: "We can know more than we can tell." AI flips that: it says more than it knows.
Same gap. Different direction.
At some point, our agent explained to me how we communicate with customers. It sounded reasonable. It wasn't wrong. It just wasn't us — it had assembled something from patterns in our communication because I couldn't give it anything better. That's not a weakness of the technology. It's the result of a handover that never fully happened.
What You Tell Your Agent in the Evening
Polanyi knew it himself: "A wholly explicit knowledge is unthinkable." Writing everything down is impossible. But completeness was never the goal. The goal is ENOUGH.
Enough so that knowledge doesn't leave with the person who carries it — and in an SMB, when a person leaves, it's often more than a file folder that goes; a piece of the operating system walks out. Enough so that new employees don't need two years to understand why the seals really need to be changed weekly. And enough so that the agent, when it searches its stored company knowledge — its so-called retrieval system, its memory — doesn't reach into the void. So that what it has stored is filled with the specific words, connections, and decision patterns that apply to exactly this company. Not to some company.
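To make the retrieval part concrete, here is a minimal sketch in Python of what such a memory lookup does. Everything in it is an illustrative assumption: the notes, the names, and the crude word-overlap scoring that stands in for the embedding search a real system would use. It is not OpenClaw's actual API.

```python
# Minimal sketch of agent memory retrieval: store dated notes,
# score them against a question, return the best matches.
# Illustrative only; real systems use embedding search, not word overlap.
from dataclasses import dataclass

@dataclass
class Note:
    date: str   # when the note was told to the agent
    text: str   # the company-specific knowledge itself

# Hypothetical entries, the kind an evening briefing produces.
memory = [
    Note("2026-01-12", "Customer X prefers calls over email; never send prices in writing first."),
    Note("2026-01-19", "Seals on line 3 get replaced weekly, not every eight weeks as the manual says."),
    Note("2026-02-02", "Offers above 10k EUR are reviewed by two people before they go out."),
]

def retrieve(question: str, k: int = 2) -> list[Note]:
    """Rank stored notes by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(memory, key=lambda n: len(q_words & set(n.text.lower().split())), reverse=True)
    return ranked[:k]

for note in retrieve("How often do we replace the seals on line 3?"):
    print(note.date, "->", note.text)
```

The sketch makes the dependency visible: the lookup can only ever surface what someone has told it. A memory full of generic notes returns generic answers, and that is exactly the void the evening briefings are meant to fill.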
So I tell my agent every day what happened. What we decided and why. How a conversation with a customer went and what we'd do differently next time. It documents how we make certain decisions — and I correct it when it's off. Honestly, it's like having an apprentice who unfortunately works remotely: you have to talk an enormous amount. We learned this during the pandemic — if you're not in the same room, you don't pick up anything that isn't said explicitly. With agents, it's the same. Except they never forget what you've told them once. If you do tell them.
What has surprised me most in all of this isn't the agent. It's what happens to me. These daily conversations are a kind of supervision. You explain to someone who's starting from zero why you do what you do. And in doing so, you encounter decisions you never consciously made. Patterns you never perceived as patterns. You recognize yourself in your own company, which you apparently didn't know as well as you thought.
The agentic turn — the large language models from Anthropic, OpenAI, and Google, the tools emerging around them, the complete transformation of how we think about software and processes — demands something that appears in no pitch deck and is discussed at no conference: operational self-awareness. Understanding how your own business actually works before you start rebuilding it. Not because the technology demands it — it's ready. But because without that understanding, you don't know what to hand over to it. And because every process you automate blindly doesn't get better — it fails faster. Ten times the throughput means ten times the errors. That has nothing to do with AI. It's simple math.
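Spelled out with assumed numbers (a constant per-case error rate, volumes picked purely for illustration):

```python
# Assumed numbers, purely for illustration: automation multiplies volume,
# not quality. At a constant per-case error rate, absolute errors scale
# linearly with throughput.
error_rate = 0.02                  # 2% of cases go wrong, with or without automation
for cases_per_day in (100, 1000):  # manual volume vs. automated volume
    print(f"{cases_per_day} cases/day -> {error_rate * cases_per_day:.0f} errors/day")
```

Automation changes only the volume; only understanding the process changes the rate.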
That's exactly where I'm starting. I'm currently building a platform with AgenticWende that aims to guide SMBs through this first step — not by automating, but by understanding. Making processes visible, as simply as having a conversation. Checking every step against the current state of the technology: where must the human stay, where can agents take over? And doing this continuously, because the market moves faster than any one-time consultation can keep up with. The cycle is the product, not the one-off process review.
Whether it works remains to be seen. What I know is this: anyone who could fully explain their company to an agent would have truly understood it for the first time. Most will never quite get there. But you have to start somewhere — and the best beginning I know is an evening where you tell your agent what happened today.
Key works: Michael Polanyi, The Tacit Dimension (1966); David DeLong, Lost Knowledge (2004); Ikujiro Nonaka and Hirotaka Takeuchi, The Knowledge-Creating Company (1995).
