
From apprentice to algorithm: the story of trust to be built

3 March 2026, by Ramcaly Tech, David Cresson

Introduction: the anxiety of letting go

A specter haunts the debate on Artificial Intelligence: the loss of control.

We are told that we are delegating more and more decisions to opaque systems. That no one, not even their creators, really understands how a large language model arrives at a particular answer. That we are handing over the keys to a machine whose inner workings we cannot inspect. The argument is powerful, visceral even. It touches on something deeply human: the fear of no longer being master in our own house.

But before succumbing to this anxiety, we need to ask ourselves an uncomfortable question: have we ever truly been in control?

Because if we pull on that thread, we quickly discover that the "loss of control" attributed to AI is not a rupture. It is the latest chapter in a very old story, a story that began the first time a human being entrusted a task to someone else.

1. The original delegation: trusting another mind

The craftsman and the apprentice

The very notion of delegation implies a form of relinquishment of control. When a Roman architect entrusted the cutting of stones to a team of masons, he could not check every chisel stroke. When a medieval merchant sent a ship loaded with spices to a distant port, he had to trust his captain, his navigator, and the integrity of his crew.

In each case, the one who delegates accepts a fundamental asymmetry: he no longer controls the details of the execution. The architect does not know whether the mason's gesture was precise; the merchant does not know whether the captain has chosen the best route. They accept a gap between their intention and the reality of what is executed.

This gap is not a defect. It is the very condition of any collective achievement. No cathedral was ever built by a single person who controlled every stone. No empire was ever administered by a single mind that understood every provincial decision. Civilization itself rests on our ability to delegate and therefore to accept a form of ignorance about the details of execution.

The safeguards of trust

But, and this is crucial, humanity has never delegated blindly. It has always built mechanisms to manage the uncertainty inherent in delegation.

The guild system, for example, was not just an economic arrangement. It was a trusted infrastructure. The apprentice spent years under the watchful eye of a master, who checked his work, corrected his mistakes and gradually expanded his autonomy. The "masterpiece", the work that gave the title of master craftsman, was an evaluation protocol: a formal proof of competence, judged by peers.

Contracts, seals, notaries, accounting standards, audits: all these institutions are, at their core, mechanisms for maintaining trust in delegation. They do not eliminate uncertainty; they manage it. They do not give the delegator complete control; they give him sufficient assurance that the result will be acceptable.

This distinction is crucial: control and trust are not the same thing. We rarely have complete control. What we need, and what we build, is justified trust.

2. The IT revolution: when complexity outgrew the individual mind

From matter to code

Humanity did not wait for the computer to build systems that exceed the comprehension of a single mind. The cathedrals, suspension bridges and metropolitan networks of the early twentieth century were already feats of collective engineering, requiring vast chains of trust and rigorous controls at every stage.

However, the advent of complex software systems marked a decisive turning point of a different nature: we moved from material complexity to invisible complexity.

In the physical world, even when a project is gigantic, its components remain tangible. A civil engineer can inspect a weld, measure the tension of a cable, or test the strength of a concrete pillar. But how do you inspect a purely logical infrastructure?

When the first computer programs were written in the 1950s, a single programmer could still hold the entire logic in his head. He acted as the lone craftsman of this new digital world. He understood every instruction, every branch.

That era ended decades ago. A modern operating system, a banking platform, or the flight management system of an Airbus A380 contains tens of millions of lines of code. No human mind can visualize it in its entirety or predict all its possible states. Complexity has become abstract, moving and totally opaque to the naked eye.

And yet, we have not lost trust

Faced with this invisibility, have we "lost control"? In terms of complete individual understanding, yes. But in terms of controlling results, no. Planes fly, automated metros run, and hospitals manage critical drug dosages thanks to this invisible code.

We accept these systems not because any individual understands them from A to Z, but because the software industry has invented mechanisms to audit the immaterial. The master mason's visual inspection has been replaced by a layered system of logical assurance.

Critical software is specified, peer-reviewed, automatically tested against thousands of scenarios, audited by static analysis tools, and monitored in real-time in production. None of these mechanisms require a single person to "understand everything." They work because they distribute the burden of proof and mechanically verify what humans can no longer see.
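As a sketch of what "mechanically verifying what humans can no longer see" can look like, here is a toy harness in Python that sweeps a function across hundreds of scenarios and asserts a safety invariant on each one. The function `safe_dose`, its 100 mg ceiling, and the scenario grid are hypothetical illustrations, not a real medical rule:

```python
# A toy "mechanical verification" harness: no single reviewer reads
# every case; the machine checks hundreds of scenarios instead.
# `safe_dose` and its 100 mg ceiling are hypothetical illustrations.

def safe_dose(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Compute a weight-based dose, capped at a hard safety ceiling."""
    return min(weight_kg * mg_per_kg, max_mg)

def audit_safe_dose() -> int:
    """Sweep a grid of scenarios and assert the safety invariant on each."""
    checked = 0
    for weight in range(1, 200):            # 1..199 kg
        for rate in (0.1, 0.5, 1.0, 2.5):   # mg per kg
            dose = safe_dose(weight, rate, max_mg=100.0)
            assert 0.0 <= dose <= 100.0, f"invariant violated at {weight} kg"
            checked += 1
    return checked

print(audit_safe_dose())  # → 796 scenarios verified, none read by a human
```

The point is not the arithmetic but the division of labor: a human states the invariant once; the machine checks it everywhere.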

3. The architect's analogy: developing with a human or with an AI

The same problem, the same solution

Let's make this concrete with an analogy that will resonate with anyone who has worked in software development.

Let's imagine a software architect who needs to build a complex business application. He has two options: delegate the development to a human team, or delegate it to an AI development assistant.

Consider the first case, the human team: does the architect "control" the code? In any significant sense, no. He writes specifications, but he does not write every line himself. He trusts developers who make thousands of micro-decisions he will never see: variable names, error-handling strategies, performance trade-offs.

To manage this uncertainty, the architect does not try to micro-manage everything. Instead, he deploys a defense-in-depth system. Specifications seal the contract between intent and execution. Peer review provides a critical outside eye, while automated tests and static analysis tools act as a safety net, hunting down the flaws and regressions the human eye might miss. Finally, documentation and production monitoring ensure that the system remains understandable and verifiable over time. This is not absolute control; it is an ecosystem of instrumented trust.
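This defense-in-depth can be sketched as code. The gates below are toy stand-ins, invented for illustration, for the real tools the architect relies on (the compiler, the test suite, a static analyzer); the point is only that acceptance depends on a chain of independent checks, not on one mind understanding everything:

```python
# Defense-in-depth as a chain of independent gates: delegated code is
# accepted only if every gate passes. Each gate is a toy stand-in for
# a real tool (compiler, test suite, static analyzer).
import ast

def gate_parses(source: str) -> bool:
    """Stand-in for the compiler: does the code even parse?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def gate_no_eval(source: str) -> bool:
    """Stand-in for a static analyzer: forbid one dangerous construct."""
    return "eval(" not in source

def accept(source: str) -> bool:
    """No single gate understands everything; together they decide."""
    return all(gate(source) for gate in (gate_parses, gate_no_eval))

print(accept("total = price * quantity"))   # True: passes both gates
print(accept("result = eval(user_input)"))  # False: fails the static gate
```

Each gate is narrow and fallible on its own; the trust comes from the stack, which is exactly how the human version of the ecosystem works.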

Now consider the second case: the architect delegates to an AI. The code is generated by a system whose internal reasoning is opaque. The architect doesn't know why the AI chose a particular data structure.

This seems alarming, but only until we realize that the situation is structurally identical to the first case. The architect does not need to understand the weights and biases of the neural network, any more than he needs to understand the synapses of a human developer. What he needs is the same verification ecosystem. The fundamental challenge of bridging the gap between intention and execution remains the same.

Human error versus "alien error"

If there is indeed a break, it is not in the principle of delegation, but in the nature of the error produced. And this is where our vigilance must adapt.

A human developer makes mistakes for understandable reasons: fatigue, inattention, cognitive bias, or misinterpretation of a business rule. His mistake has a logic, a human trace.

AI, on the other hand, generates what could be called an "alien error". Because it operates on statistical probabilities, predicting the most plausible sequence of symbols rather than drawing on a physical or logical understanding of the world, an AI can produce code that is syntactically perfect, superficially brilliant, and semantically absurd. It lacks the implicit "common sense" that holds a human back from an obvious aberration. It can invent a software library that does not exist, or assemble concepts in chimerical ways, with the same aplomb with which it codes a standard function.

These are challenges of a new kind. An experienced architect who receives code from a human knows which classic mistakes to look for. Faced with AI, he must learn to track down the fluent hallucination: a structural error dressed up in formal perfection.
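One concrete safeguard against the invented-library failure mode can be sketched in a few lines of Python, under the assumption that the generated code is itself Python: before running it, statically check that every module it imports actually exists in the environment. The module names in the demonstration are made up:

```python
# A minimal check for one "alien error": AI-generated code that imports
# a library which does not exist. We parse the source without running it
# and verify each top-level imported module can actually be found.
import ast
import importlib.util

def missing_imports(source: str) -> list[str]:
    """Return top-level module names imported in `source` that cannot
    be found in the current environment."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return sorted(m for m in modules
                  if importlib.util.find_spec(m) is None)

generated = "import json\nimport totally_imaginary_helpers\n"
print(missing_imports(generated))  # → ['totally_imaginary_helpers']
```

This is one narrow gate among many, but it targets precisely the error a fluent, confident surface can hide from a tired reviewer.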

The skill of verification evolves. Safeguards must be tightened, automated tests made even more ruthless in the face of these new types of flaws. But the principle remains constant: adjust our verification processes, not abandon the entire framework.

4. The real danger: trusting without verifying

The paradox of poise

If the loss of control is not truly new, where does the real danger lie?

It does not lie in the delegation itself, but in our attitude towards it. More precisely, it lies in the unwavering aplomb with which AI delivers its output.

When a human colleague gives us an answer, we calibrate our trust based on a lifetime of social cues. We consider his expertise, his history, his hesitations, his body language. We know that the intern's peremptory assertion carries less weight than the senior expert's measured caution.

AI offers none of these signals of doubt. Its responses are uniformly polished, structured and definitive. A brilliant deduction and a total hallucination arrive in exactly the same tone, with the same apparent authority. This is what makes AI qualitatively different, not as a form of delegation, but in its ability to persuade, almost unintentionally.

The danger is not that we delegate to a system that we do not understand. We always have done that. The danger is that we delegate to a system that has the perfect poise of an infallible expert, and that we can, out of convenience or cognitive laziness, skip the verification steps that have always been the true guarantors of reliability.

The illusion of acquired understanding

This paradox of poise creates a second trap, just as formidable: the illusion of understanding.

When a human colleague grasps a complex concept in a meeting, we know that this understanding is acquired. If he succeeds brilliantly in the first step of a reasoning, there is very little chance that he will do exactly the opposite in the next step. With AI, we reflexively project this same cognitive continuity.

A first answer that is relevant, insightful and perfectly executed gives us a false sense of security. We tell ourselves that the machine "has understood" our vision or our project. Unconsciously, our vigilance decreases. We skim the code or text generated in the following prompts, convinced that the tool is now "on the right track".

This is a fundamental error about the nature of the tool. AI does not have a persistent mental state. It doesn't "understand"; it predicts. With each new request, it statistically recalculates the most likely continuation. So it can produce a masterpiece of logic at 10:00 a.m., and absurdly contradict itself or forget its own premises at 10:01 a.m., while keeping exactly the same assurance. This inconsistency, masked by a form that is always perfect, pushes us to let our guard down at the precise moment when the system demands constant attention.

The space rocket analogy

Space exploration offers the most absolute illustration of this principle of verified trust.

When an astronaut settles into the capsule of a rocket, he literally entrusts his life to a machine whose complexity exceeds the comprehension capabilities of a single human mind. The design of the launcher, trajectory algorithms and life support systems mobilized thousands of hyper-specialized engineers.

The astronaut does not know the rocket's design down to the smallest detail, nor the exact molecular composition of each alloy. Why, then, does he agree to take off? Because he has unwavering confidence in the evaluation protocol. He knows that each component has been tested individually, then tested once assembled, up to the overall validation of the vehicle. He accepts the technical opacity of the micro-details in exchange for total transparency about the safeguards.

The real danger of Artificial Intelligence lies in our unconscious refusal to apply this same rigor.

Because AI presents itself in the harmless and familiar guise of a conversation, it masks its immense complexity. We deploy opaque algorithmic systems with the casualness of someone hopping on a bike, blithely skipping the verification steps, when we should be treating them with the procedural discipline of a space launch.

AI does not require us all to become engineers who can inspect the mathematical parameters of the language model. It requires that we remain rigorous professionals, refusing to get a project off the ground until the evaluation protocols have validated the result.

Where we fail

And this is where our current relationship with AI is truly concerning. Not because technology is inherently uncontrollable, but because many of us are not yet applying the verification disciplines that technology demands.

A student who submits an AI-generated essay without verifying the claims is not a victim of a "loss of control". He is someone who skipped the proofreading stage. A company that deploys an AI-driven decision-making system without testing it against edge cases does not face an unprecedented philosophical dilemma. It makes the same mistake as a company deploying untested software, a mistake we've known how to avoid for decades.

The toolbox exists. Specifications, reviews, tests, documentation, monitoring: these are not new inventions. They are established disciplines that need to be adapted to a new context, not reinvented from scratch.

Conclusion: control is a practice, not a state

The story of the "loss of control" is comforting in its simplicity. It places us as passive victims of an inscrutable technology. But it is historically inaccurate and, worse, it is paralyzing.

Humanity has never operated from a position of total control. Every significant achievement in our history has required delegation, and delegation has always meant accepting a gap between our intentions and the details of their implementation. What has changed over the centuries is not the size of this gap (it has always been vast) but the sophistication of the mechanisms we have built to bridge it.

From the guild master inspecting an apprentice's work, to the reviewer scrutinizing a pull request, to the validation of a critical system, the principle is invariant: we don't need to understand every detail of the process to trust the result. We need to check the result itself.

AI does not break this principle. It reinforces the urgency of it.

The real question is not whether we are losing control. It is whether we are rigorous enough in building the trust frameworks that any powerful tool requires. The architect who receives code from an AI must apply the same discipline as the architect who receives code from a developer: specify clearly, proofread rigorously, test relentlessly, document systematically, continuously monitor.

The loss of control is not a technological inevitability. It is a human choice, the choice to trust without checking, to accept without questioning, to delegate without ensuring follow-up. And this is a choice we can refuse to make.

The tool has changed. The discipline must not only persist; it must strengthen.
