The Marty McFly Problem
There is a scene in Back to the Future Part III where Marty McFly picks up a gun at a shooting gallery and fires with startling accuracy. He has never drawn on a man. He has never trained under pressure. But he has logged serious hours on a video game called “Wild Gunman,” and his hands know what to do. The crowd is impressed. So is Marty.
It is a great scene. It is also a terrible talent development model.
I keep thinking about it when I watch organizations hand AI tools to early-career professionals and call it a training program. The logic follows the same shape: exposure to a simulation produces capability; the simulation is increasingly realistic; therefore the capability is real. But the scene plays for laughs precisely because everyone watching knows the gap between arcade reflexes and the genuine article. Marty eventually has to face down a real gunfighter, and the stakes are not pixels.
The Confusion Between Output and Competence
The problem is not that AI tools are unhelpful. They are genuinely useful, and any organization that refuses to integrate them is making a different kind of mistake. The problem is the conflation of assisted output with developed skill. When a junior analyst uses a language model to structure an argument, the deliverable looks like the work of someone who can structure arguments. When they use it to summarize a dataset, the output looks like the work of someone who understands data. The organization sees the output, evaluates it positively, and draws the wrong conclusion about what capability now exists in the room.
This is not a new failure mode. It rhymes with every generation of tooling that promised to close the gap between novice and expert. Spreadsheets did not produce financial analysts. Search engines did not produce researchers. Both tools made certain tasks faster for people who already understood the underlying domain. For people who did not, they produced a convincing imitation of understanding, durable only until the moment the tool failed or the question got hard.
What AI tools have done is raise the ceiling on that imitation considerably. The output is more polished. The gap is harder to see from the outside, and sometimes from the inside. A junior employee who genuinely cannot assess the quality of a model-generated analysis has no reliable way to know when they are holding something sound versus something that reads well and is wrong. That is a liability the organization is carrying, usually without naming it.
What Foundation Actually Means
Foundation is not familiarity with concepts. It is the accumulation of decisions made under conditions where the feedback was real and the consequences were yours. A finance professional who has rebuilt a broken model at midnight before a board meeting understands something about model construction that no simulation teaches. An operations leader who has managed a vendor relationship through a failure knows how to read the early signals that a relationship is deteriorating. That knowledge is not in the documentation. It lives in pattern recognition built from experience that had stakes.
The early-career years are the window when that foundation gets laid. It is the period when the organization should be deliberately engineering exposure to hard problems, to failure with limited blast radius, to feedback loops that are honest rather than diplomatic. It is when junior professionals should be doing things badly enough, often enough, to develop genuine judgment about what good looks like. That process is slow and uncomfortable and irreplaceable.
When AI tools absorb the friction of that period, they do not accelerate development. They defer it. The organization ends up with a cohort that has produced a lot of polished work and developed very little of the judgment required to produce polished work without assistance. The risk surfaces later, in the moments that require independent reasoning under pressure, which is precisely when leadership needs the bench depth to be real.
The Leadership Decision
This is not an argument for withholding useful tools. It is an argument for being precise about what the organization is actually building. There are two investments an organization can make in early-career talent, and they produce different outcomes over a five-year horizon.
The first invests in throughput. Junior professionals are equipped with every available tool, output volume rises, and short-term delivery capacity improves. The organization looks more productive. The hidden cost is that the talent pipeline is being filled with people who are dependent on instruments they cannot critically evaluate, in roles that will eventually require them to exercise judgment the instruments cannot provide.
The second invests in capability. Tools are available, but the organization also engineers deliberate exposure to problems that require unassisted thinking, to feedback that is specific and honest, and to failure early enough that it builds rather than breaks. Throughput in the short term is modestly lower. The pipeline in year four is substantially stronger.
Most organizations are defaulting to the first without deciding to. The tools are adopted because the output looks better and the productivity numbers are real. Nobody is making an explicit choice to defer capability development. It is happening as a consequence of optimizing for something visible over something that matters more.
The question worth bringing to a leadership team is not whether to use AI tools. That decision is largely made. The question is whether the organization has a deliberate point of view on what it is building in its people, and whether the current environment is actually producing that. Marty got lucky. He had Doc Brown, a time machine, and a screenplay. The organizations betting that tool fluency will mature into genuine judgment are carrying a different kind of risk, and the reckoning does not come with a third act.