What I Believe About AI
I get asked about my position on AI often enough that it is worth writing down clearly.
I am not a skeptic. I am not a booster. I am a practitioner who has done the work firsthand and formed some hard opinions along the way. Here is where I stand.
AI is the finish line, not the starting line
Every serious AI initiative I have seen succeed was built on top of something that already worked: governed data, clear ownership, trusted processes, and a team that understood what it was automating and why.
Every serious AI initiative I have seen fail was built on top of disorder. The AI did not fix the disorder. It reflected it, faster and at greater scale.
The sequence is not optional. You get the data right. You govern it. You establish that humans can make sound decisions with it. Then you introduce AI to amplify that capability. Skip a step and you are not accelerating. You are compounding the problem you were trying to solve.
If you cannot extract your data, you do not own it
This is a governance principle that predates AI and applies with even more force now.
Data trapped inside vendor platforms is not your data in any meaningful sense. You can see it through dashboards. You can run reports. But if you cannot move it, analyze it, and govern it independently of that vendor’s roadmap and terms of service, you are a tenant, not an owner.
Before any AI conversation, the question I ask is simple: can we extract our data from every system we rely on, on our own terms? If the answer is no, that is the first problem to solve. Everything else follows from there.
AI amplifies what is already there, good and bad
AI is not a substitute for organizational competence. It is a multiplier.
Give it a clean, governed, well-understood process and it will make that process faster and more capable. Give it a broken, ungoverned, poorly understood process and it will automate the confusion and generate confident-sounding outputs built on a bad foundation.
This is why the human performance baseline matters. Before AI takes over any process, I want proof that people can run that process successfully with the data they have. If they cannot, AI is not the answer to that problem.
The gateway is how we do this right
The right infrastructure for enterprise AI is a governed platform layer that sits between users and models. A single point through which all AI capability flows, regardless of what is running underneath.
This gives the organization three things it cannot get from a collection of individually adopted tools: a consistent user experience, intelligent routing across models to manage cost and performance, and a single point of governance, auditing, and access control.
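The routing-and-governance idea above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the `ModelGateway` class, the length-based routing rule, and the model names are all hypothetical assumptions standing in for whatever models and routing policy an organization actually runs.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModelGateway:
    # Map of model name -> callable that answers a prompt.
    # In practice these would be clients for the models running underneath.
    backends: dict[str, Callable[[str], str]]
    audit_log: list[dict] = field(default_factory=list)

    def route(self, prompt: str) -> str:
        # Toy routing rule: long prompts go to the larger (costlier) model,
        # short ones to the cheaper model, to manage cost and performance.
        return "large" if len(prompt) > 200 else "small"

    def ask(self, user: str, prompt: str) -> str:
        model = self.route(prompt)
        answer = self.backends[model](prompt)
        # Single point of governance: every request that flows through
        # the gateway is recorded here, regardless of which model served it.
        self.audit_log.append({"user": user, "model": model, "prompt": prompt})
        return answer

# Stub backends so the sketch is self-contained.
gateway = ModelGateway(backends={
    "small": lambda p: f"[small] {p}",
    "large": lambda p: f"[large] {p}",
})
reply = gateway.ask("alice", "Summarize the Q3 report")
```

The point is structural, not the routing rule itself: because every call passes through one object, auditing and access control live in one place instead of being re-implemented per tool.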
The default posture for that platform is on-premises. Not because cloud is wrong, but because sovereignty matters. We own the stack unless there is a specific, reasoned, documented case that we cannot. Cloud is an exception, not the default.
The genie is already out of the bottle
People in your organization are using AI right now. Not because IT approved it. Because it works and it is available. That is not going to stop.
The response to that reality is not to look away and it is not to issue a blanket prohibition that will be routed around in a week. The response is a bridging strategy: identify what is in use, evaluate it against a defined baseline, authorize a short list of managed tools, set clear guidance on appropriate use, and treat the whole thing as a temporary posture while the proper platform gets built.
Ignoring current usage is not governance. It is uninformed exposure with a policy veneer.

Governance before scale, every time
Guardrails are not a constraint on innovation. They are what makes innovation sustainable.
I have watched organizations move fast with AI and create problems that took far longer to clean up than the time they saved. Bias baked into decisions. Sensitive data leaked through unsanctioned tools. Outputs presented as authoritative that were generated without any review. None of this was malicious. All of it was avoidable.
Build the audit trail before you need it. Define access controls before something goes wrong. Be transparent about where AI influences decisions before someone asks and you cannot answer. The governance infrastructure has to go in at the start, not in response to damage.
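What "build the audit trail before you need it" can mean in practice is a structured record of every AI-influenced decision, written at the moment the decision is made. A minimal sketch follows; the function name, the fields, and the in-memory list are illustrative assumptions, not a prescribed schema.

```python
import time

# In a real system this would be durable, append-only storage;
# an in-memory list keeps the sketch self-contained.
audit_trail: list[dict] = []

def record_ai_decision(actor: str, tool: str, decision: str,
                       human_reviewed: bool) -> dict:
    # One structured entry per decision: who acted, which AI tool
    # influenced the outcome, what was decided, and whether a human
    # reviewed the output before it was used.
    entry = {
        "ts": time.time(),
        "actor": actor,
        "tool": tool,
        "decision": decision,
        "human_reviewed": human_reviewed,
    }
    audit_trail.append(entry)
    return entry

entry = record_ai_decision("alice", "contract-summarizer",
                           "approved vendor renewal", human_reviewed=True)
```

The useful property is that the answer to "where did AI influence this decision, and did anyone check it?" exists before anyone asks, rather than being reconstructed after something goes wrong.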
This is augmentation, not replacement
AI handles repetition, synthesis, and first drafts at scale. Humans supply context, judgment, ethics, and accountability. The leverage is in the combination.
I am deeply skeptical of any AI strategy whose primary financial justification is headcount reduction. Not because efficiency does not matter, but because organizations that hollow out human capability in pursuit of short-term cost savings tend to discover, too late, that they automated the easy parts and lost the people who understood the hard parts.
Build AI into your organization to make your people more capable. Not to replace them before you understand what you are losing.
Begin now, build deliberately
The urgency is real. The organizations building a serious AI foundation today will have compounding advantages over the next two to three years, and that gap cannot be closed quickly. Waiting is a strategic choice with a cost attached to it.
But beginning now and building carelessly are not the same thing. The foundation has to be stable before anything of consequence runs on top of it. A careful build this year carries more weight in year three than a fast build that has to be unwound and redone.
Start. Go slow enough to get it right. Do not confuse motion with progress.
That is where I stand. Not complicated. Not hedged. Just what I have seen work and what I have seen fail, stated plainly.