<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://peter.zaffina.net/feed.xml" rel="self" type="application/atom+xml" /><link href="https://peter.zaffina.net/" rel="alternate" type="text/html" /><updated>2026-04-03T22:28:12+00:00</updated><id>https://peter.zaffina.net/feed.xml</id><title type="html">Peter Zaffina</title><subtitle>I work with people to connect them with their data.</subtitle><entry><title type="html">Support Is a Structure, Not a Feeling</title><link href="https://peter.zaffina.net/blog/New-Outlook-Process/" rel="alternate" type="text/html" title="Support Is a Structure, Not a Feeling" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/New-Outlook-Process</id><content type="html" xml:base="https://peter.zaffina.net/blog/New-Outlook-Process/"><![CDATA[<p><a href="https://peter.zaffina.net/blog/New-Outlook/">Part one</a>  was about what broke and what shifted. This is about what I actually did.</p>

<p>I want to be specific, because “I did the work on myself” is the least useful sentence in the leadership vocabulary. It signals virtue without conveying information. What follows is the actual inventory: the people, the practices, and the tools that helped me get from where I was to somewhere more functional.</p>

<h2 id="tribalnet-and-the-ceiling-question">TribalNet and the Ceiling Question</h2>

<p>One of the things I was sitting with was a question I kept circling without naming directly: is there a ceiling here, and if so, is it structural or is it me?</p>

<p>Robert Castle writes about a tech ceiling that exists at the director level, where technical credibility stops being the primary currency and political and relational fluency take over. I had read his work before. I read it again with different eyes. The distinction he draws is not that technical leaders lack the capacity to lead at higher levels. It’s that the game changes in ways that aren’t always made visible, and many people hit the ceiling without ever being told the rules shifted.</p>

<p>TribalNet gave me a different kind of data. Being in a room with peers across tribal governments doing serious technology leadership work recalibrated my sense of what’s possible in this sector. It reminded me that the ceiling I was bumping against was contextual, not permanent. That matters. Despair is often a failure of context.</p>

<h2 id="over-functioning-is-a-choice-a-bad-one">Over-Functioning Is a Choice. A Bad One.</h2>

<p>My friend’s question stayed with me longer than almost anything else from those months: why are you over-functioning to hold this together?</p>

<p>The honest answer was that I had confused effort with agency. If I worked hard enough, stayed late enough, carried enough of the load, I could offset structural constraints. That is not how organizations work. Systems are designed a certain way, for reasons that accumulated long before I arrived, and the design doesn’t bend because one person pushes harder. What actually happens is that the over-functioner absorbs the cost of a broken system and makes it invisible to the people who could fix it.</p>

<p>I stopped. Not all at once, but deliberately. I let the system show its own constraints instead of buffering them. That was uncomfortable. It was also the right call.</p>

<h2 id="the-people-who-showed-up">The People Who Showed Up</h2>

<p>Support is a structure, not a feeling. I had to build it deliberately, and I want to name what it actually looked like.</p>

<p>The foundation was my wife. Before any framework, any mentor, any peer conversation, there was someone at home who was willing to hear me out without an agenda. She listened when I needed to say things out loud before I knew what I actually thought. She kept me grounded when my perspective was drifting, and real when I was at risk of either catastrophizing or minimizing. That combination, genuine presence and honest calibration, is not something you can replicate anywhere else in a support network. Everything else I’m about to describe was built on top of that.</p>

<p>The next layer was a former military colleague, someone who served alongside me when I was a Signal Officer, the kind of shared experience that doesn’t require explanation later. There is a particular kind of trust that comes from having been in difficult circumstances together, and that trust doesn’t expire. I could be completely honest with them in ways that required no framing or context-setting. They bring deep experience, real perspective, and the kind of grounded Christian character that makes someone a reliable voice when the noise is loudest. They have seen enough of life and leadership to know the difference between a hard moment and a defining one.</p>

<p>The third layer was a former boss who became something more durable than a professional reference. The transition from reporting relationship to genuine mentorship doesn’t happen automatically. It requires the other person to stay invested after the org chart stops requiring it, and it requires you to be willing to be known rather than just managed. I was fortunate to have that. Someone who had seen me work, understood my strengths clearly, and had enough history with me to call things accurately when I couldn’t. They knew the difference between a hard season and a pattern worth worrying about, and they helped me tell the difference too. That grounded, longitudinal perspective is something you cannot manufacture. You earn it over time with someone who chooses to stay in the conversation.</p>

<p>The fourth layer was an external mentor with no connection to my sector, my organization, or the specific dynamics I was navigating. A senior executive with decades of experience across a very different field, which turned out to matter more than I anticipated. The distance was the point. They had no stake in the politics, no history with the personalities, no reason to manage what they said to protect a relationship inside the system. They could be a genuine sounding board in a way that proximity makes impossible. When you are in the middle of simultaneous personal and professional turmoil, that kind of external anchor is not a luxury. It’s load-bearing. They brought pattern recognition at a level I couldn’t access from inside the situation, and they did it without rushing me toward a conclusion.</p>

<p>The fifth layer was peers. I want to be precise here, because my experience with peers during this period was genuinely good, and I don’t want to frame it as something it wasn’t. I’m not wired for competitive peer dynamics. West Point has a phrase for it: cooperate and graduate. That ethos was serious, and it stayed with me. What I recognized during this season was how precious that absence of competition actually is. Peers made time. They shared what they were actually carrying. The exchange was honest and mutual in a way I didn’t take for granted, because I know enough to know that isn’t universal.</p>

<h2 id="voices-worth-following">Voices Worth Following</h2>

<p>Several names appear in my notes because they showed up at the right time.</p>

<p>Christopher Voss on negotiation gave me language for what was happening in conversations where I felt unheard. His framing, that being heard is a precondition for being understood, helped me stop trying to win arguments and start trying to create the conditions where real communication could happen. It’s a subtle shift with significant operational consequences.</p>

<p>Jefferson Fisher on communication did something different. Where Voss gave me strategy, Fisher gave me the moment-level mechanics: how to slow down inside a hard conversation, how to respond instead of react, how to hold your ground without escalating. His core argument is that the person who controls the pace controls the conversation. That landed for me as a leader who was, at the time, letting urgency dictate my register in situations that required steadiness. The practical application is unglamorous. Pause longer than feels natural. Say less than feels necessary. Let silence do work you’ve been trying to do with words.</p>

<p>Mel Robbins’s advice on “Let Them” gave me something more basic but no less useful: a way to move when the emotional weather was bad and movement felt impossible. The framework is almost embarrassingly simple. It works anyway.</p>

<p>Vanessa Van Edwards on human behavior helped me read the room more accurately. When you’re under sustained stress, your pattern recognition degrades. Having a more structured framework for reading people helped me stay calibrated in situations where I was most likely to misread them.</p>

<h2 id="writing-as-processing">Writing as Processing</h2>

<p>I started writing not to publish, but to think. There’s a difference, and it matters.</p>

<p>When I’m working through something difficult, my interior monologue is fast and circular. Writing forces linearity. It requires me to put one thought after another in sequence, which means I have to figure out what actually comes first. That discipline surfaces things that looping internal dialogue never does.</p>

<p>I also started naming emotions specifically, as a practice rather than an instinct. Not “I’m stressed” but something more precise: I’m anxious about a decision I can’t influence. I’m angry about something that already happened. I’m grieving a path that closed. The specificity matters because the response to each of those is different. Treating all of them as “stress” and pushing through is how you end up in the police station with Jeanie Bueller.</p>

<p>The gratitude journal is the piece of this that sounds most like a self-help cliché. I’m including it anyway because it worked. Not in a motivational-poster way. In a neurological-recalibration way. The brain under sustained pressure filters for threat by default. Deliberately naming what is working, what is good, what is present, counteracts that filter. You don’t have to believe in the mechanism for the mechanism to function.</p>

<h2 id="what-this-actually-adds-up-to">What This Actually Adds Up To</h2>

<p>None of these tools is sufficient on its own. That’s the point. What held me together was a system, not a single intervention. Scripture and Stoicism gave me a philosophical frame. Honest peers gave me accurate feedback. Specific thinkers gave me operational vocabulary for specific problems. Writing gave me a processing practice. The journal gave me a counterweight to threat-filtered thinking.</p>

<p>The combination is not a program you can package. It’s a set of deliberate choices made under pressure about what to pay attention to and what to put down.</p>

<p>I’m still inside the organization I was inside when this started. The constraints haven’t changed. What changed is that I’m no longer paying an emotional tax on things I cannot move, and I’m no longer absorbing the cost of structural problems to keep them invisible.</p>

<p>That is not a small thing.</p>]]></content><author><name></name></author><category term="Leadership" /><category term="Personal" /><summary type="html"><![CDATA[Part one was about what broke and what shifted. This is about what I actually did.]]></summary></entry><entry><title type="html">The Cerulean AI Architecture: Why Ready is the Only Production Standard That Matters</title><link href="https://peter.zaffina.net/blog/Cerulean-AI/" rel="alternate" type="text/html" title="The Cerulean AI Architecture: Why Ready is the Only Production Standard That Matters" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Cerulean-AI</id><content type="html" xml:base="https://peter.zaffina.net/blog/Cerulean-AI/"><![CDATA[<p>In the iconic cerulean scene from <em>The Devil Wears Prada</em>, Miranda Priestly delivers a monologue that is often misread as a personality study. For engineering leaders, however, it is a structural study. When she demands to know why no one is ready, she is not asking for more effort. She is highlighting a catastrophic failure in upstream visibility and technical lineage.</p>

<p>While the industry debates which LLM has the best chat interface, technical leaders like CTOs, VPs of Engineering, and Staff Engineers are asking a much harsher question: Why is our infrastructure not ready for the shift? We often treat AI implementation like Andy Sachs treats her lumpy blue sweater, viewing it as a casual, filtered-down byproduct of a trend. But in a high-stakes engineering organization, there is no such thing as a casual byproduct. Everything has a lineage. If you do not understand the cerulean origins of your stack, you are not leading. You are just consuming.</p>

<h2 id="the-myth-of-just-tools-and-the-cost-of-abstraction">The Myth of “Just Tools” and the Cost of Abstraction</h2>

<p>The scene begins with Andy scoffing at two seemingly identical belts, dismissing the high-stakes meeting as “stuff.” In engineering, we see this same dismissiveness toward the plumbing of AI. Skeptics and even some hurried managers view models as mere APIs or black boxes. They see interchangeable “stuff” that can be swapped without consequence.</p>

<p>This is a failure of systems thinking. Miranda’s response is a brutal demonstration of first-hand expertise. She does not see a blue sweater. She sees a specific technical lineage. She traces the color from a 2002 Oscar de la Renta collection to Yves Saint Laurent military jackets, explaining how it filtered down to the bargain bin.</p>

<h3 id="the-engineering-parallel-data-lineage-and-model-provenance">The Engineering Parallel: Data Lineage and Model Provenance</h3>

<p>In the world of AI-driven engineering leadership, cerulean is your training data lineage.</p>

<ul>
  <li><strong>The Lumpy Blue Sweater:</strong> Using a wrapper around a generic API with no thought to data privacy, token costs, or latency.</li>
  <li><strong>The Cerulean Origin:</strong> Understanding the specific transformer architecture, the weight quantization, and the ethical provenance of the datasets that power your specific implementation.</li>
</ul>

<p>When an Engineering Manager says it is just an LLM integration, they are Andy Sachs. They are ignoring the multi-billion dollar pipelines, the GPU scarcity, and the architectural decisions made years ago that allow that tool to exist at their fingertips. To be ready is to respect the complexity of the stack below the abstraction layer.</p>

<h2 id="why-is-no-one-ready-for-the-production-shift">Why is No One Ready for the Production Shift?</h2>

<p>Miranda’s frustration about why it is impossible to put together a decent run-through is the cry of a leader who understands that <strong>innovation is a function of preparation.</strong></p>

<p>In a technical organization, being ready for AI does not mean having a subscription. It means your data strategy was sound three years ago. It means your CI/CD pipelines can handle non-deterministic outputs. It means your run-through, or your staging environment, is actually representative of the complexity of the real world.</p>

<h3 id="the-three-pillars-of-technical-readiness">The Three Pillars of Technical Readiness</h3>

<ol>
  <li><strong>Architectural Intentionality:</strong> Just as Andy’s sweater was selected for her by the people in that room, the models we use are curated by architectural decisions. If you are using a model because it was the easiest to integrate, you have not made a choice. You have accepted a default. A leader knows the difference between a deliberate architectural trade-off and a lazy one.</li>
  <li><strong>Depth of Domain Expertise:</strong> Miranda distinguishes between lapis, turquoise, and cerulean. A technical leader must distinguish between Retrieval-Augmented Generation (RAG), Fine-tuning, and Prompt Engineering; the sketch after this list draws the contrast. If you treat these as basically the same thing, your run-through will fail when it hits the edge cases of production.</li>
  <li><strong>Economic Impact Awareness:</strong> Miranda points out that a single color represents millions of dollars and countless jobs. Similarly, suboptimal AI deployments introduce significant enterprise risk, ranging from unsustainable operational overhead to the legal and reputational liabilities associated with algorithmic hallucinations.</li>
</ol>
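
<p>To make the second pillar concrete, here is a schematic contrast in Python. The <code>generate</code> function is a placeholder for whatever your stack actually calls, and the policy text is invented; the point is that these are three different interventions at three different layers, not interchangeable settings on the same dial.</p>

<pre><code class="language-python">
# A schematic contrast, not a real client. `generate` stands in for any
# model call; fine-tuning appears only as a comment because it happens
# offline, in training, not at request time.
def generate(prompt):
    return f"[model output for: {prompt[:40]}...]"  # placeholder

question = "What is our refund policy for annual plans?"

# 1. Prompt engineering: shape behavior purely through instructions.
prompt_only = generate(f"Answer tersely and cite the policy section. {question}")

# 2. RAG: retrieve governed documents first, then ground the model in them.
retrieved = ["Policy 4.2: annual plans are refundable within 30 days."]
grounded = generate(f"Using only these sources: {retrieved} Answer: {question}")

# 3. Fine-tuning: no prompt trick at all. The weights themselves are
#    updated offline on curated examples, then deployed as a new model.

print(prompt_only)
print(grounded)
</code></pre>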

<h2 id="moving-beyond-the-lumpy-blue-sweater-of-engineering">Moving Beyond the Lumpy Blue Sweater of Engineering</h2>

<p>The problem with many engineering teams today is that they are wearing AI without understanding it. They are implementing features because they filtered down from a board meeting or a competitor’s press release.</p>

<p>To lead through this revolution, we must move past the Casual Corner of technology. This requires a shift from being a consumer of tools to a steward of systems.</p>

<h3 id="1-audit-your-lineage">1. Audit Your Lineage</h3>

<p>Where did your data come from? If you are using RAG, how clean is your vector database? If your team cannot trace the lineage of an AI-driven decision back to its source data, you are operating on a lumpy blue foundation. You are at the mercy of the people in the room who made the choices for you.</p>
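
<p>Here is what that lineage discipline can look like in miniature. This is a sketch, not a product recommendation: the field names, the in-memory stand-in for a vector store, and the sample record are all invented. The shape of the contract is what matters: no chunk reaches the model without carrying its own provenance.</p>

<pre><code class="language-python">
# A minimal sketch of lineage-aware retrieval, with invented fields and an
# in-memory list standing in for a real vector store.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Chunk:
    text: str
    source_system: str       # where the record originated
    source_id: str           # its primary key in that system
    ingested_on: date        # when it entered the pipeline
    transformations: list = field(default_factory=list)  # every step applied

def retrieve(query, store):
    # Stand-in for similarity search. The return shape is the point:
    # text never travels without its provenance.
    hits = [c for c in store if query.lower() in c.text.lower()]
    return [{"text": c.text,
             "lineage": f"{c.source_system}/{c.source_id}",
             "steps": c.transformations} for c in hits]

store = [Chunk("FY24 revenue grew 12 percent.", "erp", "rev-8841",
               date(2024, 3, 1), ["currency_normalized", "pii_scrubbed"])]

for hit in retrieve("revenue", store):
    # Any AI-driven statement downstream can be traced back to a record.
    print(hit["text"], "::", hit["lineage"], hit["steps"])
</code></pre>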

<h3 id="2-standardize-the-run-through">2. Standardize the Run-Through</h3>

<p>In the film, the run-through is a high-stakes review of the upcoming issue. In engineering, this is your Observability and Evaluation (Eval) framework. If you do not have a robust way to evaluate model performance, bias, and drift, you are not ready for production. You are just hoping that the belts look different enough to work. High-performing teams build eval-driven development cycles where the metrics are as sharp as a fashion editor’s critique.</p>
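
<p>A minimal eval gate fits in a few lines. Everything below is an assumption for illustration: the golden cases, the threshold, and the <code>call_model</code> placeholder you would swap for your real client. The discipline is what matters: the run-through fails loudly so that production does not have to.</p>

<pre><code class="language-python">
# A minimal sketch of an eval gate. Cases and threshold are invented;
# `call_model` is a placeholder for your actual model client.
GOLDEN_CASES = [
    {"prompt": "Reset steps for a locked account?", "must_contain": "verify identity"},
    {"prompt": "Refund window for annual plans?",   "must_contain": "30 days"},
]

def call_model(prompt):
    return "First, verify identity at the service desk."  # placeholder

def run_evals(cases, threshold=0.95):
    passed = sum(1 for c in cases
                 if c["must_contain"] in call_model(c["prompt"]).lower())
    score = passed / len(cases)
    # Fail the run-through loudly, before production fails quietly.
    assert score >= threshold, f"Eval gate failed: {score:.0%} under {threshold:.0%}"
    return score

run_evals(GOLDEN_CASES)  # raises until the model actually clears the bar
</code></pre>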

<h3 id="3-reject-technical-dismissiveness">3. Reject Technical Dismissiveness</h3>

<p>When a senior engineer says it is just a chatbot, they are missing the cerulean point. They are ignoring the massive shift in the compute-to-value ratio. As a leader, your job is to bridge that gap. You must explain that while the interface might look simple, the infrastructure required to make it reliable, scalable, and secure is a monumental engineering feat.</p>

<h2 id="the-cost-of-not-being-ready">The Cost of Not Being Ready</h2>

<p>Miranda Priestly represents an unforgiving standard of excellence. The market acts exactly like her. The market does not care if you find the new AI paradigms confusing or baffling. It only cares if your product works, if it is cost-effective, and if it is first to market with quality.</p>

<p>When the market asks why no one is ready, it is calling out the gap between those who participate in a trend and those who actually understand the mechanics of it.</p>

<p>As AI continues to trickle down into every legacy system and new repository, the question remains: Are you just wearing the lumpy blue sweater of technology, or do you understand the cerulean origins, the architectural costs, and the rigorous intentionality required to actually deploy?</p>

<p>In a world full of teams who think the belts of different models all look the same, the leaders who survive will be the ones who know exactly why they are different.</p>

<hr />

<p><strong>What does your run-through look like? How are you evaluating the technical lineage of the AI tools currently in your stack?</strong></p>]]></content><author><name></name></author><category term="AI" /><category term="Artificial Intelligence" /><summary type="html"><![CDATA[In the iconic cerulean scene from The Devil Wears Prada, Miranda Priestly delivers a monologue that is often misread as a personality study. For engineering leaders, however, it is a structural study. When she demands to know why no one is ready, she is not asking for more effort. She is highlighting a catastrophic failure in upstream visibility and technical lineage.]]></summary></entry><entry><title type="html">Set-Based Design Was Always the Right Answer. Now It’s Also the Affordable One.</title><link href="https://peter.zaffina.net/blog/SBCD/" rel="alternate" type="text/html" title="Set-Based Design Was Always the Right Answer. Now It’s Also the Affordable One." /><published>2026-03-29T00:00:00+00:00</published><updated>2026-03-29T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/SBCD</id><content type="html" xml:base="https://peter.zaffina.net/blog/SBCD/"><![CDATA[<p>The core problem in complex engineering programs is not technical. It is a knowledge problem. Organizations commit to a design direction before they understand it well enough to commit, and they spend the rest of the program paying for that mistake. Cost overruns, schedule pressure, and late-stage redesigns are rarely failures of execution. They are the bill that arrives when you locked in too early.</p>

<p>Set-based concurrent design (SBCD) addresses this directly. Rather than selecting a single design direction early and iterating on it, teams carry multiple viable alternatives forward in parallel, narrowing the set through evidence rather than authority or assumption. The commitment comes late, when the knowledge base justifies it. The result is a design process that finds better solutions and surfaces integration conflicts while the cost of resolving them is still low.</p>

<h2 id="what-the-practice-actually-does">What the Practice Actually Does</h2>

<p>The logic comes out of Toyota’s product development system, where engineers mapped feasibility boundaries across multiple design candidates simultaneously. Multiple functions evaluate the set concurrently: structures, thermal performance, manufacturability, cost, reliability. Each function is narrowing the set from its own vantage point. What survives is not the option that won a room, but the option that survived rigorous parallel scrutiny from every direction that matters.</p>
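
<p>The narrowing mechanism itself is simple enough to sketch. The candidates, limits, and disciplines below are invented for illustration; the structure is the point: each function prunes from its own vantage point, and only options inside every feasibility boundary survive.</p>

<pre><code class="language-python">
# A minimal sketch of set-based narrowing with invented numbers.
candidates = {
    "A": {"mass_kg": 42, "peak_temp_c": 71, "unit_cost": 310},
    "B": {"mass_kg": 55, "peak_temp_c": 64, "unit_cost": 280},
    "C": {"mass_kg": 47, "peak_temp_c": 90, "unit_cost": 240},
}

# Each discipline owns one feasibility boundary (metric, upper limit).
upper_limits = {
    "structures": ("mass_kg", 50),
    "thermal":    ("peak_temp_c", 80),
    "cost":       ("unit_cost", 320),
}

def narrow(cands, limits):
    surviving = dict(cands)
    for discipline, (metric, limit) in limits.items():
        # Prune anything outside this discipline's boundary.
        surviving = {name: d for name, d in surviving.items()
                     if limit >= d[metric]}
        print(f"after {discipline}: {sorted(surviving)} remain")
    return surviving

narrow(candidates, upper_limits)
# structures prunes B, thermal prunes C; only A survives every vantage point.
</code></pre>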

<p>In software and systems design, the same logic applies. Multiple architectural approaches are developed far enough to be evaluated on real criteria before one is selected. This feels inefficient from the outside. It is the opposite. The apparent redundancy of exploring multiple paths is cheap compared to the cost of discovering late that the single path you committed to cannot meet a requirement it was never properly tested against.</p>

<p>The financial case has always been straightforward. The further into a program a design flaw travels before discovery, the more expensive it becomes to correct. Simulation studies and concurrent trade-space analysis cost money upfront. Redesigns after validation, after procurement, or after deployment cost multiples of that, often with schedule consequences that dwarf the direct cost.</p>

<h2 id="the-economics-just-changed">The Economics Just Changed</h2>

<p>The honest limitation of set-based design was always resource intensity. Running parallel design streams requires parallel engineering capacity: more models, more analysis, more people working simultaneously across more options. For most organizations, that arithmetic pushed them toward point-based design even when leadership understood the risk. SBCD was realistic mainly for programs where the cost of failure was high enough to justify the investment: aerospace, defense, and major automotive platforms.</p>

<p>Agentic AI breaks that constraint. The capacity to run concurrent design exploration across a broad solution space no longer requires proportional headcount. AI agents can execute simulation loops, evaluate design candidates against multiple performance criteria, identify constraint violations, and prune infeasible options at a speed and cost that no engineering team can match manually. What previously required months of parallel work across multiple teams can be compressed into cycles that run in days.</p>

<p>This is not AI replacing engineering judgment. It is AI doing the computational labor that made set-based exploration expensive, so that engineering judgment can operate on a richer, better-filtered set of options than was previously affordable.</p>

<h2 id="what-this-means-for-capital-allocation-and-program-risk">What This Means for Capital Allocation and Program Risk</h2>

<p>For a board or executive team, the relevant question is not how the technology works. It is what changes about program risk and capital exposure when you can run many more design iterations in simulation before committing to physical development or organizational rollout.</p>

<p>The answer is that the distribution of outcomes improves substantially. More design space explored means fewer surprises late. Fewer surprises late means fewer unplanned capital calls, fewer schedule extensions, and fewer situations where a program must absorb a costly redesign or, worse, proceed with a known compromise because reverting is no longer feasible.</p>

<p>Organizations that have absorbed a major late-stage redesign know what that actually costs. It is rarely just the direct engineering expense. It is the downstream effects on procurement, on customer commitments, on team confidence, and on the opportunity cost of leadership attention consumed by recovery rather than growth. Set-based design, executed with agentic AI doing the heavy exploration work, is a structural reduction in the probability of that outcome.</p>

<h2 id="the-decision-in-front-of-leadership">The Decision in Front of Leadership</h2>

<p>The practice of carrying multiple design alternatives forward until evidence justifies convergence is not new. The evidence for its effectiveness in complex programs is well established. What is new is that the primary economic objection to doing it well has largely dissolved.</p>

<p>Programs that invest in broad simulation-based exploration before committing to a design direction will reach commitment with more confidence, fewer embedded compromises, and a substantially lower probability of the kind of late discovery that rewrites a program’s financial profile. That is a risk-adjusted argument, and it is a strong one.</p>]]></content><author><name></name></author><category term="AI" /><category term="Design" /><summary type="html"><![CDATA[The core problem in complex engineering programs is not technical. It is a knowledge problem. Organizations commit to a design direction before they understand it well enough to commit, and they spend the rest of the program paying for that mistake. Cost overruns, schedule pressure, and late-stage redesigns are rarely failures of execution. They are the bill that arrives when you locked in too early.]]></summary></entry><entry><title type="html">AI Agents Won’t Fix What Bad Management Already Broke</title><link href="https://peter.zaffina.net/blog/Agents-Amplify-Management/" rel="alternate" type="text/html" title="AI Agents Won’t Fix What Bad Management Already Broke" /><published>2026-03-26T00:00:00+00:00</published><updated>2026-03-26T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Agents-Amplify-Management</id><content type="html" xml:base="https://peter.zaffina.net/blog/Agents-Amplify-Management/"><![CDATA[<p>Before you wire an AI agent into your operations, answer a simpler question: how do you run work today? Not aspirationally. Not as it appears in a slide deck. How does a unit of work actually get assigned, tracked, reviewed, communicated, and closed? If that question produces hesitation, that hesitation is the real problem.</p>

<p>AI agents are capacity multipliers. That is their value and their risk in the same sentence. They amplify whatever management system they operate inside. Bring them into a disciplined operation with clear accountability structures, documented workflows, and meaningful performance signals, and they will accelerate output. Bring them into ambiguity, and they will produce more of it faster.</p>

<h2 id="the-question-is-not-the-technology">The Question Is Not the Technology</h2>

<p>The questions worth asking are older than any software vendor: Who owns this work? What does done look like? How does quality get verified before something moves downstream? How does a manager know, on any given day, whether the team is ahead or behind? These are not AI questions. They are management questions. Answering them is what makes an organization ready to extend capability through agents, or any other tool.</p>

<p>Most organizations have some version of these answers, but not consistently and not at a level of precision that a system can act on. Work gets assigned informally. Quality is judged by whoever happens to review it. Capacity is estimated by feel. ROI is claimed after the fact. That is manageable, barely, when the humans doing the work can compensate with judgment and context. Agents do not compensate. They execute.</p>

<h2 id="automate-a-bad-process-and-you-get-a-faster-bad-process">Automate a Bad Process and You Get a Faster Bad Process</h2>

<p>I spent years as a Lean Six Sigma Black Belt before moving into technology leadership. The principle I returned to constantly was this: if you rush to automate a broken process, or no process at all, you do not solve the problem. You make the problem faster and bigger. That lesson did not age out when AI arrived. It became more important.</p>

<p>The operational discipline that LSS demands (clear process definition, measurable inputs and outputs, variance reduction, ownership at every handoff) is exactly the discipline that makes AI agents safe to deploy. Without it, you are not deploying an agent. You are deploying an accelerant into an uncontrolled environment and calling it a strategy.</p>

<h2 id="documentation-is-not-an-ai-problem-it-never-was">Documentation Is Not an AI Problem. It Never Was.</h2>

<p>Here is the part that should land hardest for any executive thinking about agent readiness. The work required to get an AI agent performing reliably (writing clear expectations, defining scope, capturing institutional knowledge, building a knowledge base it can draw from) is the exact same work required to onboard a new employee well. Same investment. Same discipline. Same quality bar.</p>

<p>If your team does not have solid onboarding documentation, clear role expectations, or a knowledge base that actually reflects how the work gets done, an agent will not paper over that gap. It will expose it at scale. The organizations discovering this the hard way right now built their agent deployments on the assumption that the technology would fill in what management had not. It does not work that way.</p>

<p>The documentation your staff needs to do their jobs well is the same documentation your agents need to operate reliably. That is not a coincidence. It is the point. If the thought of writing it for an AI finally creates the organizational will to build it, then use that motivation. But build it for your people first and your agents will inherit something worth having.</p>

<h2 id="what-good-looks-like-before-you-deploy-anything">What Good Looks Like Before You Deploy Anything</h2>

<p>Work management has a few non-negotiable components. Work must be defined precisely enough to be assigned. Progress must be visible to the people accountable for it, without requiring a status meeting to find out. Quality must be checked against a known standard before output moves on. Throughput, the rate at which work actually completes, must be measured rather than assumed. And the cost of that throughput must be traceable to outcomes the organization cares about.</p>
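
<p>Those components are concrete enough to express directly. The sketch below is illustrative, with invented states and fields, but it shows the minimum structure a system, or an agent, can actually act on: a named owner, a written definition of done, visible state, and a quality gate that must pass before anything reaches done.</p>

<pre><code class="language-python">
# An illustrative sketch of a unit of work made legible to a system.
from dataclasses import dataclass

STATES = ("defined", "assigned", "in_progress", "in_review", "done")

@dataclass
class WorkItem:
    title: str
    owner: str                  # who is accountable
    definition_of_done: str     # what done looks like, written down
    state: str = "defined"

    def advance(self, quality_check=None):
        nxt = STATES[STATES.index(self.state) + 1]
        if nxt == "done":
            # Quality is verified against a known standard before
            # output moves downstream, whether a human or an agent did it.
            assert quality_check and quality_check(self), "failed quality gate"
        self.state = nxt
        return self.state

item = WorkItem("Migrate the nightly ETL job", owner="dana",
                definition_of_done="Runs green three nights; runbook updated")

for _ in range(3):
    print(item.advance())       # assigned, in_progress, in_review
print(item.advance(lambda w: bool(w.definition_of_done)))  # gated "done"
</code></pre>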

<p>If those components exist, deploying an AI agent is an operational decision with a measurable expected return. If they do not exist, deploying an agent is an act of optimism. Organizations that have confused the two are already encountering the consequences.</p>

<h2 id="leadership-is-the-dependency">Leadership Is the Dependency</h2>

<p>There is a version of the AI conversation happening in boardrooms right now that treats the technology as the subject and the organization as the beneficiary. That framing is backwards and expensive. The organization is the subject. Leadership is the active ingredient. Technology is the instrument.</p>

<p>No agent decides what matters. No model sets accountability. No workflow tool creates a culture where people tell the truth about where work actually stands. Those are leadership functions. They always were. The executives who will get real value from AI agents are the ones who recognized that and did the management work first. The ones who skipped it will spend the next few years explaining to their boards why the investment did not perform.</p>

<p>You cannot AI your way out of poor leadership. That is not a caution against AI. It is the precondition for using it well.</p>]]></content><author><name></name></author><category term="Leadership" /><category term="AI" /><summary type="html"><![CDATA[Before you wire an AI agent into your operations, answer a simpler question: how do you run work today? Not aspirationally. Not as it appears in a slide deck. How does a unit of work actually get assigned, tracked, reviewed, communicated, and closed? If that question produces hesitation, that hesitation is the real problem.]]></summary></entry><entry><title type="html">The Marty McFly Problem</title><link href="https://peter.zaffina.net/blog/Marty-McFly-Problem/" rel="alternate" type="text/html" title="The Marty McFly Problem" /><published>2026-03-24T13:00:00+00:00</published><updated>2026-03-24T13:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Marty-McFly-Problem</id><content type="html" xml:base="https://peter.zaffina.net/blog/Marty-McFly-Problem/"><![CDATA[<p>There is a scene in <em>Back to the Future Part III</em> where Marty McFly picks up a gun at a shooting gallery and fires with startling accuracy. He has never drawn on a man. He has never trained under pressure. But he has logged serious hours on a video game called “Wild Gunman,” and his hands know what to do. The crowd is impressed. So is Marty.</p>

<p>It is a great scene. It is also a terrible talent development model.</p>

<p>I keep thinking about it when I watch organizations hand AI tools to early-career professionals and call it a training program. The logic follows the same shape: exposure to a simulation produces capability; the simulation is increasingly realistic; therefore the capability is real. But the scene plays for laughs precisely because everyone watching knows the gap between arcade reflexes and the genuine article. Marty eventually has to face down a real gunfighter, and the stakes are not pixels.</p>

<h2 id="the-confusion-between-output-and-competence">The Confusion Between Output and Competence</h2>

<p>The problem is not that AI tools are unhelpful. They are genuinely useful, and any organization that refuses to integrate them is making a different kind of mistake. The problem is the conflation of assisted output with developed skill. When a junior analyst uses a language model to structure an argument, the deliverable looks like the work of someone who can structure arguments. When they use it to summarize a dataset, the output looks like the work of someone who understands data. The organization sees the output, evaluates it positively, and draws the wrong conclusion about what capability now exists in the room.</p>

<p>This is not a new failure mode. It rhymes with every generation of tooling that promised to close the gap between novice and expert. Spreadsheets did not produce financial analysts. Search engines did not produce researchers. Both tools made certain tasks faster for people who already understood the underlying domain. For people who did not, they produced a convincing imitation of understanding, durable only until the moment the tool failed or the question got hard.</p>

<p>What AI tools have done is raise the ceiling on that imitation considerably. The output is more polished. The gap is harder to see from the outside, and sometimes from the inside. A junior employee who genuinely cannot assess the quality of a model-generated analysis has no reliable way to know when they are holding something sound versus something that reads well and is wrong. That is a liability the organization is carrying, usually without naming it.</p>

<h2 id="what-foundation-actually-means">What Foundation Actually Means</h2>

<p>Foundation is not familiarity with concepts. It is the accumulation of decisions made under conditions where the feedback was real and the consequences were yours. A finance professional who has rebuilt a broken model at midnight before a board meeting understands something about model construction that no simulation teaches. An operations leader who has managed a vendor relationship through a failure knows how to read the early signals that a relationship is deteriorating. That knowledge is not in the documentation. It lives in pattern recognition built from experience that had stakes.</p>

<p>Early career years are the window when that foundation gets laid. It is the period when the organization should be deliberately engineering exposure to hard problems, to failure with limited blast radius, to feedback loops that are honest rather than diplomatic. It is when junior professionals should be doing things badly enough, often enough, to develop genuine judgment about what good looks like. That process is slow and uncomfortable and irreplaceable.</p>

<p>When AI tools absorb the friction of that period, they do not accelerate development. They defer it. The organization ends up with a cohort that has produced a lot of polished work and developed very little of the judgment required to produce polished work without assistance. The risk surfaces later, in the moments that require independent reasoning under pressure, which is precisely when leadership needs the bench depth to be real.</p>

<h2 id="the-leadership-decision">The Leadership Decision</h2>

<p>This is not an argument for withholding useful tools. It is an argument for being precise about what the organization is actually building. There are two different investments that can be made with early career talent, and they produce different outcomes over a five-year horizon.</p>

<p>The first invests in throughput. Junior professionals are equipped with every available tool, output volume rises, and short-term delivery capacity improves. The organization looks more productive. The hidden cost is that the talent pipeline is being filled with people who are dependent on instruments they cannot critically evaluate, in roles that will eventually require them to exercise judgment the instruments cannot provide.</p>

<p>The second invests in capability. Tools are available, but the organization also engineers deliberate exposure to problems that require unassisted thinking, to feedback that is specific and honest, and to failure early enough that it builds rather than breaks. Throughput in the short term is modestly lower. The pipeline in year four is substantially stronger.</p>

<p>Most organizations are defaulting to the first without deciding to. The tools are adopted because the output looks better and the productivity numbers are real. Nobody is making an explicit choice to defer capability development. It is happening as a consequence of optimizing for something visible over something that matters more.</p>

<p>The question worth bringing to a leadership team is not whether to use AI tools. That decision is largely made. The question is whether the organization has a deliberate point of view on what it is building in its people, and whether the current environment is actually producing that. Marty got lucky. He had Doc Brown, a time machine, and a screenplay. The organizations betting that tool fluency will mature into genuine judgment are carrying a different kind of risk, and the reckoning does not come with a third act.</p>]]></content><author><name></name></author><category term="AI" /><category term="Workforce" /><summary type="html"><![CDATA[There is a scene in Back to the Future Part III where Marty McFly picks up a gun at a shooting gallery and fires with startling accuracy. He has never drawn on a man. He has never trained under pressure. But he has logged serious hours on a video game called “Wild Gunman,” and his hands know what to do. The crowd is impressed. So is Marty.]]></summary></entry><entry><title type="html">The AI Project Will Lose to the Data Project</title><link href="https://peter.zaffina.net/blog/Data-Projects/" rel="alternate" type="text/html" title="The AI Project Will Lose to the Data Project" /><published>2026-03-22T13:00:00+00:00</published><updated>2026-03-22T13:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Data-Projects</id><content type="html" xml:base="https://peter.zaffina.net/blog/Data-Projects/"><![CDATA[<p>Over the next several years, every executive team will face the same budget decision in some form: the high-visibility AI initiative, or the data governance work nobody wants to present at the board meeting. Most will choose the AI project. That is the wrong call, and the organizations that make it will find out why around month eighteen.</p>

<h2 id="the-choice-looks-easy-it-isnt">The Choice Looks Easy. It Isn’t.</h2>

<p>The AI project has a narrative. It has momentum. It has a vendor ready to present at the next board meeting, with a live demo and a slide about competitive differentiation. The data governance work has a project plan, a remediation backlog, and no great story to tell. On paper it is not a close contest. In practice, the organization that funds the demo and defers the foundation is making a sequencing error that will cost significantly more to correct than it would have cost to avoid.</p>

<h2 id="what-bad-data-does-to-ai">What Bad Data Does to AI</h2>

<p>AI running on unresolved data debt does not fail quietly. It produces confident wrong answers at scale, faster than any legacy system could. It surfaces inconsistencies that have been buried in spreadsheets and shadow systems for years, except now they are embedded in a decision support tool that an executive is relying on. The failure mode is not a missed report or a slow dashboard. It is eroded institutional trust in a technology that leadership just spent significant political capital to champion.</p>

<p>The organizations that treat data governance as the thing you do after AI is deployed are making a category error. They are confusing a demo with a foundation. A model trained or operated on bad data does not become more accurate as adoption grows. It becomes more confidently wrong at greater scale. That is not an AI problem. It is a data problem that was visible before the project started, and the decision to proceed anyway was a leadership choice, not a technical one.</p>

<h2 id="why-governance-keeps-losing-the-budget-fight">Why Governance Keeps Losing the Budget Fight</h2>

<p>The governance argument loses in the boardroom because it is hard to make compelling. There is no demo. There is no before-and-after screenshot. The value of a clean data foundation is almost entirely expressed in the future tense: things that will not break, decisions that will be more reliable, AI that will actually compound. Against a vendor presenting an AI proof of concept with live output, that argument tends not to land.</p>

<p>There is also a structural problem. The people who know where the data debt lives are rarely the people presenting to the board. The data engineering team, the data stewards, the architects who have been managing the accumulation of inconsistent schemas and undocumented pipelines for years know exactly how precarious the foundation is. But their work does not produce narratives that travel well up the chain. The AI vendor’s work does.</p>

<p>The result is that organizations systematically underfund the work that determines whether AI investments pay off, in favor of investments that are easier to explain and faster to show.</p>

<h2 id="what-the-foundation-actually-buys">What the Foundation Actually Buys</h2>

<p>Data governance is not a compliance exercise and it is not a risk mitigation project, though it does both of those things. In the context of AI, it is value infrastructure. It is the difference between an AI system that produces reliable outputs you can act on and one that produces outputs you have to verify before acting on, which largely defeats the purpose.</p>

<p>Organizations that invest in data ownership, lineage, quality standards, and access controls before deploying AI end up with something the fast-movers do not have: a foundation that actually compounds. Each subsequent AI use case builds on infrastructure that has already been validated. The second project is faster than the first. The third is faster still. The organizations that skipped the foundation work are rebuilding it in parallel with every project, which is expensive, slow, and politically exhausting.</p>
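
<p>What a quality standard looks like when it is actually written down can be sketched in a few lines. The table, thresholds, and rules below are invented, and a production stack would use dedicated tooling, but the contract is the point: checks that are measurable, repeatable, and capable of blocking bad data before a model ever sees it.</p>

<pre><code class="language-python">
# A minimal sketch of data quality checks as an executable contract.
# The rows, thresholds, and rule names are invented for illustration.
rows = [
    {"member_id": "m-101", "enrolled": "2024-01-12", "email": "a@example.org"},
    {"member_id": "m-102", "enrolled": "2024-02-03", "email": None},
    {"member_id": "m-101", "enrolled": "2024-01-12", "email": "a@example.org"},
]

def null_rate(rows, col):
    return sum(1 for r in rows if r[col] is None) / len(rows)

def duplicate_rate(rows, key):
    ids = [r[key] for r in rows]
    return 1 - len(set(ids)) / len(ids)

checks = [
    ("email null rate stays under 40%",     null_rate(rows, "email"),          0.40),
    ("member_id duplicates stay under 10%", duplicate_rate(rows, "member_id"), 0.10),
]

for name, observed, limit in checks:
    status = "PASS" if limit > observed else "FAIL"
    # A failing check blocks this dataset from feeding any model downstream.
    print(f"{status}  {name}: observed {observed:.0%}")
</code></pre>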

<p>The compounding dynamic is real and it is consequential. It is also invisible until the gap between the two approaches becomes impossible to ignore, which typically happens around the time the fast-mover is trying to explain to its board why the AI initiative needs another remediation phase.</p>

<h2 id="this-is-not-an-argument-against-ai">This Is Not an Argument Against AI</h2>

<p>The case for doing governance work first is not a case against AI ambition. It is a case for sequencing correctly. Organizations that build the foundation now will have significantly more to work with when they deploy AI at scale. Their models will have access to cleaner, better-governed data. Their outputs will be more reliable. Their teams will spend less time chasing data quality issues and more time building on top of a system that actually works.</p>

<p>The organizations treating governance as the alternative to AI are misreading the situation. Governance is the precondition for AI that delivers durable value rather than a high-profile proof of concept that quietly stalls out. The choice is not between moving fast and moving carefully. It is between building something that lasts and building something that has to be rebuilt.</p>

<p>The AI project will lose to the data project. Not in the budget meeting. Not in the board presentation. But in the outcome, and that is the only measure that matters.</p>]]></content><author><name></name></author><category term="AI" /><category term="Project Management" /><category term="Strategy" /><summary type="html"><![CDATA[Over the next several years, every executive team will face the same budget decision in some form: the high-visibility AI initiative, or the data governance work nobody wants to present at the board meeting. Most will choose the AI project. That is the wrong call, and the organizations that make it will find out why around month eighteen.]]></summary></entry><entry><title type="html">The Prodigal Son’s Brother Had My Job Title</title><link href="https://peter.zaffina.net/blog/New-Outlook/" rel="alternate" type="text/html" title="The Prodigal Son’s Brother Had My Job Title" /><published>2026-03-22T00:00:00+00:00</published><updated>2026-03-22T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/New-Outlook</id><content type="html" xml:base="https://peter.zaffina.net/blog/New-Outlook/"><![CDATA[<p>There’s a moment near the end of <em>Ferris Bueller’s Day Off</em> that I’ve thought about more than any leadership book I’ve read in the last year.</p>

<p>Jeanie Bueller is sitting in a police station, furious. Her brother has spent the day skipping school, riding in parades, eating at four-star restaurants, and somehow the universe keeps covering for him. She has spent the same day getting arrested. She unloads on a stranger in the waiting room, a quiet kid with a leather jacket and exactly zero interest in her problems.</p>

<p>He listens. Then he says it:</p>

<p><em>“Your problem is you. You oughta spend a little more time dealing with yourself and a little less time worrying about what your brother does.”</em></p>

<p>She stares at him. Partly because he’s bold. Partly because he’s right.</p>

<p>I’ve been Jeanie Bueller in three industries. Army. Global manufacturing. Tribal government. Different uniforms, different org charts, same interior monologue: <em>why does that person keep getting away with it? Why does the system reward them? Why am I grinding while they float?</em></p>

<p>I called it the Prodigal Son’s Brother Syndrome. Because the Bible names it, too. The elder son in that parable works faithfully, follows every rule, never once gets a party thrown in his honor. Then his reckless brother limps home and their father throws a feast. The elder son loses it. He won’t even go inside. He stands in the yard, arms crossed, furious at the injustice of it.</p>

<p>I understood that man completely. I <em>was</em> that man.</p>

<p>And I was wrong.</p>

<h2 id="when-the-system-breaks-so-do-you">When the System Breaks, So Do You</h2>

<p>The last five months brought a sequence of events that compounded into something I wasn’t ready for.</p>

<p>A leader I genuinely respected moved on. That kind of transition is its own grief process, and I didn’t give myself permission to name it as such. Before I’d processed it, a new direction was set, and with it came the quiet mourning of a path I’d been working toward.</p>

<p>Then the resource picture got worse. My team was stretched past any reasonable limit. Not a little over capacity. Structurally, systemically constrained in ways that no amount of effort could fix. A friend at work asked me a simple question: <em>Why are you over-functioning to hold this together?</em></p>

<p>I didn’t have a clean answer.</p>

<p>The decisions kept coming. Every choice required justification. Every constraint had to be explained and re-explained. The buffer was gone. There was no slack left in the system. My team was going to have to obey the laws of physics, and for a long time I kept acting like willpower and late nights could suspend them.</p>

<p>Eventually I lost my cool. Not publicly. Not in a way that became a scene. But internally, I crossed a line that I recognized immediately. And I knew that if I didn’t fix something fundamental, it was going to go badly wrong.</p>

<h2 id="the-student-the-teacher-and-a-leather-jacket">The Student, the Teacher, and a Leather Jacket</h2>

<p>I went back to Scripture. Not as a reflex, but as a deliberate search. I was looking for specific guidance in specific areas where I was failing, and I found it.</p>

<p>Proverbs 4:23: <em>“Above all else, guard your heart, for everything you do flows from it.”</em></p>

<p>Philippians 4:11: <em>“I have learned, in whatsoever state I am, therewith to be content.”</em> Not born into contentment. Learned. There’s a whole world in that distinction.</p>

<p>Romans 12:19 on justice not being mine to administer. Galatians 6:4 on comparing your work only to your own prior work.</p>

<p>I had read these before. I had nodded at them in church. They had never landed.</p>

<p>Then I started reading Marcus Aurelius. The <em>Meditations</em> are the private journal of a Roman emperor, a man with more power than almost anyone in history, writing to himself about why he should not be controlled by his emotions. He wrote: <em>“It isn’t the thing that happens that disturbs you. It’s your judgment about the thing.”</em></p>

<p>That sentence unlocked something.</p>

<p>I don’t fully understand why the Stoic framing made the Scripture accessible when it hadn’t been before. Maybe it was the emotional distance of an ancient Roman emperor, someone with no stake in my situation. Maybe it was, as someone wiser than me once put it, that when the student is ready, the teacher appears. Whatever the mechanism, the two traditions reinforced each other in a way that finally got through.</p>

<h2 id="anxiety-lives-in-the-future-anger-lives-in-the-past-neither-is-now">Anxiety Lives in the Future. Anger Lives in the Past. Neither Is Now.</h2>

<p>Another friend offered me what turned out to be a diagnostic tool I’ve used every day since.</p>

<p>He said: if you’re anxious, the event hasn’t happened yet and you can’t do anything about it. If you’re angry, the event already happened and you can’t change it. Either way, the only place where you can actually do anything is now.</p>

<p>I started noticing which emotion I was in. Anxiety about a decision not yet made. Anger about an outcome already locked in. In both cases, I was paying an emotional tax on something I couldn’t touch.</p>

<p>A mentor of mine has modeled this for years without ever naming it. He maintains an even keel in situations that would send most people into orbit. I always admired it. I finally understood it.</p>

<h2 id="this-is-not-apathy-this-is-better-than-apathy">This Is Not Apathy. This Is Better Than Apathy.</h2>

<p>I want to be clear about what changed, because I think it gets misread.</p>

<p>I am not less passionate. I am not checked out. I have not stopped caring about my team, my organization, or the outcomes I’m responsible for. I still feel the weight of the work.</p>

<p>What I let go of was the anguish. The scorekeeping. The elder son standing in the yard, refusing to go to the party because the accounting didn’t add up the way he thought it should.</p>

<p>The elder son’s math wasn’t wrong. He had been faithful. His brother had not. The father’s response wasn’t fair by any conventional measure. But the elder son’s insistence on fairness cost him the feast. He was right and miserable. He could have been fed.</p>

<p>Jeanie Bueller was also right. Ferris did ditch. The universe did cover for him. None of that was fair. But her rightness was making her life worse, not better. The kid in the leather jacket saw it. She eventually saw it too.</p>

<p>What I’ve found is not detachment. It’s something closer to what Paul described as contentment. A settled ability to do the work in front of you without requiring the system to validate your effort, without waiting for the accounting to balance before you allow yourself peace.</p>

<p>My team can only do what the laws of physics allow. I’ve stopped fighting that. We do excellent work within real constraints, and I’ve let go of the anxiety that came from pretending otherwise.</p>

<p>The decisions still get made. The pressure is still real. I still advocate hard for my people.</p>

<p>But I’m not Jeanie in the police station anymore.</p>

<h2 id="the-question-worth-sitting-with">The Question Worth Sitting With</h2>

<p>If you’re a leader and something in this landed, I’d ask you to sit with one question before you move on.</p>

<p>Where are you the elder son right now?</p>

<p>Not whether someone else is being rewarded unfairly. Not whether the system is broken. It probably is. The question is whether your insistence on that accounting is costing you something.</p>

<p>You can be right about the injustice and still choose to go inside to the feast.</p>

<p>You can care deeply about your work and still put down the scoreboard.</p>

<p>You can be passionate without being controlled.</p>

<p>That shift, for me, has been one of the most significant things to happen in a long career across very different worlds. It didn’t come from a training program. It came from a combination of Scripture I finally heard, a philosopher who died in 180 AD, a couple of honest friends, and one unforgettable scene in a John Hughes movie.</p>

<p>The student was ready.</p>

<p>The teachers showed up.</p>]]></content><author><name></name></author><category term="Personal" /><category term="Leadership" /><summary type="html"><![CDATA[There’s a moment near the end of Ferris Bueller’s Day Off that I’ve thought about more than any leadership book I’ve read in the last year.]]></summary></entry><entry><title type="html">The Helmet Rule Nobody Wants to Enforce</title><link href="https://peter.zaffina.net/blog/Gordie-Howe/" rel="alternate" type="text/html" title="The Helmet Rule Nobody Wants to Enforce" /><published>2026-03-19T00:00:00+00:00</published><updated>2026-03-19T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Gordie-Howe</id><content type="html" xml:base="https://peter.zaffina.net/blog/Gordie-Howe/"><![CDATA[<p>Gordie Howe played the last years of his NHL career without a helmet. The league had mandated them, but a grandfather clause let veterans who had played without one continue to do so. Nobody was stripping that choice from Mr. Hockey. He had earned the right to decide how he protected his own head, and the league respected it.</p>

<p>I grew up in Detroit. That image stayed with me. And I keep coming back to it as organizations wrestle with one of the more honest questions in AI adoption: when do you let people opt out, and when does the organization’s need for a performance floor override individual preference?</p>

<h2 id="the-autonomy-instinct-is-correct-up-to-a-point">The Autonomy Instinct Is Correct, Up to a Point</h2>

<p>The reflex to protect worker choice around AI is not just politically convenient. It reflects something real about how adoption actually works. Forcing tools on people who distrust them produces compliance theater, not capability. You get surface-level usage, no genuine integration, and resentment that poisons the well for everything that follows. The organizations that mandate AI use without building trust first are not accelerating adoption. They are building a case study in why it fails.</p>

<p>Autonomy also reflects a legitimate professional concern. When someone has spent twenty years developing judgment in a domain, and a tool arrives that claims to do that job in seconds, skepticism is not ignorance. It is a reasonable response to an unproven claim. The expert who pushes back on AI is not always defending ego. Sometimes they are defending accuracy, and they are right to make the organization prove it before capitulating.</p>

<p>But the grandfather clause only worked for Gordie Howe because Gordie Howe was Gordie Howe. The rest of the league needed the helmet. Individual exception is not organizational strategy.</p>

<h2 id="the-reskilling-problem-is-real-and-it-is-early">The Reskilling Problem Is Real, and It Is Early</h2>

<p>Here is where the honest answer gets uncomfortable. Most organizations are having the autonomy conversation before they have solved a more foundational problem: they do not yet know what reskilling for an AI-augmented workforce actually looks like. The tools are evolving faster than training curricula. The job categories most affected are not yet fully defined. The skill gap between workers who use AI fluently and workers who do not is measurable today, but the ceiling on that gap is still unknown.</p>

<p>That uncertainty cuts in two directions. It argues against forcing workers into rigid AI workflows that may be obsolete in eighteen months. It also argues against letting individual preference determine capability investment, because the organization cannot afford to find out two years from now that half its workforce opted out of the future.</p>

<p>The reskilling question is not primarily a training question. It is a strategy question. What does this organization need its people to be able to do, and over what timeframe? Most leadership teams have not answered that with enough specificity to build a workforce plan behind it. They are making policy decisions without a map.</p>

<h2 id="productivity-is-not-optional-but-the-floor-is-negotiable">Productivity Is Not Optional, But the Floor Is Negotiable</h2>

<p>The pressure organizations feel to show AI productivity gains is real. Boards are asking. Competitors are claiming results. The temptation is to set a high floor fast, mandate adoption broadly, and report the numbers. That approach tends to produce exactly the compliance theater described above, plus a layer of workforce anxiety that is genuinely difficult to undo.</p>

<p>A more defensible position is to define the minimum viable capability the organization needs across roles, and build toward that deliberately rather than universally. Not every role needs the same level of AI fluency. Not every workflow benefits equally from augmentation. The organizations that will win this are not the ones that adopted fastest. They are the ones that adopted with enough precision to build real capability rather than surface coverage.</p>

<p>That still requires setting a floor. Autonomy cannot mean indefinite deferral. At some point the organization has to say: this is what working here requires, and we are committed to helping you get there. That is not coercion. That is a functioning employment relationship with a point of view about the future.</p>

<h2 id="the-conversation-most-organizations-are-avoiding">The Conversation Most Organizations Are Avoiding</h2>

<p>The reason the helmet analogy holds is not the helmet itself. It is the grandfather clause, and what it says about how institutions manage transition. The NHL was not wrong to let Howe make his own choice. But it did not extend that option indefinitely or universally. It was a bounded accommodation during a transition period, not a permanent policy.</p>

<p>That is the frame most organizations need and few have adopted. Not a mandate, not a free-for-all, but a defined transition window with real support, a clear destination, and an honest conversation about what comes after it closes. Workers deserve to know what the organization actually needs from them. Leadership owes them that clarity before it owes them a policy.</p>

<p>The organizations getting this right are not the ones moving fastest. They are the ones who had the harder internal conversation first, defined what they were actually building toward, and then brought their workforce along with enough lead time to make it real. That conversation is harder than deploying a tool. It is also the only one that matters.</p>]]></content><author><name></name></author><category term="AI" /><category term="Humane AI" /><category term="Responsible AI" /><summary type="html"><![CDATA[Gordie Howe played the last years of his NHL career without a helmet. The league had mandated them, but a grandfather clause let veterans who had played without one continue to do so. Nobody was stripping that choice from Mr. Hockey. He had earned the right to decide how he protected his own head, and the league respected it.]]></summary></entry><entry><title type="html">Your AI Program Is a Mirror</title><link href="https://peter.zaffina.net/blog/AI-Project-Mirror/" rel="alternate" type="text/html" title="Your AI Program Is a Mirror" /><published>2026-03-18T00:00:00+00:00</published><updated>2026-03-18T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/AI-Project-Mirror</id><content type="html" xml:base="https://peter.zaffina.net/blog/AI-Project-Mirror/"><![CDATA[<p>If your AI projects are struggling, the instinct is to look at the technology first: the model selection, the vendor relationship, 
the data pipeline, the talent gap. Those are real problems, but they are not the root problem.</p>

<h2 id="what-the-failures-are-actually-telling-you">What the failures are actually telling you</h2>

<p>AI projects fail for the same reasons complex projects have always failed. They inherit the delivery system they are launched into, 
and if that system was already broken, AI makes the damage more visible and more expensive.</p>

<p>This is not a comfortable diagnosis for organizations that have spent the last two years treating AI as a special category of 
initiative. It is not special. It is a complex, cross-functional program that requires clear ownership, disciplined scope management, 
meaningful stakeholder alignment, and a governance structure that can absorb ambiguity without collapsing into delay. If your 
organization could not reliably deliver those things before, it cannot deliver them now. The technology does not fix the organizational 
substrate it runs on.</p>

<h2 id="the-mirror-is-honest">The mirror is honest</h2>

<p>Every delivery failure carries a signature. Scope that expands without consequence. Sponsors who are nominally accountable 
but operationally absent. Steering committees that produce documentation rather than decisions. Dependencies that surface late 
because no one was tracking them. These patterns do not originate in AI programs. They exist in your ERP rollouts, your digital 
transformation initiatives, your platform migrations. AI simply makes them harder to hide because the iteration cycles are faster, 
the ambiguity is higher, and the organizational expectations are more exposed.</p>

<p>What I see consistently is leadership that treats AI as categorically different from prior technology programs, which leads 
them to suspend the standard of rigor they would apply elsewhere. Governance structures that would be considered inadequate 
for a significant infrastructure project get approved for AI initiatives because the technology feels new and the rules feel
negotiable. They are not. The same discipline that produces reliable delivery in other domains is the discipline AI requires.</p>

<h2 id="what-mature-delivery-systems-already-know">What mature delivery systems already know</h2>

<p>The organizations I see realizing genuine AI returns are not necessarily the ones that moved fastest or spent the most. They are the
ones that arrived at AI with delivery systems already built to resist distraction. That capability did not develop in response 
to AI. It was there before, refined through years of making hard prioritization calls, killing initiatives that could not 
demonstrate compounding value, and holding the line against whoever walked in with the most exciting new thing.</p>

<p>That discipline is structural, not cultural. It lives in how investment decisions get made, how scope changes get evaluated, 
how benefits realization is tracked over time. When AI arrived, those organizations did not need to invent new governance. 
They applied what they had, and it worked, because the fundamentals were already sound. The bright and shiny problem was already 
solved. AI was just another program that had to prove its value through the same filter as everything else.</p>

<h2 id="the-root-question-is-not-about-ai">The root question is not about AI</h2>

<p>Before asking whether your organization is ready for AI, the more useful question is whether your program management function 
is ready for complexity. Not AI complexity specifically, but the ordinary complexity of initiatives that cross organizational
boundaries, require iterative decision-making, and produce value in ways that are difficult to measure at the outset. If the 
honest answer is no, or not reliably, then AI investment will continue to produce disappointing returns regardless of what you 
spend on infrastructure or tooling.</p>

<p>This is where the conversation needs to go at the board and executive level. Not “are we investing enough in AI?” 
but “do we have the delivery capability to convert that investment into outcomes?” Those are different questions, 
and conflating them produces the wrong remediation. Organizations that answer the first question while ignoring the 
second will keep funding programs that fail for reasons they misattribute to technology.</p>

<h2 id="what-sound-leadership-looks-like-here">What sound leadership looks like here</h2>

<p>Fixing the delivery system is not glamorous work. It does not generate press releases. But it is the precondition for AI value, 
not an alternative to it. That means being honest about where program governance is weak, where accountability is diffuse, and 
where organizational design is creating friction that no amount of technical capability can overcome.</p>

<p>The organizations that will build durable AI capability are not necessarily the ones that moved first. They are the ones that
treated AI as a forcing function to strengthen their delivery infrastructure, held that work to the same standard of rigor they
apply to any consequential program, and refused to let the novelty of the technology become an excuse for the absence of discipline.</p>

<p>Your AI program is showing you what your delivery system is. The question is whether leadership is willing to look.</p>]]></content><author><name></name></author><category term="AI" /><category term="Project Management" /><summary type="html"><![CDATA[If your AI projects are struggling, the instinct is to look at the technology first: the model selection, the vendor relationship, the data pipeline, the talent gap. Those are real problems, but they are not the root problem.]]></summary></entry><entry><title type="html">You’re Doing AI the Same Way You Did Cloud</title><link href="https://peter.zaffina.net/blog/Cloud-and-AI-Pattern/" rel="alternate" type="text/html" title="You’re Doing AI the Same Way You Did Cloud" /><published>2026-03-14T00:00:00+00:00</published><updated>2026-03-14T00:00:00+00:00</updated><id>https://peter.zaffina.net/blog/Cloud-and-AI-Pattern</id><content type="html" xml:base="https://peter.zaffina.net/blog/Cloud-and-AI-Pattern/"><![CDATA[<p>What the cloud era actually taught us about building AI capability, and why most organizations are skipping it again.</p>

<h2 id="the-lesson-everyone-skipped-the-first-time">The Lesson Everyone Skipped the First Time</h2>

<p>The current sprint toward AI adoption is following a script organizations have run before, and most of them are making the same mistake they made the first time. In the early years of cloud and DevOps, the pressure was to move fast and show wins. Get into AWS. Ship something. Justify the budget. What got lost in that rush was the more important question: what actually made the hyperscalers fast? The answer was not the geography of the compute. It was the tooling, the developer experience, the operational discipline, and the internal platforms they built on top of commodity infrastructure. Organizations that chased the cloud without building that foundation ended up with a distributed mess that was more expensive and harder to govern than what they replaced.</p>

<p>AI adoption today looks nearly identical. The pressure is the same. The reasoning is the same. And the foundational work is being skipped for the same reasons.</p>

<h2 id="what-the-cloud-actually-sold-you">What the Cloud Actually Sold You</h2>

<p>When enterprises moved workloads to AWS or Azure a decade ago, the speed gains were real, but they were not coming from the fact that the servers were in someone else’s data center. They came from abstraction. Infrastructure as code. CI/CD pipelines. Managed services that eliminated toil. Observability built into the platform. A developer experience that made provisioning, deploying, and monitoring something a single team could own without filing a ticket and waiting three weeks. The hyperscalers had spent years building internal platforms, and cloud customers were essentially renting access to that operational maturity.</p>

<p>Most organizations did not recognize this at the time. They moved to the cloud to go faster and cut costs, checked both boxes in year one, and declared victory. Then the bills arrived. Then the sprawl arrived. Then the security incidents arrived. What they had actually done was outsource the hard work of building an engineering platform rather than build it themselves. Speed was real, but it came with a rental agreement on their own future: vendor dependency, data residency risk, and unit economics that deteriorated as scale increased.</p>

<p>The organizations that came out of that era well were the ones that used cloud adoption as a forcing function to build real internal platform capability. They took the DevOps patterns, the infrastructure automation, the deployment discipline, and they made those things native to how their engineering organization operated, regardless of where the workload ran. For them, on-premises and cloud became genuinely interchangeable from a developer’s perspective. The platform was the asset. The infrastructure was a detail.</p>
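<p>To make that abstraction concrete, here is a minimal sketch in Python. Every name in it is hypothetical, invented for illustration rather than taken from any real platform; the point is only the shape of the contract, where application teams code against one internal interface and the platform team decides where the workload actually runs.</p>

<pre><code>from abc import ABC, abstractmethod


class DeployTarget(ABC):
    """The internal platform contract. Application teams see only this."""

    @abstractmethod
    def deploy(self, image: str, replicas: int) -> str:
        ...


class OnPremCluster(DeployTarget):
    def deploy(self, image: str, replicas: int) -> str:
        # A real platform would call the on-prem orchestrator here.
        return f"scheduled {replicas}x {image} on the on-prem cluster"


class ManagedCloudService(DeployTarget):
    def deploy(self, image: str, replicas: int) -> str:
        # A real platform would call a hyperscaler's managed service here.
        return f"scheduled {replicas}x {image} on a managed cloud service"


def ship(target: DeployTarget, image: str) -> None:
    # The calling team neither knows nor cares which implementation it got.
    print(target.deploy(image, replicas=3))


# The platform team swaps the target; nothing upstream changes.
ship(OnPremCluster(), "billing-api:1.4.2")
ship(ManagedCloudService(), "billing-api:1.4.2")
</code></pre>

<p>The design choice worth noticing is that the interface, not either implementation, is the asset. That is the thing the strong cloud-era organizations actually built.</p>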

<h2 id="the-same-mistake-faster">The Same Mistake, Faster</h2>

<p>AI is moving faster than cloud did, which means the window for making this mistake is shorter and the consequences of making it are larger. The current pattern is organizations standing up AI tooling, often through approved SaaS vendors, and calling it an AI strategy. Procurement moves. A policy gets written. A few pilots run. The board gets a slide deck. What does not happen is the foundational work: data infrastructure that can actually support AI workloads, governance models that are operational rather than aspirational, security controls that treat AI outputs as first-class risk surface, and the internal platform capability that would let teams build and deploy AI applications with the same safety and efficiency they expect from modern software delivery.</p>

<p>The result is the same as the cloud era. Fast wins, real enough to sustain momentum and justify the spend, sitting on top of a foundation that will not hold weight when the organization tries to scale. The AI equivalents of cloud sprawl and shadow IT are already appearing. Ungoverned model usage. Sensitive data moving through third-party APIs under terms that have not been reviewed by anyone who understands the exposure. Teams building on top of vendor APIs with no portability and no negotiating position. Uninformed exposure with a policy veneer over it.</p>

<h2 id="the-foundational-work-is-the-strategy">The Foundational Work Is the Strategy</h2>

<p>The organizations that will have genuine AI capability in five years are the ones that treat the current moment as an opportunity to build the platform, not just run the pilots. That means investing in data infrastructure that makes clean, governed, accessible data a solved problem for engineering teams. It means building internal AI development patterns and tooling that give developers on-premises the same leverage and experience they get from cloud-based AI services. It means treating security and governance as platform features rather than compliance checkboxes applied after the fact.</p>
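<p>The same idea, governance as a platform feature, can be sketched in a few lines. This is a toy, not any vendor's API; the names and the policy check are invented for illustration. What it shows is a single internal gateway that every model call passes through, so policy enforcement and audit logging are properties of the platform rather than obligations remembered, or forgotten, by each team.</p>

<pre><code>import datetime
import json

# Stand-in for a real data-classification check, purely illustrative.
BLOCKED_TERMS = ("ssn", "password")


def call_model(prompt: str, user: str, model_fn) -> str:
    """Every model invocation in the organization routes through here."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise PermissionError("prompt rejected by data-handling policy")

    response = model_fn(prompt)

    # The audit trail is produced by the platform, by construction.
    print(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }))
    return response


def fake_model(prompt: str) -> str:
    # Stands in for whatever provider the platform routes to.
    return "echo: " + prompt


print(call_model("summarize the quarterly roadmap", "some.user", fake_model))
</code></pre>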

<p>This is not an argument against cloud AI services. It is an argument for understanding what you are buying when you use them and what you are giving up. For many organizations, the right posture is hybrid. Run workloads where it makes operational and economic sense to run them. But do not confuse access to someone else’s platform with having a platform of your own.</p>

<p>The developers do not actually care whether the infrastructure is on-premises or in a hyperscaler’s data center. They care whether it is fast, reliable, and easy to work with. The organizations that built that experience internally during the cloud era did not find themselves locked out of speed or capability. They found themselves less dependent, more governable, and better positioned to make rational infrastructure decisions based on actual requirements rather than accumulated switching costs. The same outcome is available now, for the same reasons, to the organizations willing to do the foundational work while everyone else is busy shipping demos.</p>]]></content><author><name></name></author><category term="AI Strategy" /><category term="Enterprise Technology" /><category term="Cloud Computing" /><category term="DevOps" /><category term="Digital Transformation" /><summary type="html"><![CDATA[What the cloud era actually taught us about building AI capability, and why most organizations are skipping it again.]]></summary></entry></feed>