
Five Predictions for 2026. The Year Architecture Calls Your Bluff


Why the winners will not be the ones who spent the most on AI, but the ones who rebuilt how decisions actually get made


By Michael Carroll


On paper, 2026 will look like a golden year. Capital will be abundant. Governments will incentivize you to reshore. Your board wants growth and resilience at the same time. Your CIO has a roadmap with more colored swimlanes than an Olympic pool.

Yet if you are honest, you feel a different truth in your gut. You can no longer say with a straight face that your organization knows how to make things, build things, and grow. You know how to buy things. You know how to present things. You know how to talk about transformation.

But if you had to start from a blank site and build a plant, a supply chain, and an integrated business that could actually move at the speed of events, could you do it without recreating the same staircase of delay that is slowly killing you now?

2025 was the year of agent washing. Everyone discovered the word “agent” and glued it onto dashboards, chatbots, and tired workflows. Very few changed the architecture of who is allowed to act and how fast.

2026 is different. This is the year architecture calls your bluff. The year capital walks away from first generation AI and SaaS as overhead, and toward second generation, agentic architectures that can prove they remove burden and latency, and that self-deploy. The year you are forced to admit that efficiency is not another round of cost cuts. It is your capacity to navigate complexity at the speed required to make, build, and grow.

Here are five predictions about how that will play out. If you sit in the CEO, COO, or CFO chair, these are not trends to watch. They are decisions you either make deliberately or have made for you.

The Question That Sounds Too Obvious to Matter

Consider a question so obvious it almost feels insulting to ask. If you were offered a stock option with a fair strike price, asymmetric upside, and time on your side, would you accept it? Of course you would. Refusing it would be irrational. Would you then exercise it immediately, converting it at the strike price the moment it was granted, surrendering all future upside simply to say you acted? Of course you would not. Doing so would destroy the value of the option.

Now add the part most people forget to price. To hold that option intelligently, you pay a subscription. Not a fee to a broker, but the ongoing cost of staying connected to the information that tells you when exercising becomes rational. You keep the signal on. You keep your attention available. You keep your posture flexible. You do not pay that cost because it matters to you now. You pay it because the subscription is cheaper than regret, and because it buys you the right timing to exercise your option.
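The option arithmetic above can be sketched in a few lines. The numbers below are purely illustrative assumptions, not figures from the essay or any company; the point is only that a modest subscription is rational whenever the expected upside it protects exceeds its cost.

```python
# A minimal sketch of the option-and-subscription logic, with assumed numbers.

def exercise_now(intrinsic_value: float) -> float:
    """Value captured by converting the option immediately."""
    return intrinsic_value

def hold_and_monitor(intrinsic_value: float, upside: float,
                     p_upside: float, subscription_cost: float) -> float:
    """Expected value of keeping the option open while paying to stay informed.

    p_upside: assumed probability the favorable state arrives while we wait.
    subscription_cost: the ongoing cost of signal fidelity and readiness.
    """
    expected_payoff = (p_upside * (intrinsic_value + upside)
                       + (1 - p_upside) * intrinsic_value)
    return expected_payoff - subscription_cost

# Waiting dominates early exercise whenever p_upside * upside
# exceeds the subscription cost.
now = exercise_now(100.0)
later = hold_and_monitor(100.0, upside=50.0, p_upside=0.6,
                         subscription_cost=10.0)
print(now, later)  # 100.0 vs 120.0 under these assumed numbers
```

The comparison is deliberately crude; its only job is to make the trade visible: the subscription is a cost you pay so that timing, not anxiety, decides when to exercise.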

In the COO Council, this is the line that keeps returning. Enterprises pay the subscription either way. They pay it deliberately, in signal fidelity and decision readiness. Or they pay it later, in rework, escalation, and the costliest thing in any company, the moment you realize you acted too early, too late, or for the wrong reason.

This logic is not sophisticated. It is fundamental. It governs how rational people think about capital, risk, and opportunity every day. And yet, inside enterprises, this same logic collapses with remarkable consistency. Organizations convert insight into action the moment it appears. They treat knowledge as obligation. They behave as though every signal demands a response, every forecast requires a decision, every model output must be acted upon. In doing so, they exercise the future the moment it becomes visible, surrendering the very leverage that visibility was meant to create.

When stated plainly, this behavior does not sound foolish. It sounds reasonable. And that distinction matters, because reasonable behavior persists precisely because it feels justified at the time.

The Real Reason This Happens

It is tempting to conclude that organizations behave this way because leaders misunderstand the economics. They do not. Most CEOs understand optionality deeply. They live it in capital allocation, in acquisitions, in talent bets, in personal finance. They know, often instinctively, that waiting with discipline can create more value than acting early with confidence. So why does this wisdom evaporate inside the operating rhythm of the enterprise?

The answer is not logic. It is pressure.

Holding the future open feels safe in theory. In practice, it feels exposed. Inside an organization, unresolved decisions do not sit quietly. They invite questions. They attract scrutiny. They demand explanation. Waiting is rarely interpreted as strategic restraint. It is interpreted as indecision. Humans are wired to resolve uncertainty. Closure feels like progress. Action feels responsible. Ambiguity feels like risk. In simple systems, this instinct works. In complex systems, it becomes destructive.

Thomas Roemer’s culture work puts a name on the hidden mechanism. Culture is operating code. It decides whether waiting is treated as discipline or drift. It determines how escalation works, how dissent is handled, what gets rewarded, what gets tolerated, and whether leaders can keep the option open without turning the room into a referendum on their competence.

Why Action Wins, Even When Timing Loses

Action performs a social function. It signals control. It signals engagement. It signals leadership. Even when action is premature, it provides narrative cover. Based on what we knew at the time, the explanation goes, this was the right call. Waiting offers no such protection.

Waiting requires a leader to say the timing is not right yet, and then to pay the subscription required to hold that position through board questions, operating reviews, hallway conversations, and private doubt. The subscription is not only informational. It is political and emotional. It is the cost of keeping signal fidelity and credibility intact while you refuse to trade optionality for relief.

Veena Lakkundi’s M&A work makes this contradiction visible. In deal logic, leaders praise patience and optionality. In integration, they collapse it. Operating models collide, decision rights tangle, and the impulse to standardize fast destroys the very advantage the acquisition was meant to buy. Synergy does not come from spreadsheets. It comes from reconciling architectures, harmonizing decision cadence, and keeping the acquired business from being buried under the parent’s burden.

Why Hindsight Keeps Winning

Hindsight becomes seductive because it is emotionally efficient. After outcomes are known, narratives align. Causes can be named. Responsibility can be distributed. Even failure can be made coherent. The past, once fixed, is always explainable.

The future offers no such comfort. Before outcomes materialize, there are no clean stories. There is only judgment exercised without certainty. That judgment feels personal. It feels exposed. This is why enterprises default to hindsight even as they claim to value foresight. Not because leaders prefer to look backward, but because hindsight offers protection the future does not.

Ellis Allen Jones's safety lens removes any remaining romance. Safety is where the subscription is non-negotiable. You either stay connected to weak signals, near misses, drift, and the conditions that precede incidents, or you pay later in harm. Risk is not a poster. It is architecture, and safety is the proof case for whether an enterprise can convert learning into prevention at speed.

Why First-Generation Agents Don’t Fix It

What the market often calls first generation agents are not agents in the structural sense. They are AI systems that accelerate analysis, summarize information, and surface recommendations, while leaving inference and deployment exactly where they have always lived, with humans.

In these systems, AI may speak more fluently, but the organization still thinks, decides, and acts through people. Degrees of separation remain intact. Timing pressure remains human. Optionality still collapses early. They lower the cost of analysis, but they do not lower the subscription cost of timing, because humans still carry the burden of staying connected to meaning, thresholds, and readiness.

Jim Beilstein’s AI work sharpens the dividing line. AI is already everywhere. The differentiator is not whether you have AI. It is whether AI comes after the operating system is rebuilt. Being AI second means you start with decision rights, governance, data contracts, and runtime behavior. Then you bind intelligence to that structure so the system can hold signal fidelity, preserve optionality, and surface action only when thresholds are crossed. That is how you stop forcing humans to relitigate the world on every prompt, and start letting the enterprise behave with discipline at speed.

What Agents Must Become

Properly designed agents do not exist to make decisions for the enterprise. They exist to decide when the enterprise should decide. Second generation agents are defined not by autonomy rhetoric, but by the relocation of inference and deployment from human intermediaries into the system itself, collapsing the distance between intent and action.

They carry hypotheses forward in time. They monitor only the variables tied to intent. They surface evidence only when thresholds that matter are crossed. Waiting becomes a system property, not a personal gamble. The system pays the subscription on behalf of the organization. It keeps the signal on without exhausting the enterprise.

Prediction 1. Capital Walks Away from First Generation AI and SaaS as a Tax

For the last five years, almost any project with “AI” on the cover slide could find money. First generation AI sold you comfort. It told you there was meaning in one more predictive score, one more dashboard, one more “insight” about last quarter.

In 2026, that mood breaks. Politely at first. More ruthlessly in the end.

Boards and CFOs will start asking the only questions that matter about every AI and SaaS dollar. How many minutes of decision latency does this remove. How many human handoffs does this eliminate. How much cognitive burden does this take off the frontline.

If the project cannot answer in real numbers, not stories, the renewal gets cut. The roadmap gets moved “beyond the planning horizon”. The vendor gets one last polite email.

The pattern looks like this. Budgets move away from first generation AI that behaves like a rearview mirror, consolidating and decorating the past right at the moment your operations need to lean into the future. Budgets also move away from SaaS sprawl that adds icons to the desktop while stealing time, attention, and energy from the people closest to the work.

Money moves toward second generation, agentic architectures that hold permission, context, and memory. They live on the edge with your people, not up in the tower with your dashboards. They can show a simple causal chain from their existence to cycle time, quality, safety, and cash.

Here is the tell. First generation spend asks the enterprise to pay the subscription in humans. More interpretation, more governance, more meetings, more “alignment.” Second generation spend pays the subscription inside the system. It keeps signal fidelity, posture, and readiness alive without turning your leadership team into an inference engine.

You will know you are on the wrong side of this reallocation if your AI portfolio produces more slideware than step removal. You will know you are on the right side when your operators, schedulers, planners, and sales teams can point to one agent and say, without jargon, that thing took this work off my back, and nobody asked me to learn another system to get it.

This is not an ideology shift. It is a capital allocation shift. First generation AI comforts you that you own the latest tools. Second generation AI changes the architecture of work. In 2026, the money starts to know the difference.

Prediction 2. The Architecture of Burden Is Exposed. Menus Lose and Taskless Surfaces Begin to Win

For thirty years we treated menu driven interfaces as progress. ERP, CRM, MES, HCM, PLM, you know the alphabet. The story was always the same. Standardize the process. Put it in a system. Train people to follow the screens. Measure compliance.

You did all of that. Now look around. Your most experienced people spend their days acting as human middleware between systems that do not talk to each other. New hires take six months to learn the map of menus and passwords before they are useful. Managers prepare reports about reports. Everyone feels busy. Few feel effective.

In 2026, this gets named for what it is. An architecture of burden. Leaders begin to measure burden instead of celebrating it in glossy adoption reports. They start counting clicks per transaction, systems touched per role per shift, and minutes of every day spent navigating, reconciling, and correcting systems rather than doing the work the company actually exists to do.
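Burden accounting of this kind is simple enough to sketch. The event-log shape and field names below are invented for illustration; a real deployment would feed something like this from system telemetry rather than hand-built records.

```python
# A hedged sketch of burden metrics per role: clicks per shift, distinct
# systems touched, and minutes spent navigating rather than working.
from collections import defaultdict

def burden_metrics(events):
    """events: dicts with 'role', 'system', 'clicks', 'minutes_navigating'.

    These field names are assumptions for the sketch, not a real schema.
    """
    clicks = defaultdict(int)
    systems = defaultdict(set)
    nav_minutes = defaultdict(float)
    for e in events:
        clicks[e["role"]] += e["clicks"]
        systems[e["role"]].add(e["system"])
        nav_minutes[e["role"]] += e["minutes_navigating"]
    return {
        role: {
            "clicks": clicks[role],
            "systems_touched": len(systems[role]),
            "navigation_minutes": nav_minutes[role],
        }
        for role in clicks
    }

# One scheduler, one shift, three systems: the human-middleware tax made visible.
shift = [
    {"role": "scheduler", "system": "ERP", "clicks": 42, "minutes_navigating": 12.5},
    {"role": "scheduler", "system": "MES", "clicks": 31, "minutes_navigating": 9.0},
    {"role": "scheduler", "system": "CMMS", "clicks": 17, "minutes_navigating": 6.5},
]
print(burden_metrics(shift)["scheduler"])
# {'clicks': 90, 'systems_touched': 3, 'navigation_minutes': 28.0}
```

Nothing in this sketch is sophisticated, which is the point: the reason these numbers go unmeasured is not difficulty but the fact that nobody owns them.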

The conclusion is not sentimental. We did not just digitize the work. We digitized the friction. We industrialized distraction and called it transformation.

The winners start to design the opposite. Not more apps. Not prettier menus. One simple surface, many systems. A taskless, agentic layer sits on top of your legacy stack. It uses the plumbing of ERP, MES, CMMS, LIMS, CRM, but it does not ask humans to live down there. It knows who the user is, understands the context of the asset, order, student, or customer, and preassembles the transaction, checks constraints, and proposes the right next action. The human confirms, corrects, or escalates. They do not hunt through seventeen screens to find a field someone forgot to map.

This is also where the subscription shows up with teeth. A burdened interface forces the enterprise to pay the subscription in human attention, every hour of every day, just to stay oriented. A taskless surface pays it in software, so readiness is maintained without turning the workforce into full time navigators.

I expect at least one major company to make this explicit in 2026. They will admit, in plain language, that they wrote off a massive ERP or SaaS front end and kept the records of truth while replacing the interface with an agentic layer that removed thousands of hours of navigation from the system.

Analysts will catch up. A new distinction appears in coverage. Software that adds burden versus software that absorbs it. The former starts trading at a discount. The latter gets treated as real leverage.

If you are still buying menus instead of agents by 2026, you are not modernizing. You are adding layers of rust to a machine that can already barely turn.

Prediction 3. Decision Latency and the Permission Staircase Become Board Discussions at the Best Companies

Every CEO talks about speed. Very few can draw how a decision actually moves through their organization.

In 2026, that gap starts to close. The companies that matter will know their permission staircase the way they once knew their org chart, and they will talk about it relentlessly in the boardroom.

Here is the staircase in plain terms. A signal appears at the edge. A sensor reading goes out of range, a quality check fails, a customer threatens to churn, a student’s mastery drifts, a supplier misses a window. Then come the approvals, reviews, and escalations required before someone can bind the company to an action. At the top sits the minimum level where authority and risk appetite are allowed to intersect.

Right now, most of that staircase is invisible. You feel it as delay. You experience it as “we are working the issue.” Your people experience it as frustration and learned helplessness.

In 2026, leading COOs and CFOs will put three artifacts in front of their boards, and they will talk through them in plain language. They will show decision latency curves for critical decision classes, from safety corrections to quality escapes to maintenance interventions to pricing moves to credit approvals to customer remediation. They will show the permission staircase map itself, how many steps each decision class climbs, and which steps exist because no one ever took a red pen to them. They will show where agents already own or share steps inside pre agreed guardrails, and where human escalation remains required for legal, ethical, or reputational reasons.
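A permission staircase can be written down as data and measured. The step names, owners, and latencies below are hypothetical, but the exercise is the point: once the staircase is explicit, total signal-to-action latency and the share of it spent waiting on humans are one function call away.

```python
# A minimal sketch of a permission staircase, with invented steps and hours.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    owner: str            # "human" or "agent"
    latency_hours: float  # assumed time this step adds before action

def staircase_latency(steps):
    """Return total signal-to-action latency and the human share of it."""
    total = sum(s.latency_hours for s in steps)
    human = sum(s.latency_hours for s in steps if s.owner == "human")
    return total, (human / total if total else 0.0)

# A hypothetical quality-escape staircase: detection is instant,
# permission is not.
quality_escape = [
    Step("detect out-of-spec reading", "agent", 0.1),
    Step("shift supervisor review", "human", 4.0),
    Step("quality engineer sign-off", "human", 24.0),
    Step("plant manager approval", "human", 48.0),
    Step("containment action", "human", 8.0),
]
total, human_share = staircase_latency(quality_escape)
print(round(total, 1), round(human_share, 3))  # → 84.1 0.999
```

Under these assumed numbers, virtually all of the latency is climbing, not deciding, which is exactly the picture a board should be shown.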

This is where second generation AI moves out of the lab and into the operating model. You will assign agents defined steps on the staircase where they can act without human permission, because you already negotiated the boundaries in daylight. The human still matters. Humans define the rules of engagement, set the guardrails, and own the questions where the organization must look itself in the mirror.

This is also where the role of the COO becomes unmistakable. The COO is the steward of decision architecture and operating cadence. The COO owns the subscription cost of readiness. Timing is not a personality trait. It is an operating system.

The prediction is simple. By the end of 2026, the companies that survive volatility best will be the ones that can answer three questions in under five minutes, in a board discussion, without hiding behind jargon. Show me, on one page, how a real safety or quality decision moves from sensor to action in this company. Show me exactly where we have chosen to keep humans in the loop and why. Show me one staircase where agents have already cut latency without increasing risk. If you cannot do that, you are not running an AI strategy. You are running a latency strategy and pretending it is about technology.

Prediction 4. Agents That Shape Outcomes Displace Fake Agents and Causal AI’s Contribution to Agency Becomes Better Understood

2025 flooded the world with fake agents. Icons and chat windows that promised to “assist” but did little more than wrap old workflows in new language. Bots that filled forms, moved tickets, and answered FAQs while carefully avoiding the one thing that defines a real agent. Direct, accountable influence on outcomes.

In 2026, that gap closes. Agents that shape outcomes displace fake agents that decorate process. And the reason is simple. Boards and operators finally understand that there is no real agency without causality.

Your plants, networks, and customer journeys already generate more data than any human team can keep up with. First generation AI used that data to describe what happened. Sometimes it even predicted what might happen next if the world conveniently behaved like the past. It lived on the bottom rungs of the causal ladder. Seeing patterns, not levers.

Causal AI climbs higher. It can answer what is actually driving this outcome, not just moving alongside it. It can answer what will happen if we intervene here instead of there, now instead of later. It can also tell you which actions you should never take, no matter what a correlation chart suggests.

Real agents in 2026 will be built on this kind of causal understanding. They will sit inside your operating model with clear mandates, clear guardrails, and clear accountability for moving specific outcomes, not just tickets.

An automated scientist is one of those agents. It is what causal, agentic AI looks like when it shows up for work. It watches for drift in the variables that matter. It proposes causal hypotheses instead of correlation gossip. It designs low risk experiments with predicted effect sizes and clear guardrails. It runs those tests within pre approved authority, because you gave it permission once, on the staircase, under defined rules. It updates guidance, parameters, or playbooks when the evidence is strong enough, and tells you what it changed and why.
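The core of that loop can be sketched as a threshold-gated monitor. The thresholds and guardrails below are invented for illustration; in practice they are the boundaries negotiated once, on the staircase, in daylight. A real automated scientist would add hypothesis generation and experiment design on top of this skeleton.

```python
# A hedged sketch of the watch-then-act discipline: the agent keeps the
# signal on and only surfaces a bounded proposal when drift crosses a
# pre-agreed threshold.
from statistics import mean

def drift_monitor(readings, baseline, threshold, guardrail_max_adjustment):
    """Return a proposed action only when drift exceeds the threshold.

    The agent pays the subscription: it keeps watching, without escalating,
    until exercising becomes rational.
    """
    drift = mean(readings) - baseline
    if abs(drift) < threshold:
        return {"action": "keep_watching", "drift": drift}
    # Bounded intervention: never adjust beyond the guardrail set in advance.
    adjustment = max(-guardrail_max_adjustment,
                     min(guardrail_max_adjustment, -drift))
    return {"action": "propose_adjustment", "drift": drift,
            "adjustment": adjustment}

# Drift of roughly 0.2 against a threshold of 0.5: the agent waits,
# observably and defensibly, instead of escalating.
print(drift_monitor([100.2, 100.1, 100.3], baseline=100.0,
                    threshold=0.5, guardrail_max_adjustment=1.0))
```

The design choice that matters is the clamp: even when the agent acts, it acts inside limits a human set once, rather than asking a human for permission every time.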

This is where the subscription becomes practical. A real agent does not only compute. It maintains readiness. It keeps the signal on, monitors intent linked thresholds, and preserves optionality until exercising becomes rational. It reduces the cost of waiting by making waiting observable, defensible, and bounded.

Continuous improvement stops being a slow, sociable ritual. It becomes a live loop driven by causal test and learn. Engineers, teachers, planners, and supervisors gain something they have not had in years. They get out of the report making business and back into the cause and effect business. They spend their time asking better questions. What outcome matters most now. What risks are we truly willing to accept. What tradeoffs will we no longer tolerate.

The automated scientist grinds. Humans decide. Causal AI supplies the levers, the confidence intervals, and the boundaries where agents should never act.

That is what it means for an agent to shape outcomes instead of impersonating a clever help desk. Real agents stand on a causal model of your system and take bounded actions that move it. Fake agents stand on a workflow and move messages around.

In a world of reshoring and new capital projects, this difference will matter. The old model opens a new facility with thick binders of best practices and freezes them in place. The new model opens with a causal model and an automated scientist already embedded, learning the behavior of that specific plant, workforce, and supply base from day one, and adjusting in real time inside the guardrails you set.

By the end of 2026, serious boards will ask for one simple proof. Show us one loop in our business where AI does more than report and more than assist. Show us where a causal model plus an agent is actively discovering how to run this system better, under our rules, every day, and where you are willing to let it shape the outcome.

If your answer is a slide with labs and pilots, and a row of fake agents that move tickets but never touch the P&L, you are volunteering to fund someone else’s learning curve instead of your own.

Prediction 5. The Efficiency Reckoning. We Realize We Forgot How to Make, Build, and Grow

For years, “efficiency” has been code for “do the same with fewer people” or “hit the quarterly target no matter what it does to next year.” In 2026, a harsher definition takes hold. Efficiency becomes your ability to navigate complexity and decision latency at the speed required to make, build, and grow in a world that is reshoring, rearming, and reconfiguring at the same time.

Three forces collide. Reshoring and industrial policy are pulling production, energy, and critical supply chains closer to home. Capital abundance is flowing into transformation, but very little of that money touches the operating muscles that move steel, electrons, and goods. Mergers and acquisitions have been used as a substitute for organic growth, without building a repeatable muscle for integrating architectures and operating models.

The result is a strange kind of weakness. On paper, you own more assets than ever. You have more plants, more SKUs, more systems, more platforms, more data. In practice, you feel slower, more constrained, more fragile.

The reason is simple. We spent thirty years optimizing around making. Optimizing the spreadsheets, the contracts, the presentations about operations, the compliance, the stack of tools sitting between the real work and the real customer. In the process, many large companies forgot how to make things with conviction and how to grow without buying someone else’s story.

The 20-20-60 model is the cleanest way to see this. Twenty percent of productivity is doing the right things. Choosing the right products, markets, sites, and acquisitions. Twenty percent is doing those things right. Process discipline, engineering, quality, and integration. The craft. Sixty percent is staying focused on those things without being dragged sideways by structural distraction.

Most executives talk endlessly about the first twenty. Strategy offsites, M&A decks, capital plans. Some still care about the second twenty. Craft and discipline. Almost nobody owns the sixty.

That is where the architecture of burden and the permission staircase live. That is where Gen1 AI and SaaS erode your ability to operate by turning every role into a part time systems integrator. That is where complexity and latency pile up until your people cannot tell whether they are working for the customer or for the internal machine.

In 2026, the gap between companies that repair the sixty and those that ignore it will explode into the open. You will see it in reshoring. The lazy version of reshoring builds a new plant and drops the same stack of approvals, menus, and reports on top of it. The productive version designs the staircase and the agentic layer first, then wraps concrete and steel around it.

You will see it in M&A. The lazy version buys a company for its growth and then suffocates it inside the parent’s stack. The productive version treats architecture and decision speed as assets. If the acquired business has a better staircase and a cleaner edge operating model, the parent integrates into that pattern rather than forcing the smaller company into its friction.

You will see it in talent. The best operators, engineers, architects, and product leaders will migrate to the places where it is possible to do excellent work without being buried under nonsense. The others will keep posting AI transformation announcements while their best people leave.

By the end of 2026, efficiency will have a new, unforgiving measurement. How many decisions that matter can move from sensing to acting in hours, not weeks. How much of your workforce’s cognitive capacity is pointed at the customer, the product, and the asset, rather than at the tools that sit in between. How quickly you can translate industrial policy and capital incentives into real, functioning, reliable capacity without years of death by meeting.

If your executives can speak fluently about AI, M&A, and digital but cannot explain how a real decision moves on a Tuesday night in a real plant, you have your answer. You have not solved efficiency. You have renamed the problem.

What To Do Before 2026 Gets Away from You

Predictions are entertainment unless they turn into an operating agenda. Here is where you start, without another committee.

Start with one permission staircase. Pick a decision that matters, a safety intervention, a quality escape, a major customer concession, a pricing move. Map every step from signal to action. Name the handoffs, the delays, and the “we always do it this way” points. Then choose, in daylight, where agents could safely take a step under clear rules, and where human judgment must remain.

Next, find the worst piece of your architecture of burden. Identify the single role in your company that touches the most systems in a day. Stand behind them for an hour and count the clicks. Then commit, in writing, to remove half that burden within twelve months using an agentic surface that sits above the stack you already own. Not because the UX matters. Because the subscription cost of readiness is bleeding you in plain sight.

Then deploy one automated scientist in anger. Choose one variable that really matters, yield on a critical line, mastery in a grade, uptime on a bottleneck asset. Give an automated scientist enough permission and data to watch, propose tests, and tune under guardrails. Judge it by outcomes, not presentations.

Then redefine efficiency with your board. Stop reporting the number of AI projects and systems implemented as progress. Start reporting decision latency, burden removal, and focus protection as core productivity metrics, because they tell the truth about whether your enterprise can still make, build, and grow.

Finally, force every AI and SaaS dollar through the simple test. If this spend does not materially shrink the permission staircase, absorb burden, or preserve optionality by paying the subscription in the system rather than in humans, it is comfort dressed up as strategy. Fund accordingly.

You do not have to believe these predictions. Your competitors only have to act on them.

The companies that will look back on 2026 with satisfaction will not be the ones that shouted the loudest about AI. They will be the ones that rebuilt the architecture of how work, permission, and learning move through their organizations, and who used agentic AI to strip away the rust that has accumulated over thirty years of well-intentioned complexity.

Everyone else will still be admiring their dashboards.

References

This piece is anchored in the operating doctrine we have been developing through The COO Council work on decision architecture, operating cadence, and measurable decision latency, including the practical lessons emerging from the Council’s workstreams on M&A and integration discipline with Veena Lakkundi, culture as operating code with Thomas Roemer, AI as runtime governance and decision rights with Jim Beilstein, and safety as weak-signal learning and prevention architecture with Ellis Allen Jones. It also draws from LNS Research’s broader productivity benchmarking and operating model research that treats sustained performance separation as a systems problem, not a tooling problem.

The mechanism in the essay extends Michael Carroll’s published and in-progress frameworks on degrees of separation, decision latency as strategic liability, architecture of permission, the subscription cost of readiness, and the automated scientist loop as the practical unit of compounding learning.

The intellectual spine is reinforced by foundational work on bounded rationality and organizational decision-making, including Herbert Simon’s framing of limited attention and satisficing, and Cyert and March’s Behavioral Theory of the Firm on routines, negotiated reality, and why organizations default to locally reasonable behavior that becomes globally unreasonable under pressure. It is also informed by research on exploration versus exploitation and the conditions under which learning decays when incentives reward short-cycle closure, by reliability and safety scholarship that treats near misses and drift as the real data of prevention rather than posters and slogans, and by causal inference and intervention logic, including Judea Pearl’s causal ladder and the operational distinction between observing patterns, explaining mechanisms, and choosing interventions that preserve optionality until exercising becomes rational.
