[{"data":1,"prerenderedAt":114},["ShallowReactive",2],{"blog-orchestration-era":3},{"id":4,"title":5,"body":6,"date":105,"description":106,"extension":107,"meta":108,"navigation":109,"path":110,"seo":111,"stem":112,"__hash__":113},"blog/blog/orchestration-era.md","The Orchestration Era Is Transitional: What Happens When AI No Longer Needs Our Scaffolding?",{"type":7,"value":8,"toc":94},"minimark",[9,13,20,25,28,31,35,38,41,45,48,51,55,58,61,65,68,71,75,78,81,85,88,91],[10,11,12],"p",{},"After 25 years of building software and leading remote teams, I've seen plenty of technological transitions. Some innovations become permanent fixtures in our architecture, while others turn out to be temporary scaffolding that eventually gets removed. Right now, as I watch the AI development landscape from my work in property tech, I'm increasingly convinced we're in the middle of one of those transitional periods. We're currently in what I'd call the \"orchestration era\" of AI development, and the interesting thing is that orchestration itself may be transitional.",[10,14,15],{},[16,17],"img",{"alt":18,"src":19},"Illustration of external scaffolding being absorbed into an AI model as its capabilities grow","/blog/orchestration-era.png",[21,22,24],"h2",{"id":23},"what-were-building-right-now","What We're Building Right Now",[10,26,27],{},"Look at what's happening across the AI development ecosystem today. Teams everywhere are building planner agents that decompose complex tasks into manageable steps. We're creating reviewer loops that verify outputs before they're considered final. We're implementing memory systems that help models maintain context across interactions. We're designing decomposition workflows that break down ambitious goals into executable chunks. We're constructing execution graphs that map out how different AI components should interact. We're developing prompt hierarchies that guide models through increasingly complex reasoning chains. 
And we're building verification chains that catch errors before they propagate through our systems.",[10,29,30],{},"This isn't random activity or hype-driven development. All of these orchestration patterns exist for a very specific reason: we are manually externalizing cognitive structure that the models do not yet reliably internalize. Every one of these systems is a workaround, a compensation mechanism for fundamental gaps in current AI capabilities. We're not building orchestration because it's elegant or because it's the ideal architecture. We're building it because it's necessary given where the models are right now.",[21,32,34],{"id":33},"the-gaps-were-compensating-for","The Gaps We're Compensating For",[10,36,37],{},"The orchestration patterns we see everywhere aren't arbitrary. They map directly to specific weaknesses in current language models. We build complex orchestration systems because we're compensating for inconsistent reasoning, where the same prompt might produce wildly different quality outputs depending on subtle variations in phrasing or context. We're working around weak long-horizon planning, where models struggle to maintain coherent strategy across multi-step processes. We're addressing unreliable execution, where models might understand what needs to happen but fail to follow through consistently. We're fighting context degradation, where important information gets lost or deprioritized as conversations grow longer. And we're patching over the lack of persistent state, building external memory systems because models don't naturally maintain continuity between sessions.",[10,39,40],{},"Here's what makes this particularly interesting from a builder's perspective: the models are already trying to do these things internally. 
If you watch how modern language models work, you'll see them attempting to think through problems, drafting solutions, reflecting on their outputs, revising their approaches, retrying when they sense something isn't right, and even trying to self-correct when they catch their own errors. The cognitive machinery is emerging inside the models themselves. The orchestration we build externally is really just our attempt to make these internal processes more reliable and consistent.",[21,42,44],{"id":43},"why-everything-feels-repetitive","Why Everything Feels Repetitive",[10,46,47],{},"If you've been paying attention to the AI tooling ecosystem lately, you've probably noticed something: it feels repetitive. New frameworks get announced every week, and they're all solving remarkably similar problems in remarkably similar ways. This isn't because developers lack creativity or because the space is saturated. The ecosystem feels repetitive right now because everyone is reinventing the same planner/reviewer/executor patterns. We're all compensating for the same underlying cognitive gaps using the only tools we have available.",[10,49,50],{},"The orchestration era emerged because models became just capable enough that external cognitive scaffolding started producing meaningful gains. Models crossed a threshold where they could follow complex instructions but couldn't reliably generate those instructions themselves. They became good enough to execute plans but not quite good enough to plan autonomously. This created a sweet spot where human-designed orchestration could dramatically improve outcomes. That's where we are right now, and that's why the patterns look so similar across different teams and products.",[21,52,54],{"id":53},"the-uncomfortable-question","The Uncomfortable Question",[10,56,57],{},"But the deeper question is what happens as the models improve. 
Every major capability jump we've seen has reduced the need for explicit orchestration in predictable ways. Better reasoning reduces prompt complexity because models need less hand-holding to understand what you want. Larger context windows reduce memory fragmentation because models can hold more information natively without external memory systems. Better tool usage reduces routing logic because models can figure out which tools to use without elaborate decision trees. Better self-correction reduces reviewer loops because models catch more of their own errors. Stronger planning reduces agent decomposition because models can handle more complex tasks end-to-end.",[10,59,60],{},"This raises an uncomfortable possibility for those of us building in this space: a large percentage of today's agentic workflows may ultimately be temporary cognitive prosthetics. I want to be clear about what I mean here. The orchestration we're building is important. It's useful. It's even necessary right now to get production-quality results from AI systems. But that doesn't mean it's permanent. The most critical systems we build aren't always the ones with the longest lifespan. Sometimes the most important work is building the bridge that lets us get to the next stage, even if that bridge eventually gets torn down.",[21,62,64],{"id":63},"the-next-six-to-twelve-months","The Next Six to Twelve Months",[10,66,67],{},"Over the next six to twelve months, I suspect we'll continue seeing more orchestration experimentation because it's one of the few layers developers still fully control. We can't improve the base models ourselves (unless we're one of the handful of companies with the resources to train foundation models), but we can absolutely build better orchestration. We can experiment with different agent architectures, test new prompt patterns, optimize our reviewer loops, and refine our execution graphs. 
This gives us agency in a landscape where so much is determined by model providers.",[10,69,70],{},"At the same time, the models themselves will continue absorbing more of the cognitive structure we currently build externally. Each new model release tends to internalize capabilities that previously required external orchestration. GPT-4 handled tasks that, with GPT-3.5, required elaborate multi-agent systems. The pattern repeats with each generation. The models are eating the orchestration layer from the bottom up, incorporating what were once external patterns into their internal processing.",[21,72,74],{"id":73},"where-the-center-of-gravity-shifts","Where the Center of Gravity Shifts",[10,76,77],{},"And if that happens—when that happens—the center of gravity shifts dramatically. The competitive advantage stops being about prompts, agent graphs, orchestration tricks, or elaborate ten-agent workflows. Those become table stakes, or worse, unnecessary complexity. Instead, the competitive advantage becomes about context quality: how well you can provide relevant, accurate information to the model. It becomes about organizational memory: how effectively you capture and surface institutional knowledge. It becomes about execution environments: the tools and systems the model can interact with to get things done. It becomes about evaluation systems: how you measure whether the AI is actually doing what you need. It becomes about verification pipelines: how you catch errors and ensure reliability. And it becomes about proprietary knowledge integration: how you connect your unique organizational context to increasingly powerful general intelligence.",[10,79,80],{},"In other words, the future may not belong to whoever builds the most elaborate orchestration layer. It may belong to whoever builds systems that most effectively align organizational context with increasingly autonomous cognition. 
The value shifts from the scaffolding to the foundation, from the wrapper to the context, from the orchestration to the integration.",[21,82,84],{"id":83},"building-for-the-transition","Building for the Transition",[10,86,87],{},"As someone who's spent decades building software systems that need to evolve and adapt, this transition feels familiar. The key isn't to avoid building orchestration right now—we need it to ship working products today. The key is to build with the awareness that the orchestration may be temporary. That means investing in the parts that will matter long-term: your data infrastructure, your evaluation capabilities, your domain expertise capture, your execution environments. Build the orchestration you need today, but build it in a way that doesn't become a trap when the models improve.",[10,89,90],{},"The orchestration era may not be the destination. It may be the bridge. And if you're building in this space right now, the question isn't whether to build that bridge—you should—but rather what you're building it toward. Are you optimizing for today's constraints, or are you positioning yourself for a world where those constraints matter less? Are you building systems that get more valuable as orchestration becomes less necessary, or are you betting everything on orchestration staying complex forever?",[10,92,93],{},"These are the questions I'm thinking about as I build. 
The answers will determine which projects from this era become lasting infrastructure and which become interesting footnotes in the history of AI development.",{"title":95,"searchDepth":96,"depth":96,"links":97},"",2,[98,99,100,101,102,103,104],{"id":23,"depth":96,"text":24},{"id":33,"depth":96,"text":34},{"id":43,"depth":96,"text":44},{"id":53,"depth":96,"text":54},{"id":63,"depth":96,"text":64},{"id":73,"depth":96,"text":74},{"id":83,"depth":96,"text":84},"2026-05-08","Planner agents, reviewer loops, memory systems, decomposition workflows — every team is reinventing the same patterns because models can't yet internalize them. But each model release absorbs more of that scaffolding, and the long-term advantage shifts from orchestration to context, evaluation, and integration.","md",{},true,"/blog/orchestration-era",{"title":5,"description":106},"blog/orchestration-era","LBajHFNXIM4UJtDh2O1r6K4H1M30qMIkKiN5B6KyXYg",1778252406163]