Software engineers have always been forced to commit the category error of shipping patterns as components. It was our only choice. But AI coding agents can change this.
Imagine you are building an internal component library. A rigid API works fine for simple components like buttons that can only be used in a handful of valid ways. But on the other end of the spectrum, you have more complex components, like a drawer. As a content container, a drawer can be used in an almost infinite number of valid ways. You, or your UX designer, might have a template in mind for what content should be allowed, but this cannot be captured by a deterministic rule. You might know whether some content fits your organization's design and branding guidelines when you see it, but you can't possibly anticipate and enumerate all the valid possibilities preemptively. Encoding them in a rigid API is futile. Your API will never satisfy the long tail of user needs.
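To make the contrast concrete, here is a hedged sketch in TypeScript (all names hypothetical): a button's valid uses fit comfortably inside a fixed props interface, while a rigid drawer API has to keep sprouting options to chase the long tail.

```typescript
// Component-shaped: a handful of props cover nearly every valid button.
interface ButtonProps {
  label: string;
  variant?: "primary" | "secondary" | "danger";
  disabled?: boolean;
}

// Pattern-shaped: a rigid drawer API must enumerate content layouts up
// front, so every new use case pressures it to grow another option.
interface DrawerProps {
  title?: string;
  headerStyle?: "sticky" | "plain" | "hidden";
  footer?: "actions" | "none" | "custom"; // "custom" is already an escape hatch
  showCloseButton?: boolean;
  // ...the long tail of valid layouts never fits in a finite union.
}

const save: ButtonProps = { label: "Save", variant: "primary" };
```

The button interface is stable because the space of valid buttons is small; the drawer interface is unstable because the space of valid drawers isn't.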
The core problem is that the definition of what makes a good drawer layout is not component-shaped, it is pattern-shaped. The distinction matters. A component is a fixed artifact. A pattern is the thing that produces artifacts: each one slightly different, each one fitted to what the consumer needs. Unlike a component, a pattern naturally supports a great variety of solutions contained within a narrow correct range.
You cannot install a pattern as an npm package, so we've always been stuck shipping patterns as components. However, increasingly, AI coding agents promise to change this. Instead of having to pull in a package and bend it to our will through an API, we might be able to pull in most of the needed code plus a markdown file that tells an AI agent how to do the rest. I think in the next few years we will see a shift from purely composing components (configurable modules, packages) to also composing patterns (generative templates that AI agents can complete).
When most engineers hear "pattern", they think of something like Factory, Singleton, or Observer. But the Gang of Four did not invent the design pattern; they were inspired by a concept coined by the architect and urban designer Christopher Alexander in the 1970s. Alexander was frustrated with the component-based thinking of modern architecture, with its narrow focus on functionality and standardization. He wanted to think of buildings as being generated from bundles of timeless patterns: 180. Window Place, 159. Light on Two Sides of Every Room, 111. Half-hidden Garden, and so on. GoF patterns are textbook approaches to common problems, standard solutions. To Alexander, on the other hand, patterns are not just solutions but solution generators. A pattern is a mapping from a context and a common problem to a solution, but it leaves the exact form of the solution open. The same pattern, applied in different contexts, produces slightly different results, just like our drawer is different in each use case. Trying to design a single standard window component for all buildings ignores the local texture. But re-instantiating 180. Window Place in each individual building fits just right. You can't preempt exactly how you'll end up applying a pattern in each use case, but you can usually feel whether it's correct when you see it.
His favorite example was Paris. The city, in all its variety, is built from a small number of shared patterns: courtyards opening onto streets, buildings of a certain height, balconies at particular floors. These patterns were all applied independently, thousands of times, by different builders. The result is a city with extraordinary coherence and extraordinary variety at the same time. No two buildings are identical, but they all clearly belong to the same place. A great variety of designs in a narrow range. And crucially, you cannot get there with components. Components give you uniformity but can never generate the same variety you get with a pattern.
Really, Alexander didn't even invent the pattern language. It's core to God's source code for life itself. Humans share over 99.9% of their DNA with each other and about 98% with chimpanzees. That 0.1% is what makes you different from everyone else. Your eyes, your build, and your face aren't reconfigured components. Instead, they're the result of forking and combining the patterns your parents carried, generating a new artifact with an adjusted version of your mom's nose and your dad's ears. DNA is a solution generator, a pattern language. And all of humanity, in our great diversity, exists within that narrow 0.1% range.
Unfortunately, when we copied the idea of the design pattern from architecture to software engineering, we flattened it to fit our medium. We lost the generator piece. We had to. Human-written code could not quite support what Alexander envisioned for architecture. Code could not regenerate itself to fit the exact needs of every consumer; that would be unmaintainable. Instead, we ship components: black boxes with fixed APIs that consumers configure. But, and this is the core point of this essay, sufficiently advanced AI coding agents remove that restriction.
Let us return to the internal UI component library problem. Shadcn was an early bet on patterns. Its contrarian approach was to ship patterns directly by letting you copy its code, like good drawer examples, and modify it to your needs. Historically, this was seen as more of a cop-out than anything else, because it simply meant that the full maintenance burden would now fall on the consumer.
But I think Shadcn bet that AI would soon make the maintenance cost negligible. Software engineering has never just been about shipping code; it is about shipping knowledge. If you fork code you might not be able to deterministically pull patches from an npm package upstream, but an AI could still pull in new knowledge via an updated markdown file and then update your forked code to incorporate it. This moves library maintainers from shipping a static component, to shipping a static pattern, to shipping a primitive generative pattern.
Andrej Karpathy posted a perfect example of this last week. Talking about the NanoClaw setup, he noted:
"I also love their approach to configurability - it's not done via config files it's done via skills! For example,
/add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration."
Andrej Karpathy
Notice how this is just like Christopher Alexander's idea of a pattern. The skill markdown file and code template together form a mapping from a context and a problem to a solution, but leave the exact form of the solution open. The AI coding agent can then use that pattern to create exactly the right solution for the consumer. Just like each builder in Paris could use the pattern "courtyards opening onto streets" to create something that perfectly fits the building they are working on, the AI agent in your terminal can use the /add-telegram pattern to generate exactly the correct integration for you.
Any engineer worth their salt is already thinking about the maintenance burden as I write this. The suggestion feels almost viscerally wrong (like mixing HTML and JS before React!). Maybe the /add-telegram approach works as a one-off, but now you own this code. Previously it was outsourced; now it becomes a liability. How will you maintain it without an upstream package you can simply bump to pull in the latest knowledge?
On the other hand, is it truly unreasonable to bet that AI would eventually be able to handle this safely too? Imagine a simple 1000-line library that ships as a pattern, similar to the /add-telegram example: a code template paired with a skill.md file telling the agent how to adapt it. The AI instantiates the pattern by forking the template and writing code to exactly fit your use case. Over time, the upstream publishes new versions as an updated code template plus skill.md, and your AI installs them by repeating the process.
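One hypothetical shape for such a package, with every file name and skill instruction invented purely for illustration, might look like this:

```
drawer-pattern/
  template/
    drawer.tsx        # ~1000-line forkable reference implementation
  skill.md            # tells the agent how to instantiate and update

skill.md (excerpt):
  ## Instantiate
  - Copy template/drawer.tsx into the consumer's codebase.
  - Adapt the content slots to the consumer's design guidelines.
  ## Update to a new version
  - Diff the new template against the consumer's fork.
  - Re-apply the consumer's local adaptations on top of the new template.
```

The template carries the code; the skill carries the knowledge of how to fit and refit it, which is exactly the generator piece a static package lacks.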
For a thousand-line dependency, this is almost trivially easy, even today (we're already seeing it with /add-telegram). We've proven the base case. That means what remains is a scaling problem. It's a hard scaling problem, but hard scaling problems are exactly what AI has been solving, predictably, for a decade.
To a sufficiently advanced AI, the entire codebase of a dependency is more legible than its API is to a human. The future of tech is unpredictable, but one thing is almost constant: any moat built on the assumption that compute won't scale is bound to fail. I wouldn't bet against the scaling laws.
If this is true, it is really just a matter of time before we can safely ship our most complex libraries as patterns. Of course that doesn't mean everything will ship as patterns. Just because AI can cheaply maintain a forked button library doesn't mean it should. Buttons are component-shaped and they work just fine through a rigid API. They'll continue shipping as such. But your drawers, in their pattern-shaped complexity, won't.
We'll always ship components, but we'll also have a new tool in our toolbox: for those pattern-shaped use cases where the rigid API has always fallen short, where correctness is consumer-dependent and fuzzy, we will be able to ship our libraries as patterns instead. If this works, I think we can look forward to an era of beautifully designed, highly customized software. SaaS that actually addresses the long tail of user needs. Agencies that scale. That was Christopher Alexander's dream for architecture: a pattern-based world where every house is adapted to fit its inhabitants exactly. Historically this has been almost impossible to scale, whether you're building with atoms or bits.
But I think AI can change this for the bits.