"Boring Technology" Revisited in the Age of Artificial Intelligence

The philosophy of boring technology, articulated well before the AI era, takes on renewed and even sharper relevance with the rise of AI-powered coding assistants. Aaron Brethorst, in "Choose Boring Technology, Revisited", argues that while Dan McKinley's core principles remain valid, tools like Copilot and Claude add a new and dangerous dimension to the equation. AI does not negate the need for boring technology; on the contrary, it amplifies the risks of ignoring it.

AI Coding Assistants: A New Dimension of Risk

The most significant industry change since McKinley's original essay is the proliferation of coding assistants built on large language models (LLMs). These tools are extraordinarily good at generating plausible, professional-looking code for a wide range of technologies, including complex implementations like Kubernetes-based microservices or GraphQL federation. That very fluency introduces a new danger: generating code for a technology the engineer does not understand.

The problem is that LLMs can "hallucinate" technical details, producing code that looks correct at first glance but is subtly, fundamentally flawed. Brethorst reports watching engineers accept AI-generated code containing deprecated APIs, security antipatterns, or performance problems that would only surface under production load. What makes such code deceptive is that it "looked right": it followed naming conventions, included reasonable error handling, and appeared professional. Only someone with deep knowledge of the underlying technology would catch the flaws.
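
To make this failure mode concrete, here is a minimal TypeScript sketch of the kind of output Brethorst describes. The handler and table are hypothetical, invented for illustration rather than taken from his article, and the flaws are the ones he lists, marked in comments:

```typescript
// Plausible, professional-looking AI output: good names, error handling,
// tidy formatting. Two subtle flaws hide inside.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from standard PG* env vars

export async function getUserByEmail(email: string) {
  try {
    // Flaw 1 (security antipattern): interpolating user input into SQL
    // invites injection. The correct form is a parameterized query:
    //   pool.query("SELECT id, email FROM users WHERE email = $1", [email])
    const result = await pool.query(
      `SELECT id, email FROM users WHERE email = '${email}'`
    );
    return result.rows[0] ?? null;
  } catch (err) {
    // Flaw 2: swallowing the error and returning null makes a database
    // outage indistinguishable from "user not found" under production load.
    console.error("getUserByEmail failed", err);
    return null;
  }
}
```

Every line follows convention, which is exactly why the flaws survive a casual review; a reviewer fluent in SQL and node-postgres spots them immediately.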

"Unknowns Multiplier"

Combining an unfamiliar technology with AI-generated code creates a scenario of exponential uncertainty. The engineer is placed in a precarious position where:

  • They cannot determine whether the chosen framework is appropriate for the problem.
  • They are unsure whether the AI’s implementation follows best practices.
  • They cannot tell which parts of the AI's output are boilerplate and which encode essential business logic.
  • They do not know which failure modes they need to anticipate.

Brethorst compares this to "cargo-culting times two," underscoring the greatly amplified risk: the engineer places blind trust in two systems they do not understand, the new technology and the LLM. The misleading confidence of professional-looking AI code masks a deep lack of understanding, turning future debugging into a nightmare.

AI: "Force Multiplier"

Pairing an AI assistant with a technology the engineer has mastered, a "boring," familiar part of the stack, transforms AI from a risk into a powerful "force multiplier." This is where "Productive Ignorance" works best: the engineer lets the AI generate the exact syntax for repetitive code while relying on deep knowledge to audit the output.

Brethorst illustrates this from his own practice: he trusts Claude to generate Rails code because his familiarity with the framework lets him quickly spot problematic suggestions, and he uses Copilot for JavaScript because he understands the language's nuances well enough to verify what it produces. In this arrangement, AI takes on the heavy lifting of boilerplate, freeing the engineer to focus on architecture and business logic. The engineer's experience serves as the essential safeguard, letting them enjoy AI productivity while minimizing risk.
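
As a sketch of that division of labor, here is hypothetical assistant-generated boilerplate, shown in TypeScript purely for illustration, annotated with the kind of review notes a fluent engineer adds before accepting it:

```typescript
// A cursor-pagination helper of the sort an assistant drafts in seconds.
// The review comments are the engineer's, not the assistant's.
export interface Page<T> {
  items: T[];
  nextCursor: string | null;
}

export function paginate<T>(
  rows: T[],
  limit: number,
  cursorOf: (row: T) => string
): Page<T> {
  // Review note: this only works if the caller fetches limit + 1 rows;
  // an earlier draft assumed rows.length === limit and never set a cursor.
  const hasMore = rows.length > limit;
  const items = hasMore ? rows.slice(0, limit) : rows;
  return {
    items,
    nextCursor: hasMore ? cursorOf(items[items.length - 1]) : null,
  };
}
```

The assistant saves the typing; the engineer's familiarity with the pattern decides whether the edge cases are actually handled.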

This duality reveals a new reality in the AI era: deep experience in a technology stack has grown in importance. AI has changed the nature of risk in development. Before AI, an engineer working with an unfamiliar framework would likely produce code that was visibly messy or nonfunctional. Now, AI tools can generate syntactically correct, well-formatted code that nonetheless hides subtle faults.

Thus, the crucial competence shifts from merely writing code to auditing it, requiring deep domain knowledge. The ability to analyze an AI-generated code block and identify “This looks right, but it’s wrong because of X, Y, and Z” has become a high-value skill. That auditing ability can only be developed through hands-on, deep experience with a technology stack—the essence of mastering a “boring” technology.
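
A small, real example of that audit skill in JavaScript/TypeScript (the data is invented; the language behavior is not):

```typescript
// "Looks right, but it's wrong": Array.prototype.sort() without a
// comparator converts elements to strings and sorts lexicographically.
const latenciesMs = [90, 250, 7, 1200];

// Plausible AI suggestion: compiles, runs, returns a tidy-looking array.
const wrong = [...latenciesMs].sort(); // => [1200, 250, 7, 90]

// What an engineer who knows the language writes instead:
const right = [...latenciesMs].sort((a, b) => a - b); // => [7, 90, 250, 1200]
```

The first version raises no error and produces output that passes a glance; only someone who knows the language's sort semantics flags it. That is the "looks right, but it's wrong because of X" review in miniature.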

Technological decisions in an AI-dominated era require even more caution and deliberation. In the next article, we will explore how to evaluate and select technologies systematically, considering not only their technical capabilities but also their long-term impact on system maintainability and security.