"Boring Technology" Revisited in the Era of Artificial Intelligence
The boring technology philosophy was born well before the AI era, but I feel it has gained new relevance now that coding assistants are everywhere. Aaron Brethorst, in Choose Boring Technology, Revisited, argues that Dan McKinley's fundamental principles still apply, but that tools like Copilot and Claude have introduced a new danger into the equation. AI doesn't make us need stable technologies any less; on the contrary, it raises the cost of ignoring them.
AI Coding Assistants: The New Obstacle
The most significant change I've seen in the industry since McKinley's original essay is the flood of language-model-based assistants. They are remarkably good at generating professional-looking code for almost anything, from a complex Kubernetes microservice to a GraphQL federation. But therein lies the danger: we end up generating code for technologies we don't master.
The problem is that these models can "hallucinate" technical details. They deliver code that looks perfect on the surface but is riddled with subtle flaws underneath. Brethorst mentions having seen people accept AI suggestions that called APIs that no longer exist, opened security holes, or introduced performance issues that would only surface once the system was running in production. After all, how would we know something is wrong if the code follows every convention, looks professional, and "looks right"? Only someone who truly knows the technology can catch these errors.
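A small, hypothetical illustration of this "looks right" failure mode (my own snippet, not one of Brethorst's examples): passing `parseInt` directly to `map` reads like idiomatic JavaScript, yet it silently misparses most of the values.

```javascript
// Surface reading: "parse every string in the array as an integer".
// Reality: map calls parseInt(value, index), so each element's index is
// passed as the radix — a subtle flaw no formatter or quick review flags.
const wrong = ["10", "10", "10"].map(parseInt);
// wrong → [10, NaN, 2]: radix 0 means decimal, radix 1 is invalid, radix 2 is binary

// The audited version makes the radix explicit:
const right = ["10", "10", "10"].map((s) => parseInt(s, 10));
// right → [10, 10, 10]
```

The first version runs, passes review at a glance, and only misbehaves on real data — exactly the kind of error that requires someone who knows the language to catch.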
The "Unknowns Multiplier" Effect
When we combine a technology we don't know with AI-generated code, we create a scenario of exponential uncertainty. I have already found myself in situations where:
- I didn't know if that framework was actually the ideal one for the problem.
- I wasn't sure if the AI was following best practices.
- I couldn't tell which parts of the AI-generated code were boilerplate and which were essential business logic.
- And worse: I had no idea what kind of errors I should expect.
Brethorst compares this to "cargo-culting times 2,356," which clearly shows how the risk has been amplified. It is as if we were blindly trusting two systems we don't understand: the new technology and the AI itself. This code that "looks right" ends up masking the fact that we don't know what we are doing, and this turns future debugging into an endless nightmare.
AI: A "Force Multiplier"
On the other hand, when we use AI with a technology we master—that familiar and "boring" part of our stack—the situation changes. Then, the AI becomes a powerful "force multiplier." This is where I see "Productive Ignorance" truly working: we let the AI handle that repetitive and boring syntax, but we have the necessary knowledge to audit everything it delivers.
Brethorst himself mentions he trusts Claude to generate Rails code because he knows the framework inside and out. If the AI suggests something weird, he catches it right away. I do the same with Copilot for JavaScript; since I understand the nuances of the language, I can validate whether what was generated makes sense. In this scenario, the AI takes on the heavy lifting, and we focus on what matters: the architecture and the business logic. Our experience becomes the essential filter for us to gain productivity without putting ourselves at risk.
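A concrete example of the kind of nuance this auditing catches (a generic JavaScript pitfall, not a case from the article): a numeric sort that an assistant could plausibly emit without a comparator.

```javascript
// Array.prototype.sort without a comparator coerces elements to strings
// and sorts them lexicographically — even for arrays of numbers.
const scores = [5, 100, 25, 1];

const looksSorted = [...scores].sort();                   // [1, 100, 25, 5] — wrong
const actuallySorted = [...scores].sort((a, b) => a - b); // [1, 5, 25, 100]
```

Someone who knows the language spots the missing comparator instantly; someone who doesn't ships it and discovers the bug in production.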
This duality reveals a new reality in the AI era: having deep experience in a tech stack has become even more important. Before, if you tried to use something you didn't know, the code would come out visibly messy or wouldn't even work. Now, AI tools can generate pretty, well-formatted code that hides subtle flaws.
Because of this, the crucial skill is no longer just writing code, but the ability to audit it. It is being able to look at a block of code and say: "This looks correct, but it is wrong because of this and that." And we only gain that eye by training exhaustively on what is "boring."
I believe that deciding which technology to use in the AI era requires much more caution. In the next article, we will see how to evaluate these choices in a systematic way, thinking not only about what the technology does, but its impact on the security and maintenance of the system in the future.