
The Boring Wisdom: Deconstructing "Choose Boring Technology"

Dan McKinley's "Choose Boring Technology" philosophy is often read as a repudiation of progress or an appeal to Luddism, but that interpretation misses the point. In fact, it is a disciplined approach to managing risk, allocating resources, and sustaining long-term organizational performance. Its true value is not in rejecting the new, but in consciously choosing what is predictable and stable.

Defining "Boring": Predictability and Productive Ignorance

To understand this philosophy, the first step is to redefine "boring." In McKinley's vocabulary, "boring" does not mean "bad," "uninteresting," or "outdated"; it means "well understood." A boring technology is one whose capabilities and, more importantly, failure modes are known by the team and by the industry. Technologies like PostgreSQL, Python, PHP, and cron are "boring" not for lack of power, but because their behavior under stress, their limitations, and their operational quirks have been documented, discussed, and resolved over years or decades of production use.

This aligns directly with the psychological principle Kent Beck calls "Productive Ignorance." In a constantly changing environment, pretending to know everything is ineffective. Productive ignorance is the stance of accepting uncertainty and prioritizing focused learning. Choosing "boring" technologies is a strategic application of that perspective. The team acknowledges its lack of knowledge about the latest innovations and decides to operate in a space where its expertise is solid, thereby directing its learning energy toward the business problem at hand.

Predictability matters because software engineering is already a complex discipline, and the goal is to minimize surprises. New technologies compound that complexity by introducing "unknown unknowns": problems you cannot even anticipate, as opposed to "known unknowns" such as a network partition, which can be anticipated and planned for.

Innovative technologies are full of unknown unknowns: you don't know how they will behave at scale, what kinds of unexpected garbage-collection pauses they might cause, or what subtle security vulnerabilities they might harbor. In contrast, a technology considered "boring" has already turned most of its unknown unknowns into known unknowns through widespread use. The community has found the failures, and the solutions are easily accessible, often with a simple Stack Overflow search.
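
To make that contrast concrete, here is a minimal Python sketch of planning for a known unknown: a retry wrapper with exponential backoff around a flaky call. The db.fetch_orders query in the usage comment is hypothetical; the point is that once a failure mode is well understood, the defensive code can be written before the failure ever happens.

    import time

    def with_retry(call, attempts=3, base_delay=0.5, timeout=2.0):
        """Retry a flaky call with exponential backoff.

        A known unknown, such as a brief network partition, cannot be
        prevented, but it can be planned for in advance; that is what a
        well-understood stack makes possible.
        """
        for attempt in range(attempts):
            try:
                return call(timeout=timeout)
            except (ConnectionError, TimeoutError):
                if attempt == attempts - 1:
                    raise  # give up after the last attempt
                time.sleep(base_delay * 2 ** attempt)  # back off, then retry

    # Usage with a hypothetical query function:
    # orders = with_retry(lambda timeout: db.fetch_orders(timeout=timeout))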

This predictability drastically reduces the cognitive load on the engineering team, allowing them to focus on solving business problems instead of spending time debugging infrastructure.

The Economy of the "Innovation Token"

To contextualize adopting new technologies, McKinley introduces the concept of "innovation tokens." He suggests each company has a limited number of these tokens—perhaps three—that represent its capacity to do creative, risky, or challenging work. These tokens are a scarce strategic resource and should be "budgeted" with the same attention given to financial capital.

This metaphor turns technology choice from a purely technical decision into a strategic portfolio allocation decision. Spending an innovation token on something fundamental—like a database (for example, choosing MongoDB when the team already knows MySQL) or a new programming paradigm (adopting Node.js in a company that uses Ruby)—is a high-risk investment in an area that doesn't differentiate the business. A company doesn't gain a competitive advantage from having a "modern" database; its advantage comes from the product built on top of it. By spending an innovation token on infrastructure, the organization exhausts its capacity to innovate where it truly matters: in the product features customers perceive and value.

Thus, managing the technology portfolio, from this perspective, becomes an exercise in risk management. A technology leader's role is to allocate the company's risk budget effectively. Choosing boring technology for fundamental, non-differentiating systems is analogous to selecting stable, low-risk assets for the base of an investment portfolio. This approach stabilizes the overall portfolio and preserves the "risk capital" (the innovation tokens) to be invested in high-risk, high-reward bets that directly impact the company's market position.

The Real Cost of Technology: Cognitive and Operational Overhead

Choosing tools in isolation, often driven by the search for the "best tool for the job," can lead to local optimizations with significant systemic costs. McKinley emphasizes that the true cost of a technology goes well beyond initial development: it is the total cost of ownership, which includes operational and cognitive overhead.

This overhead appears as the need to create new monitoring strategies, write unit tests, configure startup scripts, and train new engineers, plus, crucially, the cognitive load of managing one more system. Every addition to the tech stack fragments the team's knowledge and attention. The Etsy case McKinley recounts in his essay illustrates the trap: hiring Python programmers resulted in a "useless intermediate layer" that took years to remove. A local optimization (leveraging the programmers' existing skills) produced massive global inefficiency and delayed the company's success.

Conversely, deliberate restraint can yield long-term benefits. Etsy's activity feeds, for example, were built on a "boring" stack (PHP, MySQL, Memcached, Gearman). Although the initial implementation was more complex than it would have been with a modern tool like Redis, this choice allowed the feature to scale 20x over several years without constant attention. The underlying stack's stability and familiarity ensured it ran reliably in the background, freeing engineers to focus on more pressing problems. The lesson is clear: the long-term operational costs of a technology often outweigh its short-term development convenience.
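
McKinley does not publish the feed implementation itself, so the Python sketch below only illustrates the general fan-out-on-write shape such a stack supports: a durable event log (the MySQL role), deferred fan-out work (the Gearman role), and per-user cached feeds (the Memcached role). The in-memory stand-ins and names are assumptions made purely for illustration.

    from collections import defaultdict, deque

    # In-memory stand-ins for the production pieces; in a real deployment
    # these roles would be played by MySQL (durable log), Gearman (job
    # queue), and Memcached (per-user feed cache).
    events_table = []               # durable log of activities
    fanout_queue = deque()          # deferred fan-out jobs
    feed_cache = {}                 # rendered feed per follower
    followers = defaultdict(set)    # actor -> set of followers

    def publish(actor, verb, obj):
        """Record an activity once, then defer fan-out to a background job."""
        event = {"actor": actor, "verb": verb, "object": obj}
        events_table.append(event)
        fanout_queue.append(event)

    def run_fanout_worker():
        """Background worker: prepend each event to every follower's feed."""
        while fanout_queue:
            event = fanout_queue.popleft()
            for user in followers[event["actor"]]:
                feed_cache.setdefault(user, []).insert(0, event)

    # Usage:
    followers["alice"].add("bob")
    publish("alice", "favorited", "a ceramic mug")
    run_fanout_worker()
    print(feed_cache["bob"])    # [{'actor': 'alice', 'verb': 'favorited', ...}]

Fan-out-on-write trades extra work at publish time for cheap reads, which is exactly the kind of well-understood trade-off a boring stack can carry for years without surprises.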

Understanding and applying the "boring technology" philosophy is a crucial first step toward building robust, scalable systems. This approach, however, often meets resistance under pressure for speed and innovation. In the next article, we will explore how to reconcile this apparent tension between stability and speed in software development.