Strategic Framework for Technological Decisions
For those of us leading technology teams, it is not enough to understand the philosophies of boring technology or rapid iteration merely in theory. We know that everyday reality, especially now with AI, demands something much more tangible than abstract concepts. Can anyone scale an operation based solely on good intentions? We need a practical framework, policies that we can actually apply to build an engineering culture that works and to adopt new technologies in a systematic way.
Effective Engineering Culture
We believe that culture is the foundation of any strategy meant to last. As leaders, our role is to promote stability and speed, but this starts with redefining what we call success. Instead of only applauding those who bring in the newest tool on the market, we must value the teams that consistently deliver business value. We need to reward those who maintain stable systems and who simplify code, choosing "boring" solutions that, ultimately, get us to our goal faster.
Communication is key here. We have to explain the reasoning behind every choice, connecting the use of less "trendy" technologies to direct benefits: fewer middle-of-the-night on-call pages, a smoother workflow for the team, and more free time for us to focus on what truly brings innovation to the product.
Psychological Foundation: Productive Ignorance
For any of our strategies to stand on their own, we need a strong psychological foundation, something Kent Beck calls "Productive Ignorance." We seek to cultivate an environment where engineers feel safe admitting they don't know everything. After all, isn't it much better to value focused learning over that endless rush after every new thing that pops up?
We put productive ignorance into practice in a few ways:
- Celebrating Mastery: We recognize those who master the technologies we already use and show how valuable that knowledge is.
- Normalizing Uncertainty: We encourage questions and, as leaders, we also admit when we don't have all the answers.
- Focusing on Targeted Learning: We invest time so teams can dive deep into what is strategically important, avoiding getting lost in aimless explorations.
Without this mindset, we run the risk of seeing any process as a bureaucratic burden, when it should be a tool for our success.
A Governance Process for New Technologies
We want to prevent our technology stack from growing in a disorganized way, so we use a formal process to evaluate what is brought in-house. This filter isn't meant to slow us down, but to ensure we are making smart choices.
The steps we follow are:
- Clear Problem Definition: Before anything else, we require a written justification: why doesn't what we have today solve the problem? If the idea is to use something new just for the sake of it, we simply don't move forward.
- Low-Risk Implementation: If the proposal passes, we test it in a non-critical area. We want the team to gain real experience and prove the tool's value before we even consider rolling it out further.
- Replacement Commitment: If the new tool does the same job as an old one, we demand a plan to migrate and deprecate the previous one. We cannot allow complexity to increase just because someone found a "better" solution for a specific corner of the system.
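The three gates above can be sketched as a simple checklist. This is only an illustrative model, not our actual tooling: the field names (`problem_statement`, `pilot_area`, `replaces`, and so on) are hypothetical, and in practice these gates live in written proposals and review meetings, not code.

```python
# Hypothetical sketch of the three-gate governance checklist.
# All names here are illustrative, not a real internal API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TechProposal:
    problem_statement: str      # why the current stack falls short
    current_stack_gap: bool     # True if existing tech genuinely cannot solve it
    pilot_area: Optional[str]   # non-critical area chosen for the first rollout
    replaces: Optional[str]     # existing tool this would supersede, if any
    migration_plan: bool        # a deprecation plan exists for the replaced tool

def passes_gates(p: TechProposal) -> bool:
    """Apply the three gates in order; any failure stops the proposal."""
    # Gate 1: clear problem definition, in writing
    if not (p.problem_statement and p.current_stack_gap):
        return False
    # Gate 2: low-risk implementation in a non-critical area
    if p.pilot_area is None:
        return False
    # Gate 3: replacement commitment if it duplicates an existing tool
    if p.replaces is not None and not p.migration_plan:
        return False
    return True
```

Note how the gates short-circuit: a proposal with no written justification never gets to the pilot question, which mirrors the order we evaluate things in.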
Guidelines for the AI Era
With AI in the game, we needed to refine our governance even further. We follow a few rules, proposed by Brethorst, that act as an essential quality control:

- Reviewability: Before adopting anything, we ask ourselves: can we truly review the code that AI will generate for this technology? If the answer is no, we don't use it for anything critical.
- Deep Understanding: If we decide to spend an "innovation token," we commit to understanding the technology deeply. We know that copying and pasting what AI suggests without understanding it is a dangerous path, right?
- Avoiding Simultaneous Adoption: We don't let AI become an excuse to bring in multiple novelties at the same time. We know that mixing new languages, frameworks, and infrastructures all at once multiplies the risks and removes our ability to verify if everything is correct.
Technology Adoption Decision Matrix
To help us visualize and communicate these decisions, we use a very straightforward matrix. It maps our choices across two dimensions: how strategically important it is to the business and how mature the technology is.
| | New Technology | "Boring" Technology |
|---|---|---|
| High Strategic Importance | Strategic Bet (the Innovation Token): We invest our "innovation token" here. It is for when we see that a technology can give us a real competitive advantage. But keep in mind: this requires us to research heavily and manage the involved risks well. | The Technological Core: This is our foundational tech stack, the set we master. We focus on optimizing this for our business, ensuring we have agility and confidence. Here, AI steps in as a powerful right hand. |
| Low Strategic Importance | The Danger Zone (High Risk, Low Reward): We steer clear of this. Adopting new technology in something of no strategic importance means creating operational debt without gaining anything in return. Who wants to take that risk for nothing? | The Utility Belt (the Smart Choice for Non-Critical Solutions): For things that are not critical, we take the simplest path. We prioritize what we have already tested and what works fast, keeping complexity down to a minimum. |
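The matrix can also be read as a tiny lookup. This sketch is purely illustrative: the two booleans stand for judgment calls the team makes in discussion, not anything a script can compute, and the function name is hypothetical.

```python
# Hypothetical sketch of the 2x2 adoption matrix as a lookup.
# "strategic" and "boring" are human judgment calls, not computed values.
def adoption_quadrant(strategic: bool, boring: bool) -> str:
    """Map a proposal onto the decision matrix's four quadrants."""
    if strategic and not boring:
        return "Strategic Bet: spend an innovation token, manage the risk"
    if strategic and boring:
        return "Technological Core: master and optimize our foundation"
    if not strategic and not boring:
        return "Danger Zone: avoid; operational debt with no upside"
    return "Utility Belt: take the simplest proven path"
```

Collapsing the discussion to two questions is the point: if a proposal cannot be placed on these two axes, it is not ready to be evaluated at all.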
This matrix helps us ask the questions that matter: "Is this really a differentiator for us?" and "How big is the risk?" This way, we turn a philosophy into a repeatable and defensible process, aligning our decisions with the company's goals.
We believe this framework creates the foundation for much more conscious decisions. In the next article, we will see how all this comes together in what we call Deliberate Constraint, a practical model to balance innovation and stability in our day-to-day development.