Handling unreasonable expectations
- Sergio Visinoni from Sudo Make Me a CTO <makemeacto@substack.com>
A lot of unreasonable expectations are being thrown around these days. In this article I offer concrete tips and examples that will help you manage them. Results are not guaranteed. At least I don't lie.

Hello everyone,

A common theme I’ve heard from people I work with, or simply exchange with, is that they either have, or have to manage, unreasonable expectations regarding the impact of AI on the productivity of digital companies. In today’s article I’d like to share a few simple tips for handling such situations.

But before we go there, there is an important announcement!

This article is sponsored by Rushomon!

Wait, what? Isn’t that the thing I built myself? Well, yes. So, technically this is not a sponsor. Or rather, not yet. But it might become one if more people start using it. There has been some interesting traction lately, including:
On the enthusiasm that followed these events, I’ve opened up the first paid tier, which essentially gives higher limits and fancier features. More importantly, I’ve decided to try a Product Hunt launch. It’s the absolute first time I’ve done one. I probably screwed up a few things, but if you’d be so nice as to show some love, I’d be forever grateful! But please, only do so if you genuinely believe Rushomon is a cool product. Just go here and share your appreciation.

Now, back to the topic of managing unreasonable expectations.

The frustration of a CEO

I recently had the pleasure of reconnecting with a former client, a CEO, who reached out a few weeks back, as they intend to re-engage with my services. Besides making me very happy that they would be a returning client in a business model with close to zero vendor lock-in¹, I found our recent conversation extremely interesting and revealing.

You can, of course, imagine that if a CEO reaches out asking for my help, it’s a sign that they might be struggling with something and need support. They usually do not reach out when everything is going great. This case was no exception, and the CEO shared a few issues they’d like me to help them solve. Most of them were common ones, but one stood out in particular: they said they were frustrated because they weren’t getting the productivity improvements they’d expected from the adoption of AI.

When something doesn’t meet expectations, you should always start by understanding where those expectations come from. That’s what I did, using my usual dose of “What do you mean?” and “Can you provide explicit examples?” types of surprisingly effective questions. It turned out to be anecdotal evidence of the simplest form: they were comparing their established team of dozens of engineers, organised across multiple teams and building and maintaining a mature product with a reasonable amount of legacy, with a greenfield startup of two software engineers.
Their frustration was literally: “I am approached by startups of two engineers who can build in two days what it’ll take us two months to achieve with a much bigger team²”. Raise your hands if this sounds all too familiar 🙋.

To help them reassess their expectations, I used three categories of arguments.

Look at things in their context

There is a cult scene in a popular Italian movie from the late nineties that has become a surprisingly effective tool for describing what is going on in the tech space. The movie is called Tre uomini e una gamba (Three Men and a Leg), and this is the epic moment:

You can turn on automated captions to get a sense of what’s going on there. In essence, Giovanni, the guy on the left, is comparing the work of a conceptual artist with what a carpenter could produce for pennies. Even though the two objects, an artistic reproduction of a leg and a functional one, could look very similar from the outside, they’re profoundly different in their essence. Comparing the two is the woodworking version of comparing apples and oranges.

The same goes for companies. Comparing an established company with hundreds of employees to a newborn greenfield startup with a handful of people can be extremely misleading. Especially if, as in this case, you wish to attribute the difference in results to a single cause, such as a perceived sub-optimal adoption of AI tools.

I’ve heard the same narrow comparisons made across different generations of silver bullets: tech stacks, mobile apps, cloud computing, and now AI. The problem with these biased and narrow observations is that they tend to distract the observer from developing a holistic and more nuanced understanding of the situation. Saying “we’re not good enough with technology X” is much simpler (and more simplistic) to understand and communicate than, say, spending months analysing different variables, running controlled experiments, and then using the results to determine what to do next.
That doesn’t fit well on a single slide.

The lesson learned from decades of technical evolution, which we seem to have forgotten with the advent of generative AI, is that you can’t extrapolate laws or rules from anecdotal evidence. You can’t, and should not, generalise the experience of solo builders or small teams. That’s difficult, because the public discourse seems to be dominated by those voices, but it’s not impossible. It’s a matter of investing time to learn more about the subject beyond what’s served by algorithmically curated feeds or fellow tech bros who’re constantly trying to impress you or potential investors.

Use real data to counter predictions and magical thinking

Timing was on my side, as that conversation happened just days after I read an interesting study from the folks over at GetDX, which I mentioned in a recent article. The study reported a 10% increase in PR throughput across all their clients between 2024 and 2026. That’s remarkable, but it is far from the 10x or even 2x improvements that many are proclaiming.

That study isn’t by any means final, and should not in and of itself discourage anyone from experimenting and trying to get better. But if I had to set a reference point for expected improvements in my team, I’d rather use the most recent results from such independent studies, however incomplete they might be, than the self-reported anecdotal experience of someone I met at a conference.

I’ve encountered a staggering amount of magical thinking in the tech narrative in the past few years, to a level I’d previously only seen in the crypto niche. Except that now it’s everywhere. We tend to conflate it with being visionary, but in fact it’s more akin to religion than science.
It ranges from the most innocent assumptions about the speed of development, i.e., the frequently mentioned “if the technology keeps evolving exponentially as it’s doing”, which tends to ignore the laws of physics and all prior evidence³, to the mischievous and irresponsible claims that “AI will eventually reach a level of perfection that will solve humanity’s biggest problems”, such as the climate crisis, cancer, or war, ignoring that as of today it’s only contributing to making all those problems worse⁴. Or, my favourites, all the claims built around the idea that “the technology will only get better from here”, which forget that technology products, especially when they become monopolies, tend to get worse, not better, for the end users, regardless of the improvements in the underlying technology itself⁵.

In fact, whenever someone suggests you have to believe in something, turn the critical-thinking knob up to 11⁶, as that’s a common sign of irrational thinking. The most important message here is that when facing bold claims and marketing pitches, you should look for data and evidence rather than accepting them at face value. And no, do not ask a chatbot to find the data for you. In particular, you should start by putting in the thought and defining what success means for you and your company.

Focus on the business, not the technology

There is something I’ve been repeating for months, and I won’t stop until I see the evidence, i.e., the money. For decades we’ve been telling CTOs and other tech leaders that they should get out of their technology bubble and think about the business first. That technical decisions only matter when they’re clearly in the service of their specific business. That they are business leaders who happen to use technology to help develop it. I personally agree with this view, as technology should always be a clear means to an end. Otherwise, it can turn into an expensive pastime⁷.
Yet, for the past three years and counting, I’ve seen countless CEOs do exactly the opposite. The only thing they seem able to talk about is AI: how to become AI-native, and how, with AI, as if it were some magic wand only they have access to, they’ll beat the competition. If you really hate yourself, just spend some time looking at CEOs' posts on LinkedIn from the past couple of years to get a sample.

Surely, many of these brilliant minds think that AI will help them reduce costs and, therefore, improve margins. But they seem to have forgotten that the ultimate goal is to solve people’s and users' problems. That part seems to have completely disappeared from the dominant discussion around digital products. They’re all so excited about the new self-driving sledgehammer at their disposal that they’ve forgotten their users might just need to swap their wallpaper.

This moment is in fact a great opportunity for mindful leaders. If they take the time to really look at the available data, understand their users’ needs, and come up with meticulously defined success metrics for their company, they might find areas where AI can help and plenty of other areas where different solutions will produce a more effective and/or efficient result. Throwing money at a problem has never been a guarantee of success. Throwing tokens at it might not be either. And the burden of proof is still on the side of those who claim otherwise.

People Have Spoken!

I hope you find these three categories of tools helpful for managing what have become increasingly unreasonable expectations from a large portion of the industry’s leaders.

In the last article I shared a poll to help me decide on the name for the upcoming regular column for paid users. People have spoken, and though I’m surprised by how few people voted relative to the total number of views/reads of the article, I’ll go ahead and confirm that the column will be titled “Something Big is Happening”.
The April issue will be #1, because we’re nerds and we start counting at 0. And issue 0 came out a couple of weeks back.

I still have to decide whether the new column will come out on a fixed weekday/date or be less regular than the standard issues, but I’ll figure that out as I go. In the meantime, stay tuned for the upcoming #1 issue, which you won’t find anywhere else. To make sure you don’t miss it, you should seriously consider upgrading to paid right now. Just look at the CTA button below that you’ve been ignoring for too long! Thanks in advance!

1 In fact, it’s actually a negative vendor lock-in, as I usually help companies level up their skills so that they will not need support in the future and are instead able to continue making progress autonomously.

2 I’m slightly paraphrasing, but that’s as close to a direct quote as I can get without disclosing potentially confidential information. I love my clients. I empathise with their struggles. This is not about putting them in the spotlight, but about showing how these are common struggles many of us are facing.

3 Yes, I’m looking at you, Moore’s Law.

4 Big claims, I know. On the climate side, just look at the trends of carbon emissions for all the hyperscalers versus their original targets from the early 2020s. For cancer, the current buildup of data centers driven by mindless AI investments is causing an increased number of health issues, since many of them are gas-powered. For war, just look back at the shitshow between the DoD and Anthropic/OpenAI, or at how Flock/Palantir are actively enabling ICE to sustain what could be considered a civil war in disguise.

5 The Enshittification book by Cory Doctorow should be mandatory reading for all CEOs. But since I don’t like the idea of things being mandatory, I’m OK with strongly recommending it. If you want examples of products that have become significantly worse, you just have to look at what you use daily: Google, Facebook, Amazon, Apple, ChatGPT itself, etc.

6 In case you’re wondering why 11, there is a beautiful piece of pop culture you might be missing.

7 This is true in the business context. In academia, research is important regardless of its immediate (or even future) monetary returns. Again, let’s not generalise too eagerly.