Against Compromise, or, Deciding as a Team without Succumbing to Entropy

Well, that’s a far-fetched title, isn’t it? Let me explain. If you ask a person on the street, they’ll tend to say that compromise is the best way for people with opposing views to work together towards a better common future. You hear it in politics, you hear it in families, you hear it in workplaces. If you agree with this premise, let’s dive into why I think it may be one of the most wasteful, destructive ideas out there.

Let’s say two co-workers disagree about the temperature setting in the office. If one wants 22 °C (72 °F) and another wants 25 °C (77 °F), then compromise might indeed be the way forward. Set it somewhere in the middle, wear a bit more or a bit less clothing, and move on. But this can only go so far. If the difference is larger, or if one of the co-workers wants an outlandish temperature, then everyone may end up unhappy. What’s worse, people are incentivised to stake out extreme positions, in order to pull the eventual compromise towards their actual desired optimum.

These may be issues with compromise, but actually these kinds of straightforward disagreements are both uninteresting and less common than you may think. When we look at compromise in other situations, it gets much worse. Instead of the co-workers in the office, let’s think about two people, in a car, going at high speed on the highway. If one person wants to keep straight, and the other wants to turn right and take the upcoming exit, compromising means crashing into the barrier, causing destruction and possibly death, for the people in the car and others on the highway. Of course, neither of the people in the car wants this, and both would agree that either of the originally proposed options is better than crashing.

The right way to go would be answering the question “which option is likeliest to get us to our destination”, ranking the options in order of preference, and following the first, or the second, or one in the top ten. “Crashing into the barrier” doesn’t make the top quadrillion options. The wrong way to go is to answer the question “which angle should the steering wheel be at”, with each person proposing a number, and ending up in the middle. In other words, to average out the steering wheel is to answer the wrong question. It is to operate at the wrong level of abstraction.
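To make the contrast concrete, here is a toy sketch with entirely hypothetical numbers: each passenger scores the discrete options, and the group picks the option with the best combined score, instead of averaging the underlying control parameter.

```python
# Hypothetical preference scores per person, per discrete option.
options = {
    "keep straight":           {"driver": 0.9, "passenger": 0.2},
    "take the next exit":      {"driver": 0.3, "passenger": 0.8},
    "take the exit after that": {"driver": 0.7, "passenger": 0.7},
}

# The wrong question: average the steering-wheel angle.
angles = {"keep straight": 0.0, "take the next exit": 30.0}
averaged_angle = sum(angles.values()) / len(angles)  # 15 degrees: the barrier

# The right question: rank discrete options by combined preference.
best = max(options, key=lambda o: sum(options[o].values()))
```

Note that the winning option here is one neither person proposed as their first choice, which is exactly the point.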

Once the question becomes “which road do we take”, then things like “but the next exit leads us to a toll road and I don’t want to spend the money” and “but google maps says there’s a traffic jam ahead of this exit and we’ll spend an extra hour waiting to go through it” become relevant information, can be combined, and maybe the best option is to take the exit after the next, which gets them to their destination in a reasonable amount of time with no toll road. Notice that effectively both of the initial options were bad in some way, and the actual solution was something else entirely. If you're interested in diving deeper into decisions like this, Robert Aumann received a Nobel Memorial Prize for work that includes his Agreement Theorem, which this paragraph grossly oversimplifies.

But none of that will be solved by averaging out the angle of the steering wheel. Only when the “why” question is asked, and honestly answered, can the information be combined, follow-up questions asked, the options ranked, and a credible plan be chosen, ideally, but not necessarily, supported by all. As the barrier taught us, unanimity is less important than consistency, clarity, and reaching a decision in time.

You may think this kind of problem doesn’t come up in the real world. But ask any executive in a corporation “why is x being done that way” about some obviously failed or failing initiative they are putting out. If you’re lucky enough to get an honest answer, you’ll inevitably hear things like “well, department X needed to push their core technology, department Y wanted to satisfy their big customer, department Z insisted we adopt such and such standard, and this is what could be done within these constraints.” Of course, neither the customer, nor the core technology, nor the standard benefits from the result of the failed initiative. This is the equivalent of compromising on the angle of the steering wheel. The result is dumping billions of dollars into a big pile, and setting it on fire, using the goodwill of your employees, your partners, and your customers as fuel. And this isn't limited to new product launches. Whenever two departments in a company are moving in two opposite, mutually contradictory directions, the "plan" being implicitly executed is a contradictory plan.

When the question being asked is “is everybody ok with this plan” rather than “does the thing we’re doing make sense” or “how do we maximise our chances of success”, the only way forward is to make these Frankenstein decisions. I am not saying such products never succeed. What I’m saying is that any success that might come is in spite of, not because of, this kind of decision making process. There may be an uncompromised core (get it? uncompromised?) that the deadweight didn’t manage to drag down, so the combined result succeeds anyway, and hopefully future iterations reduce the deadweight, to the extent possible. Which it might not be, if the wider ecosystem is now dependent on such misfeatures.

As an aside, I actually believe this is why startups on occasion win against large corporations. A complex problem presents itself, and the startups are both more desperate to solve it, and less likely to make Frankenstein decisions, not because they are somehow immune to politics, but because they have fewer commitments and fewer people, and therefore fewer temptations to compromise their vision. As soon as the startups have something to lose, compromises start being made, and the cycle starts all over again. Either the engine, designed in the age of purity, keeps working; or the leadership is credible enough to push forward a clear vision for as long as possible; or the company falls back to maintaining its income stream through obvious, incremental, defensive moves, for as long as that can be sustained.

So what’s the answer? How should a team or company (never mind a country) be run? My personal answer is something I have not quite seen discussed or described very much, though I suspect it drives some of the largest companies without being recognised as a pattern. It is “transparent decision making with strong leadership”. Again, something that startups do naturally, and stop doing as they grow. The first part is all about packing as much information as possible into the decision. The second part is about making sure that the solution chosen has strong internal consistency. Keeping the decision-making process a black box means you risk missing relevant information, and therefore missing options you would have chosen had you been aware of them. Making decisions a matter of consensus in a “flat” organisation means crashing into the barrier. Unfortunately this middle road doesn’t seem to be very popular, as the authoritarians will opt for the black-box “respect mah authoritah” approach that validates their superiority, whereas the egalitarians will opt for making sure everyone has their say, and as few people as possible are unhappy about any decision.

See it as a compromise, if you like irony. Holding the middle is not easy, but if we focus on the actual results rather than people’s feelings, we will sacrifice both the decision-makers’ ego, and the contributors’ ego, in exchange for maximising the area considered, while maximising the clarity of the output. And a team that understands the rationale of decisions made, and can observe the results as well, is more likely to learn collectively, and more likely to make better decisions together in the future. What’s more, people’s feelings end up better off in the long run, as there is no better cure for all ills than success. And a team that succeeds because of decisions they all understand and contributed to, is a team that grows and stays together.

I did not realise this when we were designing our internal process, or when I started writing this blogpost, but our decision making and overall operating model might as well have been designed with this essay in mind. Since it fits so well as a follow-up, I will consider, but can’t promise, writing a next post, or a few, describing how we collect information and make decisions, since I do enjoy putting forward concrete plans in the place of vague ideas.

I hope you have enjoyed this essay and look forward to hearing your thoughts.

Feature Interference

While working on Etcher with Juanchi we came across a fantastic example of feature interference: two features that should be completely unrelated, somehow interfering with each other in a way that required extra work to make them coexist.

I find feature interference as a phenomenon particularly interesting, as it feels like a clue in solving the mystery that's called "what's wrong with software development?". It gets to the heart of a lot of misunderstandings between business and engineering, broadly speaking, as the complication completely blindsides the business side, creating aggravation, perhaps even suspicion.

Before we get deeper, let's dissect our little example to lay a better foundation of what we're talking about. Etcher has two features that in principle should not interact with each other:

  • Support for compressed images
  • Detecting if the drive is too small for the image

Each feature is fairly straightforward to build on its own, with no architectural changes needed. A project manager's dream where time goes in, feature comes out, users are happy, and everyone can go home and sleep well at night.

However, in combination, the two features clash. If we're opening a compressed file, then the size of that file does not reflect what we'll be writing to the drive. As such, we need a way to determine the uncompressed size of the image. Etcher, as it turns out, understands multiple compression formats. Some have a straightforward way of finding out the uncompressed size, while others (I'm looking at you, bz2) require dedicated work to yield their secrets. In fact, the naive approach takes something like 40 seconds and pegs a modern CPU at 100% to find that size. Unless of course we want to decompress the image and just check the size of the output, but then we'd need to handle the case where the user doesn't have that much space available on their disk... you get the point.
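For instance, gzip is one of the "straightforward" formats: per RFC 1952, the last four bytes of the stream hold the uncompressed size modulo 2^32. Etcher itself is written in JavaScript, but the trick is easiest to sketch in Python (a sketch, not Etcher's actual code, and it ignores multi-member archives and images of 4 GiB or more):

```python
import struct

def gzip_uncompressed_size(path):
    # RFC 1952: the final four bytes of a gzip stream are ISIZE,
    # the uncompressed length modulo 2**32, stored little-endian.
    # Caveat: wrong for multi-member files and data >= 4 GiB.
    with open(path, "rb") as f:
        f.seek(-4, 2)  # seek to 4 bytes before end of file
        return struct.unpack("<I", f.read(4))[0]
```

This is why gzip'd images are the easy case: one seek and one read, regardless of how large the image is.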

We've gone from having two straightforward features to implement, to making complex tradeoffs about whether to peg the user's CPU at 100% for a decent amount of time, or expect them to have several gigabytes free on their hard drive. No fun.

And this is just two features. A modern program has dozens of features, each potentially interacting with all the others, causing bugs, instability, slowness, maintenance difficulty, and so much more. Two features can form one pathological set, but 10 features can potentially form over a thousand pathological feature combinations. We call that combinatorial explosion, and it's never a good thing. Feature interference is a core reason why projects slow down as they grow. It doesn't make intuitive sense, but the computer doesn't much care for our intuitive sense anyway.
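The arithmetic behind "over a thousand": every subset of two or more features is a potential interference site, so n features yield 2^n - n - 1 such subsets. A quick sanity check:

```python
from math import comb

def interaction_sets(n_features):
    # Count every subset of 2 or more features, i.e. every
    # combination that could potentially interfere.
    # Equivalent closed form: 2**n_features - n_features - 1.
    return sum(comb(n_features, k) for k in range(2, n_features + 1))
```

Two features give exactly one pathological set; ten features give 1013, and the count keeps doubling from there.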

How do we solve it then? I don't actually think anyone has a great answer at this point, though I'm happy to be proven wrong. Pointing out and naming the problem is certainly a good first step, and I'm almost certain I'm not the first one to do that. As a pragmatic approach to managing the problem, all I can think of is to "build a wall". Try to limit the problem to a smaller area, to prevent it from infecting the rest of the program.

In this particular case, I would create a small library whose job would be to find the uncompressed size of a compressed file. That library could support any number of compression formats (and be extended at will without impacting anyone), and return good answers to anyone who asks. While it makes little difference in our toy example whether the code to find the uncompressed size lives in the part of the code that opens a file or in the part that decides whether the file can fit on the drive, in the long run it pays to create good interfaces that can be used by any other part of the code to solve the same problem.
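For formats like bz2 that carry no size field, such a library would have to do the "dedicated work": stream-decompress and count bytes without ever materialising the output. A Python sketch of that path (the function name is made up for illustration; Etcher is a JavaScript project):

```python
import bz2

def bz2_uncompressed_size(path, chunk_size=1 << 20):
    # bzip2 headers carry no uncompressed-size field, so the only
    # reliable answer comes from decompressing the whole stream --
    # but we can count bytes chunk by chunk instead of buffering them.
    total = 0
    decompressor = bz2.BZ2Decompressor()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(decompressor.decompress(chunk))
            if decompressor.eof:  # single-stream archive finished
                break
    return total
```

Streaming keeps memory flat at one chunk, but the CPU cost of full decompression remains, which is exactly the tradeoff described above.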

While we wait for AI to come and write our software for us, architecting things well is something we'll need to keep doing. This is the sort of topic I expect to be thinking about for a while, so I'll revisit it here if any new insights pop up.

Thinking out loud: Automation vs Complication

In this post I want to do some stream-of-consciousness brainstorming on a question that has been in the back of my mind for a long time.

To put it in a very coarse way that I hope to gradually improve in this essay, there seems to be a contradiction in the fundamental pitch for automation.

On the one hand, it's pitched as a way to improve efficiency exponentially. When you have a process or operation that is manual, has way too many steps, way too much opportunity for human error, takes way too much valuable human time, then the correct way forward is to make or buy a tool to automate that process. The human operator should only input the essential information for the task, and the tool should do everything else, including checking the inputs for sanity. This pitch is repeated with variations by every single automation vendor, and every single internal tooling project. What's more, on the face of it it seems to check out.

On the other hand, every tool added to one's stack adds to complexity, to the maintenance and integration burden, to the training of new team members, and to the opportunity for other kinds of error, where the abstractions are ill understood. Every tool also brings its own set of dependencies, each of which can become a single point of failure for your organisation. But the most painful cost of all is the loss of flexibility to modify the process. A mature tool will offer a certain amount of internal "wiggle room" through configuration options and context-sensitivity. No tool, however, can offer infinite flexibility; if one did, it would be able to replace every other tool, unless of course that very flexibility reopened the original set of problems around human error and inefficiency we were trying to resolve.

Organisations adopt automation to improve their efficiency, yet end up inefficient because the tools introduce a different kind of friction, some of it around using, integrating and maintaining the tools themselves, and a lot of it around not being able to operate in the optimal way for the organisation due to limitations introduced by the tools. At some point it starts to feel like automation is a masked form of the quote:

"All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections." - David Wheeler

This seemingly fundamental contradiction in the generic pitch for automation is what I want to explore, to understand whether it applies only to certain tools or kinds of tools. If it turns out to be general, perhaps it indicates that there is a "sweet spot" for automation. If, for instance, 20% of the tooling provides 80% of the efficiency gains, then perhaps that 20% is worth the maintenance overhead, but for the rest of the inefficiency, which may seem like fertile ground for automation, the standard advice to automate is actually wrong. The two conclusions (and any other ones that may come up along the way) may of course be combined into more complex strategies for any given organisation, but we'll see what happens when we get there.

Let's begin with a fairly simple example however. Let's imagine a completely informal business (where everything lives in the mind of the owner, no real records kept), for instance a small grocery store. Let's imagine our grocery store is transitioning to a paper-based system, where transactions are recorded. Conventional wisdom suggests that this transition will be an unmitigated positive step forward, once the transition pain is dealt with. Besides the potential efficiency and correctness improvements, it will allow the organisation to adopt a new level of scale. The transition from informal to paper-based formal allows more people to be involved in the business (at the cost of having to write things down for every transaction).

Added friction aside, it's important to understand the flexibility cost that the business incurs. When the transactions have to be recorded, for instance, the owner loses the ability to let certain people buy "on credit". Even if they manage to add some extra structure for amounts received or amounts promised into the system, they then have to clarify who in the business can make the decision to accept an "on credit" purchase. To do that, they may also have to determine rules for which customers are trusted with those transactions, and since the rules have to be defined somewhat generically, they will end up being wrong in some cases (or worse, gamed), which will make people unhappy. That unhappiness can of course be managed through some escalation process whereby the cashier calls up the owner to speak to a certain customer, which maybe leads to the rules being modified after a lot of thought, and so on. When the rules are modified, every cashier has to re-learn them, make sure the change is understood and applied correctly, etc.

It's important to understand here that none of these problems existed when the owner had everything in their head. They could make up or modify the process and the rules as they went, and perhaps an apprentice could pick up how this worked by watching the boss. The rules would evolve much more quickly and reach a better place faster, because changing them is as close to frictionless as it gets. This kind of calculus comes into effect whenever many tools are introduced, and to some degree explains why bureaucracies and large corporations can misfire spectacularly and make decisions no sane person would make. They are not a person; they don't have the ability to learn and adapt at a moment's notice. Outdated rules, tools, and policies remain in use for sometimes decades longer than they should, because other systems and people and data depend on them, and change is very expensive and hard to justify before a catastrophic failure.

There may be an interesting class of tool that we can exempt from the above effects: stateless tools. Let's take the humble calculator. It can be integrated into an informal process, or into a paper-based system, with seemingly little drawback, as it can be circumvented or used as needed. If we ignore the fact that the calculator presupposes knowledge of and compliance with basic arithmetic, it can be considered a simple, almost-free tool to adopt. The properties that make it so are that it is stateless and therefore does not lock information inside it, which in turn makes it easy to circumvent and also to adopt incrementally. It is conceptually well-generalised with well-understood use cases, and as a technology it has matured into combining low cost with high performance. It is also generative: it can be used for things its inventors never expected it to be used for, a characteristic shared with tools such as the telephone, the internet, and the personal computer.

The calculator stands as an example of a tool that imposes minimal constraints on its user, offering clear benefit with drawbacks that are hard to even argue for in context. It of course did not start this way: the first calculators were ridiculously expensive, occupied entire rooms, and were extremely hard to operate, never mind operate reliably. But we've reached the point where none of those things are true. The solar calculator doesn't even require a battery!

There is also another class of tools that fail time and time again. These are tools with hidden side-effects that are not obvious at the time the tool is proposed, and are only discovered while it is being adopted. They are tools whose abstractions ignore or silence barriers that the user should be aware of. We have, for instance, produced several platforms and standards suites that promise to shield the individual programmer from the burden of thinking about distributed systems as such, instead allowing them to be manipulated as if they were centralised. From the old RPC protocols, to CORBA, XML-RPC, SOAP, and the 57 Web Services protocols, we've seen this vision fail every time. The current generation of cloud computing seems to be accomplishing parts of it, though with edge cases, but physics seems to be reasserting itself when it comes to the internet of things. The repeated failure to create tools that cross certain abstraction barriers teaches us that there are configuration and management steps that appear trivial and automatable, but lead to failures that are as hard to foresee as they are inevitable.

So we've discussed three types of tools so far: some that are obvious wins, some that imply subtle tradeoffs, and some that are doomed to fail, and often take their users with them. While this essay hasn't made any huge steps forward, and certainly won't stand any serious scrutiny for completeness or rigour, it does give me the outline to think about automation not as an unmitigated good, but as a case for understanding the tool that's being introduced, weighing its impact, and making a conscious attempt to go for it or not.

The Software Development Poverty Trap

According to Wikipedia,

A poverty trap is "any self-reinforcing mechanism which causes poverty to persist." If it persists from generation to generation, the trap begins to reinforce itself if steps are not taken to break the cycle.

Think of a poor person who can't even afford to buy in bulk from the supermarket, and is therefore forced to buy things on an as-needed basis from the corner shop, more expensively. Perhaps the nearest ATM that does not charge for withdrawals is far enough away that they would need to take a bus to get to it. In terms of time, think of all the things you pay for to avoid wasting your time. A poor person does not have that option.

A key characteristic here is investment. A poor person has no available funds to invest in things that would pay back their investment several times over, even in the short run. Investing in anything but the absolutely necessary short-term needs is simply not an option.

You can see how this situation might perpetuate itself, since lack of investment on day one leads to lack of investment on day two, etc. This applies to people as well as families, communities and entire countries.

I think the self-perpetuating cycle of poverty applies to software teams as well. An overloaded team cannot afford to refactor, learn a new technology or tool, spend time architecting things correctly, review security practices, or abstract things into new libraries. The only thing the team can do is struggle to complete today's items, exhausting even their internal reserves of stamina. Without being able to pause, reflect, reconsider, and improve, the only way is down. This is not just about technical debt. It's also about the missed opportunities for shortcuts that require some upfront cost.

It also strikes me that "Agile" and "Scrum", as practiced in large companies today, are a fantastic way to fall into the software development poverty trap. Thinking of nothing else than the completion of the current goals as set by the business stakeholders means that every next "sprint" will make things worse and worse.

I've always been fond of what I call "meta-work", work that is at a higher level of abstraction and saves a lot of work on the object level. Considering the software development poverty trap, it becomes more obvious that at almost all times, a significant part of a team's energy should be getting reinvested in improving the team's productivity. A team that does this will not only have a better experience at work, but will deliver more, deliver better, and have the results be more useful over time. What may seem like a waste on the micro level is the difference between success and failure on the macro level.

If you're managing a software team, consider postponing some work, or taking on less work, or even, if possible, hiring people with a mandate to deliver improvements to the team itself rather than to the customer. Sometimes the improvements are so drastic that they pay off within a single period. You may just need to encourage your team to think in that direction. A team that invests in its own productivity will deliver more software, faster.

Crucially, that team will also keep its best performers on board, rather than losing them to a better environment. And a team that cannot keep its best people (and attract more like them) is in the ultimate poverty trap, doomed to deliver only the poorest quality software, when they in fact do deliver something.

If you run a team or work in a team, I cannot encourage you enough to stop and think about how you can improve not only your customers' productivity, but your own. All too often we preach the gospel of software and automation but fail to apply it to our own way of work.

Elon Musk's "stepping stone" approach to explosive innovation

When discussing successes like the ones Elon has been involved in, it's obvious you have to get more than one thing right. This article is not about his ridiculous ability to execute the most intricate plans, his incredible breadth of understanding of everything from finance to physics and engineering, or the unfathomable emotional stamina required to pull through the simultaneous near-meltdown of two companies. Observing Elon's course, what strikes me as a frequently overlooked feature is the perfect balance between mad ambition and austere pragmatism.

When Elon started out as an entrepreneur, he didn't attack the world's problems directly. He started out with Zip2, bringing print publications online. A great mission, but not a world-changer. He struggled, failed a lot, but got that company to a good outcome that earned him his first money. For his next venture, he used the cash he'd earned to jump-start an online bank. When things brought him head-on with the PayPal team, he opted to merge and take on the CEO job rather than compete the two companies into the ground. Through a lot of chaos, PayPal ended up in a good enough spot to be acquired for USD 1.5B, leaving Musk with about USD 180M. Another successful go-round, another level up.

What Elon is most known for is his next act: starting two blindingly ambitious companies (three if you count SolarCity), leading them simultaneously, and pulling them through the financial collapse of 2008, to an unprecedented combined success. SpaceX plans to build a sustainable human colony on Mars, making life multi-planetary, and Tesla plans to make electric cars the norm, making a dent in our carbon emissions. Up to this point, Elon works iteratively: choose a venture, succeed, cash out, reinvest in the next venture. Examining the paths of SpaceX and Tesla, however, Elon demonstrates a few new tricks.

This is my quick and dirty diagram of Elon's ventures to date:

What is striking is how similar the paths of Tesla and SpaceX are once the product names are stripped out. Both companies cut their teeth on a demonstrator model that was never viable in the long term (shown in purple in the diagram). For Tesla this was the Roadster; for SpaceX it was the Falcon 1. The path to making both of these successful was littered with failures, disappointment, and strife. In the case of Tesla, there was a fired CEO and a lawsuit, cost and time overruns, and general chaos. In the case of SpaceX, what more can be said other than "three exploded rockets"? Unbelievably, both came to a good result, with praise from almost every observer. When you take on the impossible, it turns out people can be quite forgiving of spectacular failures.

Once the initial project was successful, they both used the know-how earned to produce their first real moneymaker (light green). For Tesla, this was the Model S, for SpaceX it was Falcon 9. These products had enough legs to allow the companies to build real businesses.

Many CEOs would take this early success and iterate towards a very profitable business. Not Musk. He used this revenue stream to build further infrastructure (orange), making the products even more desirable. Tesla built its own dealerships, a supercharger network that spans the world, and the world's largest battery factory, the Gigafactory, which is supposed to be only the first of many. SpaceX didn't need as much in the way of consumer services given its target market, but it did focus on building the human-rated Dragon V2 capsule and its own spaceport in Texas, and brought production of many parts in-house, removing single points of failure and compounding its economies of scale.

In addition to digging in and widening their lead from any competition, both companies got into additional demand-generating businesses (yellow). Tesla started the Tesla Energy product line, providing batteries for home and commercial use, therefore increasing demand for the outputs of the Gigafactory. SpaceX has started working with Google on a global satellite internet play, which is said to require 4,000 satellites to complete. Needless to say, they will fill up quite a few SpaceX rockets.

The truly phenomenal piece of strategic genius is the one we're getting to. Both companies, in parallel to building their first moneymakers AND deepening their hold on their market, have been using their assets to develop a technology breakthrough that makes their products orders of magnitude better and renders any competition moot (pink). For SpaceX this has been reusable rockets, the holy grail of rocketry. Few readers of this article are not aware that SpaceX did what was considered impossible and landed a real rocket back on its launch site after deploying 11 satellites to orbit. The impact on the cost of launching something to space is hard to overstate. 95% of the cost of a launch is in discarding the rocket in the ocean. If a rocket can be re-used 10-20 times, getting to space has gotten 10 times cheaper, with maybe even 100x being within reach.

For Tesla, this has been the Autopilot feature. While Google has been working on a self-driving car for quite a few years, Tesla is now rapidly catching up by using its fleet of cars already sold as data collectors, and the behaviour of their drivers as training data. With expert use of over-the-air updates, Tesla can keep improving its algorithms in the wild, and at scale. It is now possible that long before Google can produce a car, Tesla will have mastered self-driving. The impact of self-driving technology is hard to overstate, and it is a massive multiplier for the value of a company like Tesla, which has come from nowhere to almost having the best sales experience (own dealerships), the best hardware (Model S/X), the best software (Autopilot), and the best supply line (Gigafactory) of any car company in the world. It is hard to imagine how any car company will be able to compete on all these fronts once Tesla's technology matures.

What's particularly interesting about these "secret weapon" technologies is that they have not been developed in some cut-off R&D department, at least not for long. As soon as practically possible, they have been brought into the main line of operations, and developed in tandem with the core product. It's hard to overstate how different this is from the traditional approach, and the impact is visible in the breakthroughs already made.
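Coming back to the rocket numbers, the reuse arithmetic is worth spelling out. If a fraction f of a launch's cost is hardware that gets thrown away, amortising that hardware over n flights leaves 1 - f + f/n of the original cost, with 1/(1 - f) as the ceiling on savings. A sketch using the rough 95% figure above (illustrative numbers, not real SpaceX cost data):

```python
def relative_launch_cost(hardware_fraction=0.95, reuses=1):
    # Cost of one launch as a fraction of a fully expendable launch,
    # ignoring refurbishment: non-recoverable costs (fuel, operations)
    # plus the hardware amortised over its number of flights.
    return (1 - hardware_fraction) + hardware_fraction / reuses
```

With 10 reuses this gives 0.145 (about 7x cheaper), with 20 reuses 0.0975 (about 10x), and even infinite reuse bottoms out at 20x under these assumptions, so reaching 100x would also require shrinking the fixed costs themselves.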

While all this is happening, almost as a footnote, both companies have also perfected the next generation of their moneymakers (darker green): Tesla produced the Model X, and SpaceX will debut the Falcon Heavy in 2016.

Both companies can now use their well-developed assets to produce their masterpieces (blue). For Tesla this is the Model 3, an electric car that will cost about USD 35k, changing what "car" means for people who can't spend USD 70k-140k on a car. Elon has even inadvertently hinted at a future where Tesla self-driving cars offer an Uber-like service. For SpaceX the masterpiece is the Mars Colonial Transporter, a rocket 10 meters in diameter, up from the Falcon 9's 3.7m. Its name may give you a hint of the ambition behind it.

The striking feature consistently present in all these ventures is that of incremental progress. Elon would not have been able to build PayPal without Zip2's success, and he could not have built SpaceX, Tesla, and SolarCity without PayPal. Moreover, SpaceX's and Tesla's missions have both been so ambitious that attacking them directly would be certain suicide. In both cases Elon identified incremental (though still massive) goals that were commercially viable while also being solid stepping stones to completing the ultimate mission. Once a stepping stone is in place, many new options open up, and both these companies have consistently picked the best ones, building further grounding for the next step, and so on. You may not plan to literally solve all the world's problems, but we can all learn a few lessons from Elon: aim high, but focus on the "how".

If we learn one thing from Elon's way of picking his next moves, it's that there is near-limitless opportunity for success when blinding ambition meets ruthless pragmatism.

Investor Pitching as an Optimization Problem

For founders coming from technical backgrounds, making a deck often feels like a frustrating and even somewhat dishonest process. “How can they know what I mean if I don't explain the details of the algorithmic choices we made?”, “Why do I have to create bite-size zingers rather than having them spend some time to be carried away by the depth of our breathtaking vision?”. It comes down to attention span, or, shall we call it, bandwidth.

As the founder of your company, you are the world's foremost expert in every facet of it. When making a deck, you need to understand you are subject to the curse of expertise. Counterintuitively, research shows that your ability to explain what you do to a novice is worse than that of someone with less knowledge. To beat this obstacle, step #1 is to recognise you are responsible not for what you say (which makes sense to you) but for what is understood on the other end.

Put yourself in the shoes of an investor. One who is actually smart, has a strong track record in your market, and decent amounts of money to invest. In other words, the kind of investor you want looking at your company. But this highly sought-after investor is, almost by definition, very busy. They are inundated with proposals on where to invest, and what's more, those pitches are often nonsensical, impractical, exaggerated, not ambitious enough, or a combination thereof. They may be seeing 20 decks a day and taking 3-4 meetings. This investor needs to focus their limited bandwidth on ideas that they are convinced are none of those things, and the default answer is “no”. So, depending on the context, you have anywhere from one sentence to 60 minutes to convince said investor to spend more time understanding your investment proposition. In a sense, before an investor invests their money, they have to be convinced to invest their time. And while they may have vast amounts of money to invest, their time is limited like everyone else's, and so, in a sense, even more valuable.

So you have this bandwidth envelope, within which your pitch must fit. Your goal is to use this opportunity to earn more time, while everything you say must remain consistent when they (hopefully) look deeper. The key here is to think of this as an optimisation problem. When you do, a number of things become apparent.

Extreme parsimony

Every idea, image, sentence, and word has a cost. If it's not earning its keep, it needs to go. If it might lead to a non-productive conversation, it needs to go. Even if it leads to a good but not great conversation, it needs to go. You need your absolute, all-star dream team of ideas, and anything that doesn't meet that definition doesn't belong in there.

Synergistic ideas

A key word here is "team". Every element must work together, reinforcing each other. The investor needs to leave the meeting thinking: "This is a strong team, taking a revolutionary but feasible approach to a massive market. They are poised to succeed, and my investment will help them get there, but if I don't invest, someone else will". This is the thought that needs to be in their mind when the meeting ends, but they won't just believe you if you simply come out and say it. You need to accomplish that with indirect means, namely, your deck and presentation.

Avoid distractions

A corollary to the previous piece of advice is to avoid anything that might spark a non-useful question. If your metaphor compares you to a company that recently had some bad news, consider using another company. If some of your team photos look weird, use other photos. Make sure every thought and discussion is directed towards your ultimate goal, and not some side-tracking conversation, even if that conversation is pleasant.

Budget your innovation tokens carefully

Every new company has to have a certain (small) number of innovations they've gotten right before anyone else in the world. In everything else, it's best not to appear "weird" or postulate additional innovations. Innovation is a risky thing to get right, and while one or two can make you a promising startup, needing five or six to go right can quickly make potential investors run away (and rightly so). If your team's core strength is in technology, go with the most boring business model you can. If your team's advantage is in user experience and customer development, use boring backend technology. This advice is important both for pitching, as it avoids distractions, and for planning out your company itself.

Focus on impact

While you may not be able to go into the depths of your technological breakthroughs, you can focus on the value they bring. That's why you're doing the work anyway (right?). If you're lucky, you can explain your breakthrough as an improvement over the competition. Things like "10x faster", "3x cheaper", "5x less development needed" etc., are good things to be able to say (assuming, of course, you can defend them when asked). Even better is to phrase things in terms of the qualitative threshold they're able to jump over. A masterpiece of this approach is Apple's branding of their "Retina" screens. While they did brag about making their pixels a lot denser than the competition's, they focused their message on being the only ones below the perceptual limit of the human eye. Being somewhat better than the competition is one thing; being the only ones in the market with an important new kind of thing is entirely another.

Lossy compression

Crafting a concise message entails compression, and at the extreme end of the spectrum, the compression will necessarily be lossy -- not everything about your company will fit in the deck. Beyond culling material, you may even need to compromise on clarity. A great reduction in surface area is worth a small reduction in accuracy.

If you think this sounds dishonest, consider the billiard ball model of physics. While not strictly correct, it is still used to this day to explain particle physics to students and laypeople, and it does the job remarkably well. If someone wants to dig in, they will discover the limitations of the original explanation, but they will also understand why that explanation was a necessary way in. Speaking of compression, you can think of your pitch as a progressive encoding: your one-line pitch is a very blurry (but exciting!) picture of what you want to say, your one-paragraph pitch is a much clearer but still considerably blurry one, and so on. Your deck and presentation are the next steps of that process, with due diligence perhaps exposing the next level of detail.

Milk context for all it's worth

If we're thinking in terms of optimisation, the way to avoid sending something big over the wire is to send a delta from a pre-existing artifact instead. Use things that are pre-installed in your listener's mind. This is why "We're the X of Y" type elevator pitches actually make sense. The listener can transpose the known pattern (X) to the known context (Y) and make a snap decision on whether they want to hear more. Metaphors and recent events that are somehow related to your product are also great artifacts you can use to your advantage.

Don't waste time giving your opinion

Starting a conversation with an investor, you have very little credibility. Remember, they see and hear people like you multiple times a day, and most of what they hear is bullshit. They know full well that you are motivated to say anything that will get you to the next round. Building your own credibility is also a goal of the pitch process, but going in, it's best to assume you have none. When you choose how to describe things, always default to descriptions that can be verified independently. For instance, instead of saying a customer is "a big company with a massive revenue stream", say "a 500-person company with $3B in annual revenue". This bypasses the issue of what you mean by "big" or "massive" and allows the listener to make up their own mind, focusing on numbers that they could, in principle, verify for themselves. This is also why concrete endorsements by relevant experts or companies, revenue from customers, letters of intent, and other such artifacts are immensely valuable. They can support parts of the argument that would not come across strongly enough without external evidence to show for them.

Leave some meat on the bone

You don’t have to cover every topic in the main body of your pitch. In fact, it may be a good (but risky) idea to leave some topics for the investor to inquire about. These should be topics you have strong answers to, so you can convert an “is this the weak spot?” thought into a “shit, these guys have done their homework, this may be worth paying attention to” thought.

Many birds, one stone

Since the environment is so constrained, there is simply no time to work on one side of the problem at a time. Every slide needs to reinforce other parts of the pitch as well as its main payload. Your customer stories can also reinforce your pricing model, or your team’s quality. Your presentation as a whole also needs to reinforce how good you are at communicating. Take every opportunity to strengthen multiple parts of your story simultaneously.

Optimise the input

Given a specific state of the company, there is only so much you can do to pitch it well before you start becoming dishonest. The good news is, besides being deck-maker-in-chief, you are also in a position to influence the company itself. For every previous piece of advice on this list, consider whether the things you are having difficulty expressing really need to be the way they are. As a simple example, you can formalise things so you can make stronger statements about them. Instead of saying a customer has "a strong interest" in using your product, ask them to sign an LOI or give you some money, and say that instead. Sometimes, making a deck is a good impetus to do things you should do anyway, like cleaning up the structure of your company, your pricing, your roadmap, your product lineup, your team composition, or anything else you would have to explain.

In general, VCs, especially Silicon Valley VCs, expect a company to be in a specific shape to be "investable", and that shape correlates with previous successes they've seen. Any deviation from that model has to be explained, and that is time wasted, unless the explanation serves to reinforce other parts of your pitch, thereby making you come out stronger. For every odd-looking thing you have to explain, you lose points by default. A company that makes sense is easy to make a deck for; the reverse does not hold (the causal arrow is not reversible). So make your company make sense from the get-go rather than wasting time putting lipstick on a pig.

Use your unfair time advantage

While the investor only has a few minutes of attention to spare, you have much more time on your hands. Compression algorithms are known to produce better results when given more time to compress, and this applies even more to your pitch. As the quip often attributed to Mark Twain (though it goes back to Blaise Pascal) has it: "I didn't have time to write a short letter, so I wrote a long one instead". You have a lot of time; make a short deck.

Beyond fundraising

The good news is that a condensed value proposition is good for you and your company too. You also sell your company to prospective employees, partners, and customers. Each of them is coming from a different place, but the same principles of compression apply.

This is not (just) about investors. In fact, one of the best things about an investment round is being forced to boil down your value proposition and plan for getting there, and getting rapid rounds of feedback from motivated smart people. It's not often cited as such, but this forced clarity of thought may be one of the big success factors for companies in Silicon Valley.