It seems like investors are especially obsessed with the psychology of decision making — high stakes, after all — but all kinds of decisions, whether in life or business — like dating, product management, what to eat or watch on Netflix — add up to an “investment portfolio” of decisions… even if you sometimes feel like you’re making one big decision at a time (like, say, marriage, or what product to develop next, or who to hire).

Obviously, not all decisions are equal; in fact, sometimes we don’t even have to spend any time deciding. So how do we know which decisions to apply a robust decision process to, and which ones not to? What are the strategies, mindsets, and tools to help us decide? How can we operationalize a good decision process and decision hygiene into our teams and organizations? After all, we’re tribal creatures — our opinions are infectious (for better and for worse) — so how do we convey vs. convince, and inform to decide rather than insist on agreeing to decide? Especially given common pitfalls (resulting, hindsight bias, etc.) and “the paradox of experience”, including learning even (and especially) from winning vs. losing.

Decision expert (and leading poker player) Annie Duke comes back on the a16z Podcast — after our first conversation with her for Thinking in Bets, which focused mainly on WHY our decision making gets so frustrated — to talk about her new book, which picks up where the last one left off, on HOW to Decide: Simple Tools for Making Better Choices. In conversation with a16z managing partner Jeff Jordan (former CEO of OpenTable and former GM of eBay, among other things) — so, from all sides of investing, operating, and life — Annie shares tips for decision makers of all kinds making decisions under uncertainty… really, all of us.

Show Notes

  • Cognitive biases that affect decision-making [3:05] and tools for overcoming them [6:08]
  • Disadvantages of the traditional “pros vs. cons” list [11:44]
  • How long decisions should take [14:43], and how decision “hygiene” can streamline the process [20:14]
  • Making decisions within groups [24:47] and shortening feedback loops [31:14]
  • Considering optionality and reversibility [36:08] and hedging bets through decision-stacking [40:31]

Transcript

Hi, everyone. Welcome to the “a16z Podcast.” I’m Sonal. Today, we have another one of our early book launch episodes for a new book coming out next week, by expert decision strategist — and leading World Series of Poker player — Annie Duke. You can catch the podcast we did with her a couple years ago, for the paperback release of her first book, “Thinking in Bets.” That episode was with me and Marc interviewing Annie, and was titled “Innovating in Bets” — which is perhaps also one of the signature themes of this podcast. But in this episode, we talk about her new book, “How to Decide,” which picks up where the last book left off. And the discussion that follows covers lots of useful strategies, tools, and mindsets for helping all kinds of people and organizations decide under conditions of uncertainty.

Annie is interviewed by a16z managing partner, Jeff Jordan, who was previously CEO and then executive chairman of OpenTable, former GM of eBay North America, and much more. They begin by quickly covering common pitfalls in decision-making, then share specific tools (what to use and what to avoid), including how to operationalize good decision hygiene into teams, and when to spend time deciding or not, especially since not all decisions are equal and some may seem bigger and more impactful — whether it’s life decisions, like getting married, or business decisions, such as what product to invest in, what strategy to pursue, or what market or investment to choose. As a reminder, none of the following is investment advice, nor is it a solicitation for investment in any of our funds. Please be sure to read a16z.com/disclosures for more important information.

Jeff: So, Annie, as the author of one of my favorite books, what motivated you to do a sequel — your new book, “How to Decide: Simple Tools for Making Better Choices”? “How to Decide”?

Annie: So, when I think about what “Thinking in Bets” was about, it was really the way that our decision-making gets frustrated by this, kind of, discorrelation between decision quality and outcome quality. And then toward the end of that book, there was, kind of, a little bit of an exploration about how you might become a better decision maker given the uncertainty, but it was mostly a “why” book. And so, this is really trying to lay out for people — how do you actually create a really solid and high-quality decision process that’s going to do two things? One is, get a pretty good view on the luck, which you need to do. You need to be able to see it for what it is. Obviously, you can’t control it, but you can see it.

But then the other thing, and I think this was something that was really fun — I got to really dig deep into this problem of hidden information, that when we’re making decisions, we just don’t know a lot, because we’re not omniscient. And we also aren’t time travelers. And so, I got to actually do this really deep exploration into how you might actually, really, improve the quality of the beliefs that you have, that are going to inform your decisions. Which was a topic I covered a tiny bit in “Thinking in Bets,” but here we do like a super deep dive.

Jeff: It is a super deep dive. And why I love your books is — it’s so germane to what we do in our day job, which is [to] make decisions under extreme uncertainty. So, to recap, [talk about] why trying to learn from experience can go sideways.

Common decision-making pitfalls

Annie: Sure. So, you know, both of my books, kind of, start a little bit in the same place and then they diverge from each other. But I think that’s because it’s the most important place to start. What I talk about, at the beginning of this book, is what I call the paradox of experience — which is, obviously — experience is necessary in order to become a better decision maker. You do stuff, the world gives you feedback. You do more stuff, the world gives you feedback. Hopefully, along the way, you’re becoming a better decision maker, given that feedback.

The problem is that any individual experience that we might have can actually frustrate that process. We can learn some pretty bad lessons when we take any individual piece of feedback that we might get. So experience, while necessary for learning, also is one of the main ingredients that makes us worse decision makers. And it really just, kind of, comes from this problem, that in the aggregate, if you get 10,000 coin flips, we can say something spectacular about the quality of our decisions and what we should learn or not learn from them.

But that’s not the way that our brains process information. Our brains process information sequentially, one at a time. And because we’re, sort of, getting these outcomes one at a time, we’re just taking really big lessons from something that’s really just one data point. And the two main ways that that frustration occurs is because of resulting, which obviously I cover quite a bit in “Thinking in Bets,” where we use the quality of an outcome to derive the quality of the underlying decision. You can run red lights and get through just fine. And you can run green lights and get in accidents. So, these things actually aren’t correlated at one, but with resulting, we act like they are. And then the other problem is hindsight bias. We aren’t really good at, sort of, reconstructing our state of knowledge at the time that we made a decision. And so, once we know the outcome, we not only kind of view that outcome as inevitable, but we’ll also, sort of, think we knew that it was going to occur — none of which is true. So, those two things combined are really problematic.

Jeff: You had this beautiful imagery of decision forestry, which resonated with me.

Annie: I sort of think of them as cognitive illusions. What those illusions are creating for us is the idea that the outcome we observed is the only outcome that could have occurred. In reality, though, what we know is that, at the moment that we make a decision, there’s all sorts of different ways that the future could turn out. When we’re at the moment of a decision, we can see all those branches of the tree, where I become a fireperson, or I become a poker player, or an academic, or whatever, you know — sort of, imagining all the different ways that the future could unfold. But then after the future unfolds as it does, we take a cognitive chainsaw to that whole tree, and we just start to lop off the branches that we happened not to observe. In other words, we, sort of, forget about all the counterfactual worlds. And we end up thinking that there was only this one branch that could have happened, because we sort of chainsawed everything else away. We sort of forget that there were other paths that could have occurred.

Jeff: How do you keep the forestry from lopping off the branches? As you started turning to how, you started with some really useful tools.

Tools for analyzing options

Annie: So, there are two tools that you can use when you’re thinking about that, actually three. The first has to do with trying to reconstruct the actual state of knowledge that you were in. When you think about what you knew beforehand and what you knew afterwards, you can now start to, sort of, reasonably see what the information was that was informing the decision at the time. When you actually go through this process, you’ll spot, “No, wait a minute, that was something that revealed itself after the fact.” That’s one thing that can be very helpful.

Another thing that can be very helpful is to actually go through this process of thinking about this two-by-two matrix of the relationship between decision quality and outcome quality. So, there’s a quadrant, which is — good decision, good outcome, which you can think of as like an earned reward. Good decision, bad outcome — that would be bad luck. Bad decision, good outcome — dumb luck. Bad decision, bad outcome — I guess would be, like, your just deserts. When you’re thinking about any outcome that you’ve had in your life, if you do that over time, what you’re going to see is that you have certain patterns about which quadrants you’re really filling in a lot. So, if you’re seeing that you’re really only putting things into, like, good-good and bad-bad, you need to start seeing how luck is influencing you.

And then the other thing you want to do is just start thinking about particularly the good, good quadrant. Because we are asymmetrically willing to go and try to find some luck in there. Let me explain what I mean. So, if you have a bad outcome, you already feel bad. You’re sad because you lost. And it’s, kind of, nice to go in and deconstruct that, and analyze process, and really look at the quality of the decision that led to that outcome. Because if you find some bad luck in there, you get a little relief.

Jeff: You, kind of, get off the hook.

Annie: Right. It’s like a door out of the room. Luck is giving me a way out of this. So, we’re actually pretty eager to go around and explore those bad outcomes. What we’re not so eager to do, though, is when we have a good outcome — to apply the exact same process. To actually spend some time in there thinking about, “Well, you know, what was luck or was there a better way?” And the reason why we don’t want to look at that is because we feel pretty good. If you find out that you won because of luck, that’s a door that you actually don’t want to have open for you. 

So, I actually put a lot of focus, when I’m thinking about using this tool, of really digging into that one quadrant. And what you can see is, in order to actually be thinking about which quadrant that fits in, you have to actually apply this other tool — which you can do in retrospect — which is actually to do some exploration of, like, what are the other things that could have happened? Because if you don’t understand those counterfactuals, it’s very hard to actually appropriately place any outcome into the right quadrant. So, I have tools in the book which will help you, sort of, reconstruct these things retroactively.
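To make the two-by-two concrete, here is a minimal Python sketch of keeping a running tally of which quadrant your outcomes land in. The quadrant labels come from the conversation above; the logged entries are hypothetical.

```python
from collections import Counter

def quadrant(decision_good: bool, outcome_good: bool) -> str:
    """Map a (decision quality, outcome quality) pair to one of the four quadrants."""
    if decision_good and outcome_good:
        return "earned reward"
    if decision_good and not outcome_good:
        return "bad luck"
    if not decision_good and outcome_good:
        return "dumb luck"
    return "just deserts"

# Hypothetical log of past outcomes: (was the decision good?, was the outcome good?)
log = [(True, True), (True, False), (False, True), (True, True), (True, True)]

tally = Counter(quadrant(d, o) for d, o in log)
print(tally)
# If "dumb luck" and "bad luck" rarely show up in your tally, that is a sign you
# aren't looking hard enough for luck -- especially in the wins.
```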

Jeff: It’s kind of interesting. The investment community often tries to capture the thinking at the time through the investment memo. Which then, you know, records, okay, you know — these are the potential outcomes that we can envision, here are the probabilities of the different outcomes. And in total, we’re willing to make this bet, even though there are some outcomes that are pretty unattractive, to say the least.

Annie: And that, absolutely — if you think about a knowledge tracker, that’s what you’re doing. It’s like you’re trying to reconstruct an investment memo. It’s better than nothing. But what you really, kind of, want to be doing is — doing this stuff prospectively. You want to have some sort of record of not only what you thought at the time, but also exactly what you said. Like, what are the ways that we think this could turn out? Like, what are the payoffs of each of those possibilities? How probable do we think those are? So, you can actually look at, generally, two things — what’s the expected value, what’s my downside risk. And then you can, obviously, compare options to each other. What I think is actually really important, though, about thinking about this, like, evidentiary record — that you’d like to create at the time of the decision, as opposed to [trying] to reconstruct, is that it’s not actually an extra step.

Like, people talk about decision journals, which feels like work. Because it feels like an extra step where you’ve made the decision, and now you’re trying to record everything. The fact is that a really great decision process is going to produce this evidentiary record naturally. And obviously we’d prefer to have that, because what the evidentiary record is giving you — what that investment memo is supposed to give you — is, sort of, what your expectations of the world are. Not just like, do I think I’m going to win or lose at this probability, but also, like, what do we think is going to be true about the world in general? What I find in my work is that when people lose, they’ll do these process dives. The problem is when there’s a big win, they’re like, “We won.”
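Here is a minimal sketch of the kind of evidentiary record Annie describes: for each option, write down the ways it could turn out with a payoff and a probability, then compare expected value and downside risk. The option names and all numbers are hypothetical.

```python
# Hypothetical decision record for two options: each outcome is a
# (payoff, probability) pair, with probabilities summing to 1 per option.
options = {
    "option_a": [(100, 0.30), (20, 0.50), (-40, 0.20)],
    "option_b": [(60, 0.40), (10, 0.40), (-5, 0.20)],
}

for name, outcomes in options.items():
    expected_value = sum(payoff * p for payoff, p in outcomes)
    worst_case = min(payoff for payoff, _ in outcomes)  # simple downside measure
    print(f"{name}: expected value = {expected_value:.1f}, worst case = {worst_case}")
```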

Jeff: Yep, exactly. When an investment goes bad, you do spend time trying to say, “Okay. What can I learn? What can I do differently?” And then when it goes well, you just spike the ball in the end zone and do a dance.

Annie: And we really are just, like, spiking the ball. But there’s so much to be learned from the wins as well, and I would argue, actually, more. Particularly, by the way, when the <inaudible>, it’s like, there’s going to be — in a lot of ways, there’s more to be learned from the wins than the losses, right? Because the thing is, like, you know, you can win for all sorts of reasons that you didn’t expect. And yet we spend a lot more time in our decision process exploring the losses that were for reasons that we expected — than the wins that might’ve been for reasons that were unexpected. Maybe we could have cleaned up the process, or there was information that we were missing that we could have applied, so on and so forth. We’re, kind of, losing a lot of the learning time. We’re not being very efficient when we do that. And the other problem is that that has downstream effects that are quite bad: I’m going to do things that are very consensus. I’m going to want everybody to agree with me.

Jeff: Yeah. That resonates a lot. So, you take on using a pro-con sheet. And it was funny. I was cleaning up offices a couple years ago, and I found sheets in different places, and aggregated my career decisions. And, you know, I came to the conclusion that they were pretty much worthless. And so, you come to the same conclusion in the book. Why are pro-con sheets worthless?

Annie: So, let me just say, a pro-con list is actually a decision tool. And if you have a choice between that and nothing, I think a pro-con list is very slightly better than nothing. But here are the problems with the pro-con list. The first is that it’s flat. It lacks any dimension. It’s like a side-by-side list — here are the pros, here are the cons. And I don’t really understand how you would weigh one side against the other without adding some dimension to that list. And that dimension would be two things. One is, how bad? What’s the magnitude? The other dimension that’s missing, which is terrible, is probability. 

So, in that sense, I’d rather just use the decision tree. And for an option that I’m considering, I want to just think about what are the reasonable possibilities, what are the payoffs for those, and what are the probabilities of those things occurring? And then I can add that dimension back in. Without that dimension, it’s not a great tool for comparing one option to another — because, again, I can’t calculate, like, any kind of weighted average here. If, like, I’m choosing between two colleges, am I supposed to just go to the one with more pros? I really, kind of, don’t know, because I don’t have this dimension.

And then the third problem, which I think is actually the most dire, is that what we’re really trying to do is to reduce the effect of cognitive bias. Pros and cons lists actually amplify all of that stuff. It’s, kind of, a tool of the inside view. And let me just say for people listening — I imagine some people saying, “No, when I go to make a pros and cons list, I haven’t decided yet.” I have news for you. The minute you start thinking about a problem, you’ve already started deciding. You know, regardless of whether you’ve made that explicit or not, you’ve already started to get yourself to a conclusion. And now when you go to do a pros and cons list, it’s just going to amplify the conclusion that you already want to get to. So, I think it’s just not a very good tool.

Jeff: My worst career decision, by a mile, was joining a company called Reel.com, right at the beginning of the internet era. It was being purchased by Hollywood Entertainment, which ran the Hollywood Video stores. And it was a bad decision. I <inaudible> in a year, I got scars. But when I went back and looked at the pros and cons, the pros were aspirational and the cons were delusional. I clearly had decided before I started the list.

Annie: Yes, exactly. When we start to use something that feels objective, like a pros and cons list, we get that feeling of like — well, now I can have confidence that it’s a really good decision. So, one of the things that I’m very wary of — is that I think that there’s certain things that can come into a decision process that feel like it’s certifying the process. So, we end up with this combo of a decision that isn’t really better, but that we feel is much more certified.

How long a decision should take

Jeff: I love the tools you described using the decision trees. The prospective gathering of information. Then you took your “how” in an interesting direction. And I really enjoyed the part on spending your decision time wisely. <Oh.> So, it’s a book about, you know, making great decisions — and then you start talking about all the decisions that you shouldn’t apply it to.

Annie: <laughter> I know. So, I spend the first six chapters really, kind of, laying out what a pretty robust decision process would look like. And then I, sort of, take a hard left and I say, okay — so now that you know, mostly you shouldn’t be doing that. Which I know sounds a little bit odd, but it’s this meta skill of understanding that obviously you can’t take infinite time to make decisions, because opportunities expire, and you’re losing the ability to do stuff in between. And so, we want to really think about what types of decisions merit taking time, and what types of decisions merit going fast. And it just turns out that most of the decisions that you’re going to make on a daily basis are ones that you should be going fast on — much faster than you actually do. And in some ways, I think that people, sort of, have it reversed.

Jeff: Throw out a couple examples, because that’s where it really came alive to me.

Annie: Okay. So, let me ask you this. What’s your guess — obviously pre-pandemic — what’s your guess on the average amount of time that an adult in America takes on — what to watch on Netflix, what to wear each day — I mean, at the moment it’s sweatpants, but, you know, we’ll ignore that — and what to eat?

Jeff: If you’re my mother-in-law, she used to spend a half hour every time we went to a restaurant.

Annie: So, like, she’s not even that much of an outlier. If you add it all up over the course of a year, the average adult is spending between six and seven work weeks — like, literally, on just those three decisions. I’m sure she’s looking at the menu, and then she’s quizzing, like, all the waitstaff, and asking everybody else at the table what they’re going to order — like, trying to go back to the chef, looking on Yelp. So, here’s my question for you. Let’s say that we ate a meal together, and you were trying to decide between two dishes. Like, what are two dishes that you would have a hard time deciding between?

Jeff: Fish and a good veggie stew.

Annie: Okay, okay. So, you’re trying to decide between those two things. If you’re [your] mother-in-law, you’re quizzing everybody. So, let’s imagine that you ordered the veggie stew and it came back. And let’s imagine you got this bad outcome, where the food was really yucky and you didn’t even finish it because it was so gross. So, now let’s imagine it’s a year later and I say, “Hey, Jeff, how are you feeling right now, happy or sad? So, you remember that horrible veggie stew you had a year ago, how much of an impact did it have on your happiness today?”

Jeff: Zero.

Annie: Zero. Okay. So, let’s imagine I catch up with you in a month and I say, “Hey, Jeff, feeling happy or sad right now? Do you remember that horrible veggie stew you had like a month ago? How much of an effect on your happiness did it have today?”

Jeff: None.

Annie: None. What if I catch you a week later, by the way?

Jeff: None. Now, if it had been the fish that had been bad, like, a week maybe…

Annie: Maybe, but not the veggie stew. <laughter> Okay. So, what I just walked through with you is something I call the happiness test. I use happiness, generally, as just a proxy for, are you reaching your goals? Because we’re generally happier when we’re reaching our goals. So, you can substitute any goal that you have in there. And this is a way for us to figure out how fast we could go. Because, basically, the sooner your answer to the question — “Did it affect your happiness at all?” — becomes no, the faster you can go. Why? Because there’s a tradeoff between time and accuracy.

So, in general — not always, but in general — the more time we take with a decision, the more time there is for us to, like, map these things out, and actually calculate, like, expected values, and figure out what the volatility might be. Or gather information, get more data, all of those things. Generally, with time, we should be increasing our accuracy. So that’s why we can speed up — I’m assuming no food poisoning here — because when we look at the worst of those outcomes, it has no effect; it’s neither here nor there. Which means that we can take on the risk of saying, “I’m going to spend less time, because I’m willing to risk the fact that I might increase the probability of the worst outcomes, because it doesn’t really matter to me.”

Jeff: Then you make another point — that you can repeat the decision, next day at the restaurant — and order the fish instead of the tasteless stew.

Annie: That’s the other thing that you can look at, which is — when you have these low-impact decisions that are quickly cycling, and they repeat very quickly — so that’s, like, what to watch on Netflix, what to wear, what to order at a restaurant — we should go really fast for two reasons. One is you’re going to get another crack at it in like four hours. And then the other is that — one of the things that we actually don’t know well, although we think we do — is, like, our own preferences. We’ve all had that experience of having a goal, achieving it, and realizing that wasn’t really what we wanted in the first place. And then there are certain types of decisions where it’s just really helpful to, sort of, get some feedback from the world.

So, when we can actually cycle these decisions really quickly — and I’m not really too worried about, like, making sure I’m making the best possible decision in terms of accuracy. What I’d rather do is get a lot of cracks — get a lot of at-bats — so that the world can start giving me information back more quickly, and I can start cycling that feedback a lot faster. Then I’m going to build much better models of the world. And what my own preferences are, and what my own goals are, and what my own values are, and what works and doesn’t work. Such that when I do actually make a decision that really matters, my models of the world are going to be more accurate — by having just, sort of, like, done a whole bunch of stuff really fast and not really cared whether I won or lost.

Practicing decision “hygiene”

Jeff: That makes perfect sense. Now, one of the chapters that I loved was decision hygiene. I found this book fascinating from the perspective of both an investor and a former operator. I mean, as an investor, it’s obvious — you’re making two or three investments a year. You’re seeing, you know, hundreds of companies. How do you decide? But as an operator, there are a few decisions you make each year that are super, super important. Particularly, the one that I used to labor over was — okay, you have to commit. You have to invest your product resources — your most valuable asset, your engineers — into specific deliverables. You know, is it going to be A, B, or C? And that’s the most important decision I made all year, other than possibly people decisions. Explain a little bit — how you can maintain great hygiene. It resonated in both my professional experiences in a really significant way.

Annie: I have to say, like, the decision hygiene stuff — and the ideas of predicting these intermediating states of the world — apply so much in a startup environment. Because obviously, kind of, the nature of a startup is that you do have very little information, and you’re making pretty big bets on a future that, by definition, is going to be somewhat contrarian. So, making sure that you don’t get into this, kind of, groupthink. Like, I’m not saying don’t believe in yourself, of course — but this is actually a way to have more belief in yourself. Because the quality of the decisions that are going to come out of a good decision hygiene process are going to be so much better. And that becomes much more important in a situation where we have a paucity of information. And then it starts to actually close feedback loops more quickly for you, which also increases the quality of your models and information. So, I actually can’t think of a place where this is more important than in a startup environment.

So, let me just start with, kind of, the premise — why you need some decision hygiene. I don’t have control over luck. What I can do is make decisions that reduce the probability of a bad outcome. You know, even if I make a decision that’s only going to have a bad outcome 5% of the time, I’m still going to observe it 5% of the time. And luck is what determines when I observe that bad outcome. So that’s kind of one side of the puzzle. The other side of the puzzle has to do with how you construct your decision process. What do you think your goals are? What do you think your options are? What do you think your resources are? What do you think those possibilities are for any given option you’re considering? What do you think the probabilities of those things occurring are?

Basically, your whole process is built on this foundation. Like, that whole house is sitting on top of a foundation — which is your beliefs. And by beliefs, I don’t mean things like religious beliefs. I mean, just like — what are your models of the world? How do you think the world operates? What are the facts that you have? What’s the knowledge that you have? And that foundation that that whole process is sitting on has two problems. One is that a bunch of the things we believe are inaccurate, so it’s like cracks in the foundation. And the other is that we don’t know very much. So, it’s like a flimsy foundation. The solutions to both problems are the same — which is that we need to start to explore that universe of stuff that we don’t know. That’s where we run into new information that helps us beef up our foundation. And it’s also where we happen to run into corrective information —  things that can correct the inaccuracies in the things that we believe.

The other thing that helps us, too — when we were talking before about the pros-and-cons list that gets you, kind of, caught in your own cognitive bias — is to realize that like a lot of the cure to those kinds of problems is to get other people’s perspectives. So, two people can be looking at the exact same data, and they can come to very different conclusions about the data. That’s what a market is. It’s different perspectives colliding. All right. So, having set that stage, one of the best things you can do for your decision-making is finding out what other people know and what their perspectives are on the problems that you’re considering. The problem is that without really good decision hygiene, you’re not actually going to be able to execute on that properly. So, let’s figure out — how do we get this into a team setting? Basically, human beings are very tribal, and we’d like to, sort of, agree with each other more than we actually do. And our opinions are really, actually, infectious. So, in order for you to know that you disagree with me, what is the thing that you need to know from me first?

Jeff: Well, what do you think?

Annie: Right, exactly. And this is where we get into this huge problem in interpersonal communication. When people ask for feedback, pretty much 100% of the time, they tell the person what they believe first. I’m thinking about a particular sales strategy or whatever. And I will lay out for you, not just the information that you need, but I also tell you my opinions on that.

Jeff: “Give me your unbiased opinion,” right. Now that I’ve biased you hugely, right.

Annie: Right, exactly.

Jeff: So, the reason your decision hygiene point, maybe, was so interesting to me — is you called out one of my tools that I used as an operator, which was quarantining in group settings. I found, at OpenTable, that if I walked in and had, you know — put a strategic choice on the table, there were one-and-a-half people in there who would drive the discussion, and their opinion would always carry the day. <Yep.> So, I developed a tool where, on very important, big-time, strategic decisions, I would ask everyone to send me their lists of prioritizations in advance. And then I would aggregate them, and then feed that back to the group — to heighten the contradictions, essentially. The quiet person didn’t really want to put forward a contrary point of view and spar with the other person. But all of a sudden, their data is on the table, because you’ve quarantined the gathering of it. And then I found the conversation was so much better than just throwing it open and having, you know, the charismatic, loquacious, opinionated person carry the day every time.

Annie: I could quarantine my opinion. But as soon as someone else talks, as you just so nicely put it — it’s like everybody else is infected anyway. I’m just a really huge fan of pre-work. Figure out what it is that you’re trying to get feedback about, give everybody the same information. And then actually elicit those responses. Now, the more specific, the better. So, I like them to rate it, right? Give it to me on a scale of zero to five. Because then I can find out, like — Jeff is a four and, you know, Annie’s a two. And maybe Jeff and Annie need to have a conversation, because it turns out that there’s quite a bit of dispersion of opinion there. What this allows you to do is — first of all, it actually disciplines your decision process, because you have to think about what are the things that matter to this decision that I’m trying to elicit opinions about. And let me be clear, it’s not that I don’t think people should provide rationales. I think those are actually quite important. It’s just that they need to give something that’s much more precise first. It’s like a point estimate, because I need to see where the dispersion is, and then let them give the rationale.
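A minimal sketch of that pre-work step: collect each person’s rating independently, then surface where the group disperses. The questions, names, ratings, and dispersion threshold below are all hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical pre-work: each person rates each question from 0 to 5, independently.
ratings = {
    "Pursue the new sales strategy?": {"Jeff": 4, "Annie": 2, "Sonal": 3},
    "Is the current pricing right?":  {"Jeff": 4, "Annie": 4, "Sonal": 5},
}

for question, by_person in ratings.items():
    scores = list(by_person.values())
    spread = max(scores) - min(scores)
    print(f"{question} mean={mean(scores):.1f} stdev={pstdev(scores):.1f}")
    if spread >= 2:  # arbitrary threshold: wide dispersion goes on the meeting agenda
        print("  -> dispersion worth discussing:", by_person)
```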

Jeff: I used to give a hypothetical budget. You have a million dollars. Here are the 12 ideas you can invest behind. Deploy your budget. And each person would deploy. And then all of a sudden, you’ve got something that’s really powerful. And you’ve got — oh, you loved this idea, and you hated this idea. Let’s discuss the idea.

Annie: Right, which I love. Exactly. So, you can actually see that they disagree with each other, or see that they do agree with each other. It also makes you actually think about — what are the component parts of this decision that really matter? You can start to actually create for yourself — almost, like, a little bit of a checklist — here are the things that we need to pay attention to, and that I actually need to get the feedback on. This is what Kahneman would call mediating judgments. You’re thinking about what the mediating judgments are for any broader category that you might be judging on. And that helps you to really discipline the decision process. You then bring that together in one doc, and you sort it into — here are areas of agreement, here are areas where there’s some dispersion.

People get to read that prior to coming into the meeting. So, they’ve actually seen, sort of, the full slate now, of what the different opinions are in the group. This does really great things for your meetings. It makes them much more efficient, much more productive. You’re not surfacing all that stuff in the room, which just takes a long time. <Absolutely.> And by the way, you’re not going to surface all of it anyway, so that’s bad. 

But the other thing is that now you can come in and you can say, here are areas where we generally agree — yay us — but let’s not talk so much about the fact that we agree. Which is what happens in a lot of meetings, where you’ll say something, Jeff, and then I’ll go, “I agree with Jeff, and let me tell you why.” And then somebody else is like, “Yes. And I have more color to add to that,” because everybody, sort of, wants credit for that idea. But we don’t care now, because we already found out we all agree.

Yay. Yes. Right. The earth is round. Cool. Right? But now it turns out that Annie thinks the earth is flat over here. Okay. So, now what are we going to do? Jeff thinks the earth’s round, Annie thinks the earth’s flat. And that’s where you really want to be focusing your time — on places where there’s dispersion. And you want to focus that time in a way where it’s not about convincing anybody of your opinion. It’s about just informing the group. And then if anybody, sort of, agrees with you, I’ll say, “Hey, you know, Sonal, you also agree that the earth is round. Is there anything you want to add to that?” So, you’ll get to say your piece. And then, “Annie, you believe the earth is flat. Is there something you didn’t understand?” Now, notice, in no way is anybody saying, “You’re wrong,” or “You haven’t thought about it this way,” or whatever. It’s, I get to tell you, “Here’s something I don’t understand.” And then we, sort of, get to the point where I say, “Okay. Explain your position.”

There’s really amazing things that come out of that process. Thing number one is you get much more comfortable with the idea that everybody doesn’t have to agree. Number two is people have different mental models. And so, you get to expose everybody to those different perspectives, and the different facts people are bringing to the table. So, the whole group becomes more informed, which is awesome. The third thing is that the person who is conveying their position becomes more informed. Why? Because in the process of having to defend why I believe the earth is round, I discovered that I actually can’t explain that very well. So, maybe I have to go google some stuff or look it up. And there’s going to be good stuff that comes out of that, because I’m going to be more likely to actually moderate — because I’m, sort of, poking around in my knowledge a little bit.

And then the last thing, I think, that comes out of this, that’s really good — is that once you get into this idea of “convey” versus “convince,” you realize that you don’t need to agree to decide. You need to inform to decide. And the idea that all of you would be in full agreement about whether you should do something or not is completely absurd, because you don’t have to be; that’s the whole point. If you thought that that was the goal, why do you have more than one person on the team?

Jeff: Yeah. You want a diversity of opinion. <Right.> And if you don’t tease out the different opinions, then you make an inferior decision. I actually thought this was one of my management secrets. Like, you just outed it in your soon-to-be bestselling book.

Annie: Yeah. So, actually, what’s interesting about that problem, I think that teams often act like a pros and cons list where…

Jeff: Interesting, yeah.

Annie: …we have the intuition that more heads are better than one. So, when we bring more heads into a decision, you have this decision that feels much more certified. But what we know is that when you allow people to make these decisions in, sort of, committee style, like in a team meeting — that the decision quality often isn’t better. And there’s lots and lots of science that shows this.

Shortening feedback loops

Jeff: So, one of the things in venture that is often cited as a challenge in decision-making is [that] the feedback loops can be forever. What’s your take on that — feedback loop in decision-making?

Annie: Yeah. So, basically my take is that there’s actually no such thing as a long feedback loop. Which I know sounds weird, right? Because, obviously, you’re saying, like, we invested in a company — we find out how it exits, like, 10 years from now. Isn’t that a really long feedback loop? But the thing is, I mean — going back to this idea that when you make a decision, it’s a prediction of the future — it’s not like you’re just predicting what the exit is going to be. You’re predicting a whole bunch of intermediating states of the world. And that might be just like, for example — like, what is the arc of the ability to attract talent for this particular founder? Just, like, as an example, right? You know, obviously, is it going to fund at the next round. I mean…

Jeff: Those are good examples: the funding, the management team.

Annie: Right. If you knew for a fact that they weren’t going to be able to hire a good team, you won’t invest in them. So, it’s really good to, sort of, make predictions about those things, and make them probabilistically. Because as you’re making these types of forecasts now, over the course — in a much faster time period, you’re starting to see — when we say that there’s a 60% chance that this intermediating state of the world is going to exist, does it absolutely exist 60% of the time? 

Because in the end, the thing that’s so important to understand is that you are saying that you’re an expert at the market that you’re investing in. So, you want to be explicit about the things — those predictions that you’re making about that market, both near term and far term — so that you don’t have to wait around 10 years. Because the thing is, you’re going to have to make another investment in between. You can’t just make the investment, wait 10 years, get your feedback, and then make another investment. And now, if you’re actually being explicit in the way that you’re thinking about those things, you can actually create much tighter feedback loops.
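A minimal sketch of what closing that feedback loop might look like: record the probability you attached to each intermediating state, then check whether the things you called 60% happened roughly 60% of the time. The forecasts and labels below are hypothetical.

```python
from collections import defaultdict

# Hypothetical record of intermediate forecasts:
# (stated probability, whether the intermediating state actually occurred)
forecasts = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True),  # e.g., "funds next round"
    (0.8, True), (0.8, True), (0.8, False),               # e.g., "hires a strong team"
    (0.3, False), (0.3, True), (0.3, False),              # e.g., "hits revenue milestone"
]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

for p, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"forecast {p:.0%}: happened {hit_rate:.0%} of the time (n={len(outcomes)})")
```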

Jeff: It’s just aggregating to get it into a set of milestones?

Annie: Right. There’s no reason you can’t do that out in the world. One of the knocks that people will say about poker is, “Oh, but you get really fast feedback and so, pooh-pooh on you.” And I’m like, well, yeah — except that it’s really just a compressed version. There’s the end of the hand, which is what you’re thinking about. But in between the start of the hand and the end of the hand, there’s all sorts of predictions that I’m making in between.

Jeff: I’ve been an investor for nine years. The feedback loop is 10 years. I’ve made 35, 40 decisions. If I deferred any learning to the end, it would be pretty wasteful. And there’s another psychological thing we fight — the phrase is, “Your lemons ripen first.” So, if your company goes for 10 years, there’s a pretty good probability it’s going to have a good outcome. <Right.> But the ones that die early, you know — can’t raise the next round, can’t hire the management team. So your negative outcomes manifest before your positive outcomes. And psychologically, you have to manage through that.

Annie: Yeah. So, this actually, I think, gives you a tool to be able to do that, because you now have a secondary way to be right.

Jeff: Yeah, it does.

Annie: Like, how am I doing in terms of, like, calibrating around how likely I think this company is to fail? You know, in what ways is it going to fail? What does that actually look like? The other thing that comes from that is that — when you make yourself, sort of, break this into its component parts, when you actually force yourself to do that — I think it actually improves the knowledge that goes into it. Because you have to start thinking about — what are the things that I know, what are the things that I can find out? What are the perspectives that I could consider? What are the mental models that I could apply that will help me with this prediction? Because it is now recorded — it is part of that evidentiary record, which we’ve already said is incredibly important — that allows you to have that look back. And because you know that you’re accountable to it, I think it actually improves the accuracy of the original decision. Because it makes you be more fox-like rather than hedgehog-like, because you know that there’s going to be a look back.

Basically, fox-like thinking is looking at the world from all sorts of different perspectives — applying lots and lots of different mental models to the same problem to try to get to your answer. And hedgehog is like — you approach the world through your one big idea. So, you could think about, like, in investing — you have one big thesis, instead of looking at it from all sorts of different angles. Generally, what you find is that fox-like thinking is going to win the day. And this is something Phil Tetlock — I’m sure a lot of people are familiar with “Superforecasting” — talks a lot about. So, apart from the fact that you can speed up the learning cycle, I think it actually improves the decision in the moment — the knowledge that at some point someone’s going to look back at it.

Jeff: Yeah. I think that’s absolutely true, and it’s a good tool. And we may start implementing that at the firm really soon. You know, as investors, we have the — we get the benefit of being able to make a basket of decisions, you know, diversification. A lot of the people making decisions are making, like, one decision. What is the impact of optionality? How do you deal with, you know, that one decision?

Annie: So, first of all, here’s the secret. Your decisions are a portfolio, because you make many of them in your life. And I understand, one decision — like this particular product decision. But that’s actually, kind of, like a false segregation, because you’re, kind of, working across different decisions. But I do understand that some decisions you’re making feel like they’re much higher impact. Like, when we go back to the happiness test. Obviously, like, when you’re, sort of, putting your eggs in one product basket, this is something that if it goes wrong, it’s going to have a very big effect on your ability to achieve your long-term goals. But that doesn’t mean that you can’t think about, “How can we just, sort of, move fast? And then, how would we then apply this to making a higher quality decision about something like that?”

So, one of the things that we want to think about besides impact, when we’re considering how fast we can go, is optionality. Which is really just — if we’re on a particular route, how easy is it for us to exit? Can we get off the route? Because obviously when we choose a particular option, we’re foregoing other options. And there’s obviously opportunity costs to those — to not choosing those options. And what we’re doing is we’re saying, “This action compared to others is going to work out better for me, a higher percentage of the time, than other options that I might choose.” But we know that after you choose something, sometimes new stuff reveals itself, or the world tells you some things — that maybe this isn’t a road that you want to be on. So, then the question just is, how easy is it for me to get off the road?

So, one of the things that we want to look at is what people call type one or type two decisions — or, as Jeff Bezos says, two-way-door or one-way-door decisions. When you have a two-way door, when it’s easy for you to quit — and either go back and choose an option that you previously rejected, or choose a new option that you hadn’t previously considered — we can go faster. Because really, it’s a way to mitigate the downside, right? If I’m kind of on a bad route, I can at least get off and try to figure out how to get onto another route. So that would be like going on a date — super quittable. I can leave in the middle if I want. Getting married — a little harder, less quittable, right? So, you know, taking a few classes online — much easier to, sort of, quit than, like, actually committing to a particular college. Or renting, more quittable than buying.

Jeff: By the way, it turns out doing online classes and going to college is now the same thing.

Annie: It is. My children will tell you that. That is so true. But the more quittable something is, the faster we can go — because when we can quit, obviously that mitigates the effect of observing the downside outcomes. The other thing we can do is actually think about portfolio theory, but for decisions that we don’t think of as investments — even though all decisions are investments. Which is, sometimes we don’t need to choose among them. So, you can date more than one person at once, right? I actually don’t need to choose between these two options. I could actually do both at once. And then I can, kind of, figure out which one’s working better. And, you know, we did this with, like, A/B testing in marketing. That happens in software development — where you’re, sort of, trying to decide between two features. And you develop them in parallel, and you test them with one set of users, and another set of users are seeing different features.
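A minimal sketch of running two options in parallel rather than choosing up front, in the A/B-test spirit Annie mentions: show each feature to its own group of users and compare a simple conversion rate. The feature names and counts are hypothetical.

```python
# Hypothetical results from showing each feature to a separate group of users.
variants = {
    "feature_a": {"users": 1000, "conversions": 62},
    "feature_b": {"users": 1000, "conversions": 81},
}

for name, stats in variants.items():
    print(f"{name}: {stats['conversions'] / stats['users']:.1%} conversion")

best = max(variants, key=lambda v: variants[v]["conversions"] / variants[v]["users"])
print("leaning toward:", best)  # in practice you'd also check statistical significance
```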

Jeff: A number of businesses do business locally. You’ll have restaurants in San Francisco and LA — deliver groceries in San Antonio. You can charge — you can have different pricing approaches in the different markets and just learn. I mean, no one in San Antonio is going to know what you did in Montpelier, Vermont. So, you try it out, and you learn and learn and learn, and then you go national.

Annie: Exactly. When we can do things in parallel, obviously we’re also better off. And then the other thing is sometimes you have an option that isn’t quittable, but you can still quit it because you can negate it. So that would be, like — let’s say that I’m invested in a stock, and it’s totally illiquid — I have no ability to sell it. If I could find a stock that’s perfectly negatively correlated with the stock that I own, and I buy that in an equal amount, I’ve now solved my problem. So, I’ve quit it even though it wasn’t liquid. That’s just hedging. So, if you can find something that’s, kind of, negatively correlated with the first thing, then you can actually go faster. So that, you have to think about in advance, right? This thing is pretty illiquid. It’s gonna be hard for me to exit. Is there something where, if new information reveals itself, I can, kind of, just negate that decision? And if you can do that, then you can also go faster.
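A minimal worked example of the hedging idea, under the stated assumption of a perfectly negatively correlated asset held in an equal amount: every move in the illiquid position is offset, so the net exposure is zero. The position sizes and moves are hypothetical.

```python
# Hypothetical numbers: a position you cannot sell, hedged with an equal-sized
# stake in an asset assumed to be perfectly negatively correlated (correlation -1).
position_value = 10_000
hedge_value = 10_000

for market_move in (-0.20, -0.05, 0.10):        # hypothetical percentage moves
    pnl_position = position_value * market_move
    pnl_hedge = hedge_value * -market_move      # moves the opposite way by assumption
    net = pnl_position + pnl_hedge
    print(f"move {market_move:+.0%}: position {pnl_position:+.0f}, "
          f"hedge {pnl_hedge:+.0f}, net {net:+.0f}")
```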

Decision-stacking

Annie: So, now that we’ve, sort of, understood, like — there’s the impact of the decision, and then we have this optionality thing — like, can you quit, can you hedge? We can now get to this idea of decision stacking. Which helps us when we have to make this big bet — is to say, what are the things that I can do before — that are going to help me to gather information? So that when I do have to make that big bet that’s going to be hard to reverse, my model of the world is going to be better. So, how can I start to use this idea of making some little low-impact decisions, just to kind of see what’s going on — to do some things in parallel? I can blunt it in order to start building better models of the world, so that when I do actually put this out into the world, then I know something more about the market.

So, when you know that you’re going to have one of those on the horizon — I mean, they normally don’t just, like, hit you by surprise. So like, “Oh crap, I’ve got this decision to make!” It’s just really good to try to stack these other types of decisions in front of it. Because when you do actually have to make that big decision — when you are actually trying to figure out, like, what the user uptake of something is going to be or, you know, whatever — what people are willing to pay for something — your model is just going to be so much stronger for having thought about what are the things that I could do in front of that really big decision.

Jeff: De-risking. You know, trying to get all these little nuggets of directional information to give you higher confidence in the really big decisions.

Annie: Yeah. And you can even apply this in, like, all sorts of different places. But, you know, the classic thing is dating before you marry. One of the things that I find is that when people aren’t, like, 90% sure that it’s the right path, that they’re pretty reticent to actually execute on it. But, you know, we have to make lots of decisions where we’re 60%. And by the way, when we estimate ourselves to be 60% on something, we’re overestimating that — because we’re just deciding under uncertainty. It’s just, kind of, how it is. We don’t have a lot of information. So, once you have an option that appears to be significantly better than the other ones, you just have to do a final step — which is to say to yourself, “Is there some information that I could find out that would cause me to flip this option in relation to the other options that I have under consideration?” And now it just becomes really simple. If the answer is yes, you can just say, “Can I afford to go get it?” You might not be able to because of time or money. And if the answer is yes, I can afford to go get it, go get it. If the answer is no, look — this is the state that we’re always making decisions under. I don’t have a time machine. My decision-making would be much better if I had a time machine. Sadly, I have none.

Jeff: That’s the next book, the time machine.

Annie: The time machine, right. I know, right, exactly.

Jeff: This has been a fascinating session. Thank you for spending the time with us on the “a16z Podcast,” to paraphrase Sonal.

Annie: I am so grateful to have gotten to come on and to get to discuss this stuff with you. I had so much fun.

Jeff: I’ve been looking forward to this conversation for quite a while.

Annie: No. I’m so excited, because I did get delayed a little bit due to a small misprint.

Jeff: That wasn’t a small misprint. That was a big misprint. And now, I have an eBay collector’s item, which I’m the perfect person to know how to monetize.

Annie: Yeah, right. So, for people who don’t know — books get printed in, sort of, 20-page sections that get bound together. And, really, a lot to do with COVID — one section got printed twice, and one was totally missing.

Jeff: I was just questioning my mental faculties while reading, because I was…

Annie: But don’t worry. It’s been repaired.

Jeff: Excellent.

Annie: October 13th, when the book is out, you will get an appropriate copy.

Jeff: That will be awesome. I can’t wait.

Looking for more episodes?
Find them wherever you listen to podcasts.