Team Risk Neutrality

Quality note: I'm trying to practice speed writing. This was done quickly and is a bit stream of consciousness -- to make it better I'd split it into sections. 

Lots of people in the effective altruism community, especially longtermists, feel like it's hard to contribute. They read arguments for doing effective altruism-style work, and they want to help. But a lot of the advice for how best to help is intimidating. E.g. at 80,000 Hours we widely recommend that people try their hand at pursuing one of our 'priority paths' -- a cluster of options that includes working in technical AI safety, in biorisk policy, and at a handful of organisations in the effective altruism community. But these are hard paths to follow. Standards are high, competition is stiff, and you probably need a bit of luck on your side.

And yet we still encourage people to try for these roles. Go for that PhD; apply for that job. Or go into government and see if you can gain influence over big budgets in order to direct them better. You might not -- a million things could get in your way in a government bureaucracy -- but if you do, you will be able to do a lot. It'd be a wild success. So it's worth trying for.

The argument from the effective altruism community's perspective -- considered as a team acting in a coordinated way -- is that even though each person reading the advice has a low chance of succeeding wildly, it's really hard to tell ahead of time who will eventually succeed wildly and who won't. So we want lots of people to try their hands at these difficult goals so that, as a group, we have the best chance of a few people really making a lot of progress.

From the individual's perspective, the argument is that these priority paths are often their biggest upside options. A path may be unlikely to work out, but if it does, the success might be so big that the option still looks great in expected value terms. If you're risk neutral about your personal positive impact, it makes sense to pursue it.

These two arguments of course dovetail. They are really two sides of the same coin. If everyone takes their highest expected value options, the expected value of the team's actions is highest. 
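To see why these two perspectives line up, here's a minimal sketch with illustrative numbers, under the simplifying assumption that individual impacts just add up: by linearity of expectation, the team's expected impact is the sum of each member's expected impact, so each person maximising their own expected impact also maximises the team's.

```latex
% Minimal sketch (illustrative numbers; assumes individual impacts simply add up).
% If member $i$ pursues an option with success probability $p_i$ and impact $v_i$
% on success, then by linearity of expectation the team's expected impact is
\[
  \mathbb{E}\Big[\sum_i X_i\Big] = \sum_i \mathbb{E}[X_i] = \sum_i p_i v_i ,
\]
% so each person maximising their own $p_i v_i$ also maximises the team total.
% For example, ten people each taking a 5\% shot at an outcome worth 100 units
% contribute $10 \times 0.05 \times 100 = 50$ units in expectation, even though
% most of them will individually come away with nothing.
```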

But this all means, of course, that a lot of people will try things and not succeed. And if people try for these difficult but high-reward paths and don't make it (which is likely!), they might feel like crap. They didn't have the impact they were going for. So, in some sense, they didn't do what they intended to do.

There's an obvious sense in which it's true that they didn't make an impact, at least if we suppose that by going for the upside option and failing they didn't make a counterfactual difference to how things unfold in the world.

But there's another sense in which this reaction is wrong. First of all, sensible consequentialists are ex ante consequentialists -- an action is right if it had the best expected consequences even if it didn't turn out to have the best consequences. So if they intended to do what was right, they succeeded by trying for the upside option, regardless of the results. 

Second of all, the action looks a lot better when we remember that we are not acting alone. As a team, we want our impact to be much greater than the impact of any individual. And like a sports team, our members should be able to be proud of, or sorry for, the collective actions we take and not just their own individual actions.

Think of a football team. Team members are asked to defend vulnerabilities that may never be shot at, just in case they are. If their team wins, they should be proud. They should go to the after party and have several pints. An army works the same way.

I think we can stand to think of the effective altruism community more in this way, as a team that succeeds and fails together.

One response is that it's easy for me to say that -- I have a job that many people think is high impact. I can say 'let's all act as a team, and it's OK if you don't personally make an impact, as long as you play a part in the collective action of taking a lot of shots at upside options.' After all, my own personal impact is secure.

The thing is, it's very much not. As a longtermist, I am also taking a bet on an upside option. We don't know how to positively affect the long-run future. We guess that it involves helping AI go well, trying to reduce the risk of extreme climate change, and otherwise trying to reduce existential risk -- and helping others enter these fields and succeed. We're taking shots. We hope a few go in. We probably won't even know if one ever does.

Everyone in the longtermist community is in this position, whether they're a wildly successful AI safety researcher or they're applying for roles they're not getting. None of us can feel secure that we're making a positive impact on future generations with our efforts. 

What is true is that we confer more status on people pursuing an upside option as part of a role that is seen as optimised for it, or as part of an official role at all. But we can fix that.

If you are following the strategy of "grow EA in case someone you recruit makes a huge positive difference", we should celebrate that. If you are pursuing the option of "try a lot of research programmes in pandemic preparedness in case one works," we should celebrate that. If you are pursuing the option of "try to rise up in governments in order to direct budgets more effectively," whether or not you succeed, we can and should celebrate that.

I know it's not that simple -- there's a deep-seated tendency to think of failure in a job application process as more of a failure than failing to make an impact as a celebrated safety researcher. And I know there are other issues, like money, location, and the opportunity to work with others on your team day-to-day. But I think we can make progress on this. I think we can be a lot more celebratory of people shooting for the stars even if they don't make it. That is a really awesome thing to do. It takes courage, it's the right thing to do because of its expected value, and it instantiates a team strategy that has a greater chance of paying off.
