10 Comments
Author

A few clarifying remarks:

1. This post is not saying that no disasters ever happen.

2. It also isn't saying that no one should ever try to avoid disasters.

3. It is saying that people greatly over-predict disasters, i.e., most predicted disasters do not occur. Notice how this is compatible with points 1 and 2.

Author

As I mention here, we may be doomed: https://fakenous.substack.com/p/we-are-doomed

But whatever kills us will probably come as a surprise.


"Despite sky diving, drunk driving, playing Russian roulette, and abusing hard drugs, I have yet to die. Therefore, these risks are likely overstated and I can be calm." In this scenario, it is clear that someone is suffering from survivorship bias. A person who has died is not around to talk about deadly risks they've averted.

On the global scale, we have no other reference points. In fact, the absence of other alien civilizations might be a cause for concern, pointing toward a Great Filter we have perhaps not yet passed. If we imagine millions of worlds that ended in existential disaster, observers in each of them could have claimed, right up to the end, that "nothing has yet killed everyone on Earth!" I am not sure that this is a reason to remain calm.

Killer bees and Y2K might be rather silly, but there are legitimate global existential risks that we should take very seriously [1]. Even a small chance of eliminating all existing life carries a tremendously large cost in expected welfare. Fanaticism about world-ending disasters is likely much better than being calm from a moral perspective.

[1] https://nickbostrom.com/existential/risks
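
To make the stakes concrete, here is a toy expected-value comparison in Python. Every number in it is an assumption invented for illustration, not an estimate from this thread or from Bostrom's paper:

```python
# Toy expected-value comparison. All numbers below are made-up
# assumptions for illustration only.
p_extinction = 1e-4        # hypothetical chance of an extinction event this century
lives_at_stake = 8e9       # roughly the current world population (ignores future people)
p_mundane = 0.5            # hypothetical chance of a mundane disaster
lives_lost_mundane = 1e5   # hypothetical death toll of that mundane disaster

ev_extinction = p_extinction * lives_at_stake  # expected deaths from the extinction risk
ev_mundane = p_mundane * lives_lost_mundane    # expected deaths from the mundane risk

print(f"expected deaths, extinction risk: {ev_extinction:,.0f}")  # 800,000
print(f"expected deaths, mundane risk:    {ev_mundane:,.0f}")     # 50,000
```

Even at 1-in-10,000 odds, the extinction scenario dominates in expectation, and counting future generations (as Bostrom does) widens the gap further.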


"Fanaticism about world-ending disasters is likely much better than being calm from a moral perspective."

Fanaticism creates misallocation of effort/resources and moral hazard. Thinking about expected value is reasonable for decision making. Hyperbole isn't. Fanaticism is what drove most of the humanitarian disasters of the 20th century. Nothing gets the killing rolling like fanatics.

The problem isn't that there are many potential world-ending disasters; it's that predictions of doom are constantly overplayed by those with some authority. The likelihood that the breathless warnings of "experts" and their platforms reflect actual existential risk and actionable expected value is very low.


Given the potential loss if everyone dies, treating even small risks very seriously is morally warranted. This can be true without relying on hyperbole. Avoiding people getting killed is exactly what I am advocating for.

Predictions of world-ending doom may or may not be overplayed, but we would only expect to find observers in worlds in which world-ending doom did not actually occur. This is my point. It is a bias we should be conscious of.
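
A small simulation may make the selection effect concrete. This is a toy model with an assumed, purely illustrative extinction rate:

```python
import random

# Toy model: many worlds, each facing an assumed annual extinction
# probability p for some number of years. We then ask what history
# looks like to the observers who are still around afterwards.
p = 0.02            # hypothetical per-year extinction probability
years = 100
n_worlds = 100_000

survivors = sum(
    all(random.random() > p for _ in range(years))
    for _ in range(n_worlds)
)

# Every surviving world has, by construction, a track record of zero
# extinction events, no matter how large p is. "No predicted doom has
# ever happened to us" is guaranteed for anyone still alive to say it.
print(f"surviving worlds: {survivors} of {n_worlds}")  # roughly 13% here
print("extinction events in each survivor's history: 0 (by construction)")
```

The survivors' clean record tells them nothing about p on its own; that is the bias in question.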


"Given the potential loss if everyone dies, treating even small risks very seriously is morally warranted." is very different than "Fanaticism about world-ending disasters is likely much better than being calm from a moral perspective."

I'm very aligned with keeping (esp. large numbers of) people from being killed. Since the ranges for the size and timing of existential threats are huge, what values should be used in the expected-value estimates? How do we sort priorities? Who gets to choose the inputs and the calculations used? If "we" choose, what's to keep the process from being politically captured? Typical answers to those questions are what make me very nervous about the advocated risks and solutions.

These questions align well with what Huemer wrote. If you extend doom scenarios from natural existential risks to reactions to societally existential risks, like the 20th-century pushes of (various brands of) socialism, ostensibly to stop poverty, inequality, overpopulation, or immorality, you can end up with tens (or hundreds) of millions of people dead. Based on these recent examples and common rhetoric, caution is warranted.

"...but we would only expect observers in worlds in which world-ending doom events did not come to actually occur." I think I understand what you mean here, but could you expand on this?


But you still have to prioritize.

I could take your survivorship-bias comment as an attempt simply to refute MH, as in: this time it is different, climate change, etc. But I could also read it from the perspective of Taleb, which might suggest that rather than trying to avert specific disasters that are not well understood, we should try to ensure that society could recover from a broad range of disasters.


I like John Michael Greer’s solution to the Fermi Paradox (it’s sensible and doesn’t tend toward catastrophising).

https://www.resilience.org/stories/2007-09-19/solving-fermis-paradox/


Without the counterfactual, it’s hard to know whether Y2K was a nothingburger, or whether there were indeed some disastrous potential flaws in critical infrastructure and the alarm was raised in time for them to be found and fixed before the deadline.


All agreed! The most serious one now, of course, is AI...
