8 Comments
author

Added remarks of clarification:

1. This post is not saying that no disasters ever happen.

2. It also isn't saying that no one should ever try to avoid disasters.

3. It is saying that people greatly over-predict disasters, i.e., most predicted disasters do not occur. Notice how this is compatible with points 1 and 2.

author

As I mention here, we may be doomed: https://fakenous.substack.com/p/we-are-doomed

But whatever kills us will probably come as a surprise.


Without the counterfactual, it’s hard to know whether Y2K was a nothingburger, or whether there really were disastrous flaws in critical infrastructure and the alarm was raised in time for them to be found and eliminated before the deadline.


All agreed! The most serious one now, of course, is AI...

Comment deleted

"Fanaticism about world-ending disasters is likely much better than being calm from a moral perspective."

Fanaticism misallocates effort and resources and creates moral hazard. Thinking about expected value is reasonable for decision-making; hyperbole isn't. Fanaticism drove most of the humanitarian disasters of the 20th century. Nothing gets the killing rolling like fanatics.

The problem isn't that there are many potential world-ending disasters; it's that predictions of doom are constantly overplayed by those with some authority. The likelihood that the breathless warnings of "experts" and their platforms reflect actual existential risk and actionable expected value is very low.

Comment deleted

"Given the potential loss if everyone dies, treating even small risks very seriously is morally warranted." is very different than "Fanaticism about world-ending disasters is likely much better than being calm from a moral perspective."

I'm very much in favor of keeping (especially large numbers of) people from being killed. But since the ranges for the size and timing of existential threats are huge, what values should be plugged into the estimates? How do we sort priorities? Who gets to choose the inputs and the calculations used? If "we" choose, what's to keep the process from being politically captured? The typical answers to those questions are what make me very nervous about the risks and solutions being advocated.
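To make the input-sensitivity worry concrete, here is a toy sketch (all numbers are hypothetical, chosen only for illustration): two people applying the same expected-value formula with different, individually defensible-looking inputs reach answers that differ by six orders of magnitude.

```python
# Toy illustration of input sensitivity in existential-risk expected-value
# estimates. Every number below is hypothetical.

def expected_lives_saved(p_disaster, p_mitigation_works, lives_at_risk):
    """Expected lives saved by funding a mitigation effort."""
    return p_disaster * p_mitigation_works * lives_at_risk

# Same formula, different but plausible-sounding inputs:
print(expected_lives_saved(1e-2, 0.5, 8e9))   # 40,000,000 -> "urgent"
print(expected_lives_saved(1e-6, 0.01, 8e9))  # 80         -> "negligible"
```

Whoever picks the inputs effectively picks the conclusion.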

These questions align well with what Huemer wrote. If you extend doom scenarios from natural existential risks to reactions to societal existential risks, like the 20th-century pushes of (various brands of) socialism, ostensibly to stop poverty, inequality, overpopulation, or immorality, you can end up with tens (or hundreds) of millions of people dead. Given these recent examples and the common rhetoric, caution is warranted.

"...but we would only expect observers in worlds in which world-ending doom events did not come to actually occur." I think I understand what you mean here, but could you expand on this?


But you still have to prioritize.

I could take your survivorship-bias comment as an attempt simply to refute MH, as in: this time it's different, climate change, etc. But I could also read it from Taleb's perspective, which might suggest that rather than trying to avert specific disasters that are not well understood, we should try to ensure that society could recover from a broad range of disasters.


I like John Michael Greer’s solution to the Fermi Paradox (it’s sensible and doesn’t tend toward catastrophising).

https://www.resilience.org/stories/2007-09-19/solving-fermis-paradox/
