We Are Doomed
Far Future Doom
Obviously, humanity will at some future time be extinct. That goes without saying. That’s almost a metaphysical truth; nothing (of the relevant kinds) lasts forever.
There is a fascinating Wikipedia article about the far future, https://en.wikipedia.org/wiki/Timeline_of_the_far_future, which includes (among other things) many events that could extinguish life on Earth. The Sun will leave the main sequence (running out of hydrogen) within about 5 billion years. It will probably engulf the Earth within 8 billion years. Long before that, though, multiple other disastrous things are expected to happen. One item says that within only 600 million years, all plants that use C3 photosynthesis (99% of all plant species) will die. Another item says that the rest of the plants will probably die within 800 million years.
I don’t think any people are going to live to see any of that happen, though. I think we’ll die of stupidity long before that. (Life will probably still continue without us, though. E.g., the bacteria will have hundreds of millions of years to flourish without us.)
A Story of Early Doom
Hypothetical: Suppose we learned that a large asteroid was on a collision course with Earth. To best illustrate my point, let’s make up some more details. Suppose that somehow, we know 30 years in advance that the asteroid is coming. Scientists are unsure whether the asteroid will actually hit us. Some say it will probably hit us; others say that it will almost certainly miss. The median estimate, let’s say, is a 5% chance of hitting the Earth. All agree that the impact would be disastrous, though they disagree on exactly how disastrous it would be.
Suppose, further, that engineers have devised various plans for averting the collision, each of which would require at least several years to implement, and would cost billions of dollars. There is disagreement on exactly how effective each plan is, how much each would cost, and how long each would take to complete. Every expert agrees, though, that at least some plan (if not multiple plans) should be attempted. Finally, to make my point clear, assume that the asteroid will in fact hit the Earth if nothing is done (though the scientists in the scenario are not yet certain of this), but that some of the plans people have devised would in fact work, if adopted in a timely manner.
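To see why every expert in the story would favor acting, a back-of-the-envelope expected-value calculation helps. All figures below are hypothetical, invented purely for illustration; the story itself specifies only the 5% median impact estimate and a cost in the billions.

```python
# A minimal sketch of the cost-benefit arithmetic in the asteroid scenario.
# Every number here is a made-up assumption, not a figure from the story.

p_impact = 0.05          # median expert estimate of impact probability (from the story)
damage = 100e12          # assumed damage if it hits: $100 trillion (hypothetical)
mitigation_cost = 50e9   # assumed cost of a deflection program: $50 billion (hypothetical)
p_success = 0.9          # assumed chance the program works if attempted (hypothetical)

# Expected loss if we do nothing vs. if we fund the program.
expected_loss_do_nothing = p_impact * damage
expected_loss_act = mitigation_cost + p_impact * (1 - p_success) * damage

print(f"Do nothing: ${expected_loss_do_nothing / 1e12:.2f} trillion expected loss")
print(f"Act:        ${expected_loss_act / 1e12:.2f} trillion expected loss")
```

Under these invented numbers, doing nothing carries a $5 trillion expected loss against $0.55 trillion for acting, so the program pays for itself roughly a hundred times over even at only a 5% impact probability. The qualitative conclusion survives large changes to the assumed figures, which is the point of the story: the disagreement that blocks action is not really about the arithmetic.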
Question: Would we avoid the asteroid impact?
If you had asked me this hypothetical question 20 years ago, I would have taken for granted that humanity would, one way or another, come together to stop the threat. The last several years, however, have shown that human beings are a good deal stupider and all-around crappier than I previously comprehended. So today, I think there is a pretty high chance that some of the following would happen:
(a) Some political party takes up the cause to avoid the asteroid impact. The opposing political party or parties then immediately decide that “their side” must be pro-asteroid (or anti-asteroid-avoidance). The latter party uses their political power to stall asteroid avoidance plans. Members of the pro-asteroid party who cross the aisle and try to cooperate on asteroid-avoidance get labeled traitors by their party, whereupon they face primary challengers and are kicked out of office.
(b) Asteroid skeptics point to uncertainties in the science, arguing that we have no solid evidence that the asteroid is actually going to hit the Earth. They tout the most optimistic arguments about the asteroid, and magnify all uncertainties in the case for global disaster. They also point to common cost overruns in government programs and argue that we shouldn’t commit to spending unknown billions of dollars to avert a threat that almost certainly isn’t even going to kill anyone.
(c) Different groups of humans can’t agree on who should pay for asteroid avoidance. The Americans want China to pay more; China wants America to pay more. Both are angry at the Russians for refusing to pay anything, and nobody wants to be a sucker and let other nations free ride.
(d) The average human, having never witnessed an asteroid impact, does not intuitively believe that such things happen, and he refuses to believe the “arrogant,” egghead scientists. Websites run by trolls and opportunists spring up with conspiracy theories about how the mainstream scientists are all lying and/or incompetent. Some say that you can just look up in the sky and see that there are no asteroids. They argue that there are no large asteroid impacts reported in all of human history, and thus this one is probably a hoax. They try to associate the asteroid theory with particular “identity groups,” and people who don’t belong to those groups then instinctively reject asteroid avoidance. These trolls make money because their unhinged claims attract clicks and hence generate revenue.
(e) More balanced news sites give equal attention to the orthodox position and the skeptical position, represented by the three scientists in the world who think the asteroid isn’t a serious threat.
(f) The U.S. President (who knows that he personally will not be around in 30 years) declares that the asteroid is “fake news” and a very dishonest hoax invented by the biased media and/or greedy astronomers trying to shake down the government for more money for their field. He tweets that if anyone just looks at the telescope images, they can see that the asteroid is a hoax. Millions of his followers retweet these comments, without in fact looking at the telescope images. A few others look at the images and find themselves unable to verify that the asteroid is really on a collision course, whereupon they conclude that the mainstream scientists are wrong.
(g) As it becomes clear that nothing is being done about the asteroid, scientists become increasingly active in trying to convince the masses. They try a variety of approaches. Some try sober, well-reasoned analysis. These scientists, however, are ignored because they are boring; also, skeptics take the scientists’ calm demeanor as proof that the scientists must be lying about the seriousness of the threat. Other scientists make increasingly alarmed and emotional appeals. The latter scientists, however, are dismissed by skeptics as being too emotional and obviously partisan.
(h) Nearly every person working in government knows that the asteroid threat is real, but many of them worry that they’ll be voted out of office if they try to do anything about it, because the asteroid issue is unpopular among the masses. They reason that it’s not worth losing their jobs for a small chance of saving humanity; also, they correctly reason that any given one of them can’t actually make a difference, if they don’t have the rest of their party behind them. Therefore, around half of the political leaders vote to do nothing. Or they vote for a “compromise plan” that takes only very weak, unlikely-to-succeed measures against the asteroid.
(i) When a member of party X complains about the asteroid threat and our failure to do anything about it, members of party Y ignore the issue and immediately start babbling comments like, “What about all the issues that your party hasn’t done anything about? What about the crimes committed by such-and-such politician? What about the threat of nuclear war, or biological weapons, or terrorism, or cancer? Cancer has killed a lot more people in history than asteroids!” This succeeds in derailing the conversation and preventing people from party Y from thinking about the asteroid for more than a few seconds at a time.
(j) The above goes on for 29 years. In the last year, scientists come to 100% agreement that the asteroid is in fact going to hit the Earth within a year and kill everyone. They also agree that it is too late to do anything about it. About half of all people refuse to accept this, up until the day the asteroid hits, killing billions of people and triggering Earth’s largest mass extinction.
Dying of Stupidity
That’s an example of what I mean by “dying of stupidity”. Specifically, I have in mind a scenario in which:
(1) a threat is identified by experts well in advance,
(2) it is agreed among experts to be serious (though there need not be agreement on exactly how serious),
(3) there are technically feasible plans known to experts that would stop the threat,
(4) the cost of trying to avert the problem is easily worth it and well-known, among the experts, to be so, and yet
(5) the threat is not stopped.
Any minimally smart species would in fact avert any such threat. From what I’ve observed of human beings, however, we are not such a species. There is thus a good chance that we will die of stupidity in the above sense.
Existential Threats
The above story is just an example. I don’t think we are actually going to die of an asteroid impact. We’ll probably have died of something else long before the next big asteroid hits.
In some ways, the asteroid scenario is actually a poor choice to illustrate my point. The asteroid threat is already on some people’s radar screen (pun intended), and some people are already looking out for asteroids. The scenario of a huge asteroid impact is simple, sensational, easy to understand, and relatively far from the main hot-button ideological issues. It’s also not all that expensive to avert. So there is a pretty good chance that we will develop an adequate asteroid defense before one becomes needed, assuming that nothing else kills us first.
The real thing to worry about would be a threat that is complicated or subtle, so that you need expertise to even understand how there is a threat; one that works over a long period of time and with no well-defined ending point; one that touches on hot-button ideological issues; or one whose probability and time of occurrence are extremely difficult to estimate. Those are the kinds of threats that we’re not going to address until it is too late.
By the way, in case you think my story is a metaphor for global warming, it isn’t. My story is meant as an example of one possible existential threat among many, most of which we probably cannot now anticipate. Global warming is not actually an existential threat – although the way people have responded to it should give us apprehension about how we would respond to a genuinely existential threat.
Take, for example, the projected end of C3 photosynthesis in 600 million years (about which I have almost no knowledge, as I am no biologist). To believe that this event is going to happen, one has to have a certain degree of intelligence, plus either expertise in biology or significant trust in experts – none of which the average American presently has. Also, it’s plausible that averting this event (if it could be averted at all) would require planning and very large-scale action, very long in advance. There would not, however, be any specific time – no particular year, or even any particular century – at which one could say that the plan had to be started. So it’s likely that there would be no particular point at which that issue would rise to the top of the political agenda. There would be no election year in which it would be politically advantageous to campaign on a promise to save C3 plants from extinction millions of years in the future.
Species Suicide
I don’t know what the most likely existential threat is. But here is one kind of scenario that I think is particularly likely: humanity is wiped out by one or more human beings working deliberately for that goal.
Aside: Some people worry that out-of-control AI may kill us. But I think we should worry more about out-of-control humans. We already have those. They already possess intelligence, unpredictable motives, and often insane, evil beliefs. Computers are way more predictable and controllable.
In the future, humans are going to have access to more and more powerful technology (including increasingly sophisticated computers, for any technical reasoning that is needed to carry out their plans). So it will become easier and easier to cause a large amount of harm. Now, you might say that advanced technology can also be used for defense, to protect against out-of-control people. That is true; however, it is almost a law of nature that it is easier to destroy than it is to protect or create things of value.
For example, at some not-too-distant future time, it might be technically feasible for an individual person to genetically engineer a virus capable of wiping out humanity. We might develop technology whereby a moderately intelligent person could produce stuff like that – perhaps with computer assistance, this person would not even have to be particularly expert in biology or medicine. If that technology appears, we are doomed. Someone is going to do it. Once the virus is released, it might be difficult or impossible to stop it.
Again, that is just an example. We will probably develop other technologies, thus far undreamt of, that will make it easy for individuals or small organizations to cause enormous amounts of harm. Most likely, these technologies would not be originally designed to cause harm. It’s just that powerful technologies that can do extremely valuable things will also generally let you do extremely bad stuff, if you have the opposite motives. Since we haven’t figured out how to control insane humans, and since a good number of us are crazy, we are very likely going to kill ourselves long before nature destroys us.
I don't have a particular solution for this. I think we have to hope that humanity becomes less stupid and evil over the next few centuries, before whatever unknown threat appears that's going to require coordinated action to prevent our extinction. Of course, since we are so awful now, most of us don't give a crap whether that happens or not.