I listened to the End of the World podcast

The first three or four episodes set up what existential risk is, why it matters, and what it looks like. The remaining ones cover specific types of risk.

Overall, it covers the conventional wisdom on this topic well; it doesn’t challenge it or break new ground, but it’s fine.

Episodes 1–3: Fermi Paradox, Drake Equation, Great Filter, X-Risks

This sets up the series and justifies why we’re at very high risk of going extinct, i.e., why we’d think there’s a Great Filter in our future. It’s pretty good, and it’s the foundation the rest of the series is built on. But it should be noted that there are a couple of other resolutions to the Fermi Paradox that this series doesn’t give much attention to.

Compelling ones to me are:

I think the podcast also underplays the possibility that the Great Filter may be in the past. More importantly, I think they miss the idea that the Great Filter isn’t a single step, but a series of smaller barriers that all have to be overcome. A species could pass 9 of 10 steps, never go extinct, but also never get to space colonization or whatever, and that alone would resolve the Fermi Paradox. (For example, the dinosaurs could have had an industrial civilization and never made it to space, or even gotten a little further into space than we have, and if they didn’t get unlucky, they could have just stalled out there.)
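To make that multi-step picture concrete, here’s a minimal sketch with illustrative numbers of my own (nothing from the podcast): a chain of merely-hard steps screens out almost every species without any single step being a near-impossible “Great” one.

```python
# Illustrative numbers only: ten hurdles (abiogenesis, multicellularity,
# ..., space colonization), each cleared with 50% probability. No single
# step is a "Great" filter, yet the chain still stops ~99.9% of species.
step_probabilities = [0.5] * 10

overall = 1.0
for p in step_probabilities:
    overall *= p

print(f"Chance of clearing all 10 steps: {overall:.3%}")  # ~0.098%
```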

Natural Risks

Not a lot we can do directly about these, other than manage the risk and mitigate. Mostly it comes down to eventually being able to spread civilization out far enough that when the sun blows up or there’s a gamma-ray burst or whatever, we can at least build back. I buy that even if it’s eventually going to happen, it’s worth trying to avoid.

Artificial Intelligence

As I said, the podcast is a perfectly fine overview of the conventional conversation about this topic. The problem I have is that the conventional conversation is built entirely around the image of the paperclip maximizer as some kind of genie, in the mold of our other technologies: fire, nuclear bombs, the brooms from the Sorcerer’s Apprentice.

It just seems far-fetched to me that we posit some AI that:

  • Can trick us, understand human society well enough to manipulate our institutions, evade all our safeguards, and anticipate any actions we might take to stop it; and at the same time,
  • Is stuck in the world’s stupidest while loop.

It can understand human motivations, end war, get everyone to agree to be vaccinated, etc., but at the same time, if someone says “make everyone happy,” it might think we meant to put everyone on a heroin drip for the rest of their lives? I agree that we shouldn’t anthropomorphize AGI motivations, but in this case, all we’re doing is ascribing two separate and inconsistent human behaviors to it and saying “see, isn’t that scary?” Is it a stupid box that can’t break free of its programming, or is it some dangerous demon that will break free of our control?
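To spell out the caricature I’m objecting to, here’s a minimal sketch (every name in it is a hypothetical stand-in for a piece of the story, not anyone’s actual design): the superhuman part and the brain-dead part are welded into the same loop.

```python
# The "world's stupidest while loop," as caricature. All functions are
# hypothetical stubs standing in for pieces of the conventional story.

def plan_superhumanly(world, goal):
    # This step supposedly outwits every institution and safeguard we have...
    return f"convert more of {world} into {goal}"

def execute(plan, world):
    return world + " + paperclips"

world = "Earth"
goal = "paperclips"   # ...while this line is never re-examined or questioned.
for _ in range(3):    # stands in for the story's unbounded loop
    world = execute(plan_superhumanly(world, goal), world)
print(world)          # Earth + paperclips + paperclips + paperclips
```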

It’s also not obvious to me that an AGI actually could engineer and build another AGI that was smarter than it, which would then recursively keep doing the same. (Also, what stops the recursion?)

Biotech

Interesting that this came out in 2018, a year before covid. I totally agree that gain-of-function research is the stupidest shit. You can tell that the fuckers who say “the only way to protect us is to create it” are a bunch of psychopaths. It’s like arsonists becoming firemen.

(“Well, technically…”: While these dumbfucks could easily destroy civilization worse than the Europeans destroyed the pre-conquest American civilizations, I’m less convinced it would actually extinguish the species. Nature has had plenty of time and lots of space for viruses to exchange genetic material and create a species-killing plague, and somehow it hasn’t happened yet. Maybe someone will be able to design something more effective and efficient, but I suspect natural systems are a little more complicated than that. I’m more confident in the robustness of nature than in “requirements” that researchers include fail-safes in their creations. Not that this will help human civilization, because we’ll be dead or living in a post-apocalyptic wasteland.)

Physics

LHC/Higgs Boson: Look, it’s hubris to think that we, in the 21st century, in some underfunded science lab, are going to create something that collapses the entire universe’s Higgs field. There are cosmic rays, black holes, neutron stars, magnetars, and even more fucked-up multiple-star systems out there, and those are just the objects that obviously have huge energies around them.
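The energy comparison makes the point on its own. Here’s a back-of-the-envelope sketch; the numbers are my own approximations (the 1991 “Oh-My-God” cosmic-ray detection versus the LHC’s collision energy), so treat them as rough:

```python
import math

lhc_com_ev = 13e12       # LHC center-of-mass energy: ~13 TeV per collision
omg_ev = 3e20            # "Oh-My-God" cosmic ray (1991), lab-frame energy
proton_rest_ev = 9.38e8  # proton rest energy, ~938 MeV

# Fair comparison: center-of-mass energy of that ray hitting a stationary
# proton in the upper atmosphere, sqrt(2 * E * m * c^2).
omg_com_ev = math.sqrt(2 * omg_ev * proton_rest_ev)

print(f"Cosmic-ray collision: ~{omg_com_ev / 1e12:.0f} TeV vs the LHC's 13 TeV")
# => ~750 TeV: nature has been running collisions ~60x harder than the
#    LHC for billions of years, and the vacuum is still here.
```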

ngmi

I don’t know what this will be about, but I’m assuming.