Ars on your lunch break: Let’s talk about the extinction of humanity

Welcome back to Ars on your Lunch Break! It’s been a while since we’ve done this, so I’ll start with a brief orientation. This series is built around the After On Podcast—which itself is a series of deep-dive interviews with thinkers, founders, and (above all) scientists.

Often exceeding 90 minutes, After On episodes run longer than the average busy Ars reader’s lunch break. So we carve these unhurried conversations into three to four 30-ish minute segments, and run ‘em here around lunch, Ars Daylight Time. You can access today’s segment via our embedded audio player, or by reading the accompanying transcript (both of which are below).

We’ve presented two seasons of these episodes so far and are planning a third one in the fall. As for this week’s run, it’s sort of a summer special. The impetus is a talk I gave at April’s annual TED conference, which TED will debut on their site’s front page tomorrow. I was asked to speak as a direct result of a two-part podcast interview I ran in late March. Some quick cocktail napkin math may tell you this gave me about 10 days to prepare my talk.

It’s not every day that I give a main-stage TED talk (it’s once every 2,603 days, based on precisely two data points). So we’re marking it with a one-off serialization of the After On interview that triggered it. Segments will run daily through Thursday, and tomorrow we’ll also embed a video of the TED talk (and the next extended series of these interviews will come to Ars in the fall).

Let’s talk about death, baby

This week’s guest is Naval Ravikant. Ravikant is a renowned angel investor and entrepreneur who conjoined these callings by founding AngelList in 2010. AngelList is now a fundraising juggernaut, and almost 30% of significant US tech startups have raised at least some money through the platform in recent years.

But our topic this week is something quite a bit darker than entrepreneurial finance. Specifically, it’s existential risk. This refers to a set of dangers which might, in a worst-case scenario, imperil humanity’s very existence. Many of these dangers could be enabled by near-term scientific and technological developments. Naval and I will particularly focus on risks connected to synthetic biology. This is ironic, because as regular After On listeners know, I’m a hopeless synbio fanboy. Naval and I will also touch more briefly on certain risks connected to superintelligence research.

Unusually for my show, this is more of a conversation than an interview—because neither of us is what you’d call an existential risk professional. Rather, we’ve both thought, read, and spoken a lot about the subject over the years. If you enjoy this piece, please consider browsing my full archive of about fifty episodes on my website, or by searching for “After On” in your favorite podcasting app. Frequent topics include robotics, neuroscience, synthetic biology, genomics, astrophysics, and a lot more.

This special edition of the Ars Technicast podcast can be accessed in the following places:

iTunes: (Might take several hours after publication to appear.)




