A moon found inside Earth, AI regulation, recess isn't real play, TIP's artist speaks
Desiderata #17: links and commentary
1/10. Since the last Desiderata, The Intrinsic Perspective published:
Watching people die is making us more like ancient Rome. The rise of the "snuff clip" genre
(🔒) My guide to writing an essay that doesn't suck. Muay Thai but for words
The Intrinsic Perspective's subscriber writing: Part 2. Sampling the state of the blogosphere (again)
The egregore passes you by. Is social media making us into a group mind?
2/10. Long ago, a theory goes, another planet, Theia, crashed into our own. This all occurred almost a billion years before the first life. In Greek mythology Theia was a titan, one of the elder creatures formed before there were gods, and it was her child, Selene, who herself became the goddess of the moon. Similarly—so the theory goes—Theia the planetoid impacted Earth and in a titanic event chunks split off, spinning away to eventually form the moon.
In a new paper out this week in Nature, scientists are making the argument that pieces of Theia still remain buried deep within the Earth’s mantle. According to the publication’s overview of the study:
For decades, scientists have been baffled by two large, mysterious blobs in Earth’s mantle. These rock formations are thousands of kilometres long and slightly denser than their surroundings, hinting that they are made of different material than the rest of the mantle.
. . . The modelling suggests that this violent encounter caused material from the impacting world, called Theia, to embed itself in the lower half of Earth’s mantle. The collision also caused some of Theia’s remnants to be flung into orbit; these eventually coalesced into the Moon. . .
The researchers ran computer simulations of the interaction between Theia’s mantle and that of Earth from the moment of the collision to the present. This showed that some material from Theia initially sank to the bottom portion of Earth’s mantle and that more of Theia piled up there over time, forming the blobs.
So then Neil Armstrong wasn’t the first person to walk on the moon. We’ve been walking on the moon this whole time.
3/10. The scientific career generally goes PhD → postdoctoral researcher → professor, or at least, that’s how it’s supposed to go. Unfortunately, that transition from “postdoc” to professor is becoming harder and more exclusive. As a result, there is growing discontent in the science career pipeline, especially among those skipping from postdoctoral position to postdoctoral position with no end in sight. Nature released an interesting survey of postdocs, and I noted that only 50% of postdocs over the age of 30 would even recommend a career in science to their young counterparts.
I think the ultimate problem is that long hours and low pay are butting up against the fact that most people are starting families in their 30s, and yet postdocs are still expected to pack up and move cities to the next opportunity. Much like everywhere else in life, the increasing competitiveness brought about by large-scale education, along with declining demographics (colleges aren’t constantly growing anymore to deal with more students), means that becoming a successful scientist is now significantly harder than it used to be.
4/10. In more hopeful news, Alexander Naughton, the artist for TIP who creates all the wonderful lead images for each post (he does this based on early drafts he reads), has started his own Substack: Illustrated. In his words he’ll be doing a mix of short comic strips, illustration updates, breakdowns of how pieces are created, and also sharing his experience of the illustration industry, the life of an artist, even being a parent. Alex has such an insightful and flexible aesthetic style—he somehow manages to express an elder wisdom through images innocently childlike and fantastical. It’s well worth checking out if you’re interested.
Here’s from his breakdown of how he did the recent art for “The egregore passes you by:”
The first thing I need to do for any illustration is read the essay. . . . This is of course a much more pleasurable way to work. I must force myself away from my desk into a comfy chair with a cup of coffee and a fresh draft of an interesting essay. What a life. I don’t draw anything while I’m reading the essay. Ideas pop into my head while I’m reading but I wait until the end to do anything. I digest the whole thing before getting to work. This is of course slightly different to an illustration I would do for a national magazine or newspaper. Deadlines are tight for those jobs so it’s a quick skim of the copy then straight to sketch ideas to be signed off by the art director; which I enjoy, it has its own appeal. It can be exciting coming up with ideas on the hoof as it were. But TIP isn’t like this. Illustrations for TIP are slower. More like a creative response than a professional transaction.
5/10. One of my worst fears was that the many voices shouting concerns over the rapid progress in AI—including my own—would be ignored at a political level. If that had happened I personally would be taking much more drastic actions, and you’d eventually stumble upon some embarrassing viral video of me throwing soup on OpenAI’s doors. However, the response to AI safety is looking, at least at a structural level, a lot like the response to climate change, if only in that those in the halls of power are clearly convinced there’s the possibility of danger, and are more than willing to do the one thing everyone acknowledges the government is actually good at: passing burdensome regulations.
As proven by Biden’s executive order on Monday, which enacted sweeping regulation of the entire AI industry, focusing on future powerful models created by tech companies. Here’s from an overview of the order at LessWrong:
If you train a model with 10^26 flops, you must report that you are doing that, and what safety precautions you are taking, but can do what you want.
If you have a data center capable of 10^20 integer operations per second, you must report that, but can do what you want with it.
These numbers basically mean that, if you are training something equivalent to what we expect GPT-5 to be like, you need to report and register your safety precautions with the US government. If you are a fan of AI safety, I think this is a clear win, in that it is basically establishing a regulatory apparatus that can do a lot more as the situation develops.
What are some things that might end up being regulatory requirements in the future, if we go in the directions these reports are likely to lead?
Safety measures for training and deploying sufficiently large models.
Restrictions on foreign access to compute or advanced models.
Watermarks for AI outputs.
Privacy enhancing technologies across the board.
Protections against unwanted discrimination.
Job protections of some sort, perhaps, although it is unclear how or what.
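To get a feel for the scale of that 10^26-FLOP reporting threshold quoted above, here is a minimal back-of-the-envelope sketch. It leans on the common community heuristic that training a dense transformer costs roughly 6 FLOPs per parameter per training token; that rule of thumb, and the model sizes in the example, are my own illustrative assumptions, not figures from the executive order or the LessWrong overview.

```python
# Back-of-the-envelope check against the reporting threshold quoted above.
# The "6 * params * tokens" rule is a rough community heuristic for dense
# transformer training compute; the model sizes below are made up for scale.

REPORTING_THRESHOLD_FLOPS = 1e26  # 10^26 floating-point operations

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Heuristic: ~6 FLOPs per parameter per training token (forward + backward)."""
    return 6 * n_params * n_tokens

# Purely hypothetical training runs, chosen only to illustrate the threshold.
runs = {
    "70B params on 2T tokens": estimated_training_flops(70e9, 2e12),   # ~8.4e23
    "1T params on 20T tokens": estimated_training_flops(1e12, 20e12),  # ~1.2e26
}

for name, flops in runs.items():
    status = "over the reporting threshold" if flops >= REPORTING_THRESHOLD_FLOPS else "under it"
    print(f"{name}: ~{flops:.1e} FLOPs ({status})")
```

Under that heuristic, today's mid-sized models sit comfortably below the line, and only genuinely frontier-scale runs cross it, which lines up with the quoted point that the threshold is aimed at roughly GPT-5-class training.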
And yes, this sort of success owes a lot to a dedicated group of forward-thinking individuals (often posting at sites like LessWrong), but I think authors and writers deserve a bunch of credit too. After all, the idea that “it’s dangerous to build robot slaves smarter than you” is pretty sensible and has had its cultural way paved by fiction, from Terminator onward.
Indeed, President Biden apparently watched Mission: Impossible - Dead Reckoning Part One, which is (I haven’t seen it) basically Tom Cruise vs. an escaped evil AI, and this viewing, according to the actual White House, made him especially concerned about AI safety. Laugh all you want (for it is pretty funny that this is how decisions get made in the halls of power), but in another, deeper sense it’s all just the result of the cultural road being paved by forward-thinking authors who made these sorts of plots and ideas so common they’re almost kitsch.