Siblings aren't always 50% related, Congress on AI, education in 1912, dog turns 31
Desiderata #12: links and commentary
The Desiderata series is a monthly roundup of links and thoughts, as well as an open thread and ongoing AMA in the comments for paid subscribers (along with a few extra links).
1 of 11. Since the last Desiderata, The Intrinsic Perspective published:
Why is my novel a hit in Italy but not the United States? Is it luck? Culture? Or some secret third thing?
Elon Musk's Substack ban was needless drama. Where did the adults go? On the petulance of our public figures
(🔒) I owe my career to the SAT: Now 80% of colleges are making their decisions without it
(🔒) The Intrinsic Podcast #2: Traditional publishing is dying. Guest Elle Griffin researches writing online
Your IQ isn't 160. No one's is. Stratospheric IQs are like leprechauns, unicorns, or mermaids
2 of 11. Earlier this year there was a paper in Nature examining the degree of genetic variation between siblings, comparing it to that of random pairs drawn from the population. What they found was that:
Siblings are more similar to each other than randomly selected individuals in the population, but there is still significant variation within each family.
While it’s commonly said that you always share 50% of your DNA with each parent and your siblings, the 50% figure only holds exactly for parents; the amount of DNA shared between siblings actually varies substantially along a distribution. Here’s a graph from another paper, tracking the degree of genetic similarity between siblings:
While the average is 50%, some siblings might be only 38% genetically identical, while others are 62%! That means the most-similar sibling pairs share over 60% more DNA than the least-similar ones. Sound like anyone you know, on either side of the spectrum? Kind of explains a lot, huh?
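You can get a feel for where that spread comes from with a quick Monte Carlo sketch. This is an illustration, not the paper’s method: it models the genome as a set of independently segregating segments per parent (a crude stand-in for 22 chromosomes plus recombination), and the segment count of 90 is simply chosen so the simulated spread roughly matches the ~4% standard deviation reported empirically, not derived from real genetic map lengths.

```python
import random
import statistics

# Assumption: genome modeled as NUM_SEGMENTS independently segregating
# segments per parental meiosis. 90 is a hypothetical value tuned to
# roughly reproduce the observed ~4% SD in sibling sharing.
NUM_SEGMENTS = 90
NUM_PAIRS = 20_000

def sibling_sharing(rng: random.Random) -> float:
    """Fraction of segments where two siblings drew the same parental
    haplotype, averaged over both parents."""
    matches = 0
    for _parent in range(2):
        for _segment in range(NUM_SEGMENTS):
            # Each child inherits one of the parent's two haplotypes
            # at random; they match with probability 1/2.
            if rng.randint(0, 1) == rng.randint(0, 1):
                matches += 1
    return matches / (2 * NUM_SEGMENTS)

rng = random.Random(42)
shares = [sibling_sharing(rng) for _ in range(NUM_PAIRS)]
mean = statistics.fmean(shares)
sd = statistics.stdev(shares)
print(f"mean sharing: {mean:.3f}, sd: {sd:.3f}")
print(f"range: {min(shares):.2f} to {max(shares):.2f}")
```

Under these assumptions, runs land near a mean of 0.50 with a standard deviation around 0.04, so pairs near 38% or 62% sit in the far tails of the distribution, consistent with the graph above.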
3 of 11. Here’s a test from 1912 designed for 8th grade public schools in Kentucky.
While I think contemporary 8th graders might do okay on the above math, other parts of the test, like history, seem quite advanced by today’s standards for public schools in Kentucky, a state that ranks low to middle of the pack in national education ratings.
My guess is that the full test would be quite difficult for most modern 8th graders anywhere in America, including at most private schools. Make of that what you will.
4 of 11. Speaking of, I do think it’s becoming clear that early internet access and smartphone usage in children lead to poor mental health outcomes in young adulthood. In my opinion, smartphone usage in general has been a major contributor to our society and culture becoming more depressed, anxious, neurotic, and overall unpleasant. While proving causation is very difficult (think of the decades-long debate over cigarette smoking and lung cancer), research continues to support the hypothesis. E.g., Jonathan Haidt’s Substack has a breakdown of a new study showing that the earlier someone gets a smartphone, the worse their mental health as a young adult, a trend that particularly affects young women.
Since the most common reason kids get smartphones is that their friends have them, the whole thing is a tragedy of the commons: every family gets pushed toward the behavior of the parents who buy their kids smartphones earliest. If causation gets firmly established, I think it would be legitimate to start exploring smartphone bans at very young ages (e.g., perhaps until the age of 10 or 13) in order to avoid the “but all my friends have them” effect.
5 of 11. What would Earth, and its culture, look like to alien observers? In a new study, researchers found that this would depend on where their own star system is located:
From HD 95735—a red dwarf located 8.3 light-years away—meanwhile, the main contribution of these signals would come mainly from the east coast of China, followed by the west and east coasts of North America.
Finally, from Alpha Centauri A—located around 4.2 light-years away—the primary contribution would come mainly from the west of Asia and Central Europe, with East Africa and Australia also making significant contributions.
I didn’t know there was such a strong location effect—it’s pretty interesting that you might get a very different view of the planet and our culture depending on precisely where you were listening in from! I can think of a few fun sci-fi scenarios based on that.
6 of 11. We’re racing toward the publication of my new book, The World Behind the World—out July 25th! It’s many things, but above all it’s my scientific magnum opus. It covers how humanity has understood consciousness from the ancient Egyptians to William James, gives a tour of the problems and paradoxes of the modern neuroscience of consciousness, proposes a theory of why you have free will, and gives an argument for why science is necessarily incomplete. If you buy it, you’re getting a lot of book for your buck, I promise: it’s not too long, it has a bunch of original research and my own thinking, and I also worked hard on making it a solid beach read. It’ll be available at bookstores near you, but the easiest and best way to get it is to preorder it—all preorders count as sales for the first week, so it helps the book (and me) the most to preorder it now (at Amazon, and you can also find a local bookstore to preorder from).
7 of 11. On Tuesday there was the first congressional hearing about AI safety and regulation of AI companies.
The good news is that governmental action now looks increasingly likely: e.g., some AI regulatory agency will probably be established. The majority of attending senators came across as engaged and interested. In fact, it was those called to testify who were the most disappointing. While Sam Altman, the head of OpenAI (now effectively owned by Microsoft), did call for regulation, all the discussion of regulation and risk centered on things like deep fakes, AI’s threat to democracy, and AI’s threat to jobs. None of it centered on the risks of creating a new race of entities who will eventually, perhaps inevitably, be more intelligent than us, let alone on whether humanity has a high likelihood of being around a century after we do that (no, in my opinion, along with many others).
This lack was not because those called to testify were given no opportunity to talk about existential risk. In fact, Sam Altman avoided the topic when directly asked. He did exactly what I would expect of a CEO in an industry that has the potential to kill everyone: he lied to Congress (or at least, he lied by omission). The moment came when he was asked to specifically name his own personal “nightmare scenario” for the risks of AI. From the question, it was clear what he was being prompted to talk about: the risk to humanity as a whole. The senator even read to Sam a previous quote from Sam himself, that “superhuman machine intelligence is probably the greatest threat to humanity.” He didn’t read the full quote, which is from Sam’s old blog. Here’s the expanded quote:
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared. . . SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans. . . How can we survive the development of SMI? It may not be possible.
While the senator said his own worry was over job loss, and asked whether job loss was the threat to humanity that Sam meant (clearly not), the question was a prompt for Sam to discuss existential risk; at absolute minimum, it was an opportunity to clarify just what Sam believes. However, Sam decided (this is 38 minutes in) to dodge the question, answering in corpo-legalese only about the senator’s worst fear, job loss, rather than his own.
Gary Marcus, who was also testifying, pointed out that Sam’s original answer avoided the question. Sam then proceeded to dodge it again. Instead of being honest about the worst-case risk, he repeated his earlier reference to jobs and then spoke vaguely about “harms to the world,” adding that OpenAI was “clear-eyed about the downsides,” downsides he wouldn’t even bother to name in front of Congress, after being quoted his own blog about how AI was the greatest threat to humanity.
In fact, in his personal life Sam himself is a known prepper for the AI apocalypse. In a 2016 New Yorker interview, Sam explained that one of his main hobbies was prepping for survival scenarios, partly out of fear, in his own words, of “AIs that attack us.” He said:
I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.
But when Sam testified in front of Congress, there was no specific mention of the existential risk of AI, even when he was prompted twice with a quote from his own writing on the subject. No mention of guns or gold. No gas masks. No patch of land in Big Sur. As others have pointed out, this is Don’t Look Up levels of double talk: the CEO of the leading AI company believes the very technology he’s building is the greatest threat to humanity and could kill everyone (and personally keeps a billionaire-only survival ark for when that happens), yet he’s unwilling to say so when it counts, in front of Congress.
At the moment on the national stage, the moment when it mattered most, the moment in which Sam Altman could have said “AI is more like regulating nuclear weapons than regulating cars; there are existential risks in engineering entities smarter than us,” he simply didn’t, even when asked. Instead he did what we should all expect CEOs of leading AI companies to do, which is lie by omission about the long-term consequences of their work, consequences they themselves foresee. Because Sam Altman knows that if members of Congress knew what Sam Altman knows, they would red-tape his industry back to the stone age. And he likes his job. He really likes his job.