Why Altman really got fired, Genghis Khan's descendants, marriage & politics, Starship launch
Desiderata #18: links and commentary
The Desiderata series is a regular roundup of links and thoughts, as well as an open thread and ongoing AMA in the comments for paid subscribers (along with extra links).
1/12. Since the last Desiderata, The Intrinsic Perspective published:
Nerd culture is murdering intellectuals. Nerdom is great... but at what cost?
(🔒) Why I keep turning down The New York Times. Reasons for writing independently.
Osama bin Laden's TikTok popularity is based on childish notions of evil. Bad people can argue their case too.
The risk of another consciousness winter. Yes, science does affect culture.
2/12. It’d be impossible not to mention the dramatic Succession-esque news that captured my entire X feed (it occurred shortly after the last Desiderata, so I haven’t touched on it yet): Sam Altman was abruptly fired for unclear reasons by the board of OpenAI. In turn, the rest of the company threatened to quit (perhaps out of great loyalty to Sam, perhaps because they all stand to become millionaires if the company is successful, perhaps out of game-theoretic logic that Sam was destined to win anyway, perhaps because they believe he is the best person to shepherd AGI into existence; it was likely a mix). Sam is back in, at least for now.
I won’t recap the entire brouhaha, but I want to point out that, despite everyone’s initial assumption that the board’s decision was a frantic reaction to some hidden revolutionary AI advancement, I am skeptical on that front, and there is a clear alternative answer. The board specifically said it had to do with his communication not being “candid.” Even the interim CEO Emmett Shear publicly said:
Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.
News organizations, of course, want to give the people what they want, so they have dug everywhere for a framing in which Sam Altman was vigorously and unsafely pushing ahead AI capabilities while the board, specifically its self-identified effective altruists, tried to stop him. The New York Times latched onto a research paper written by one member of the board, Helen Toner, which criticized aspects of OpenAI’s approach as not safety-conscious enough, and which she and Sam had apparently clashed over in the weeks before.
Reuters, in turn, reported that it was concern over a new AI approach called “Q*,” involving advances in mathematical reasoning, and that the firing had been secretly triggered by an internal letter written about it.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.
Who are their sources for this Q*? All we get is:
two people familiar with the matter told Reuters.
What counts as “familiar with the matter”? Who knows! Only they know. Because:
Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
One can read conspiracy into this, or one can read a game of telephone, where people are talking to Reuters with second- or third-hand information about some previous research. In fact, it could all just be referring to a blog post OpenAI published back in May, to little attention, about improving mathematical reasoning.
Sam Altman recently described the Q* rumors as an “unfortunate leak,” while managing to say nothing about why it was “unfortunate,” or whether it even referred to anything specific. Sam is well known for his ability to mysteriously hype OpenAI’s achievements while committing to no specific information about what they’re developing, so you could read that either as a direct confirmation of a massive breakthrough, or as a coy labeling of something as “unfortunate” to avoid admitting it’s ultimately a big nothingburger.
Finally, The New Yorker managed to present a clearer and simpler explanation: the board felt that Sam was manipulating them, lying to them, and playing them off each other, and so lost trust in him. Basically, the board felt they caught him in a lie (which fits exactly with what the board said regarding “communication”).
“He’d play them off each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like this had been happening for years.”
This is almost certainly the true explanation of the situation. The firing wasn’t about effective altruism; it was about what happens when an adult in a professional setting straight-up lies to you, and you corroborate that with others. When you catch someone red-handed, you are immediately put into an extreme position of totally distrusting them about everything. That sort of pressure would explain precisely the board’s abruptness, their curtness, and their inability to justify their own actions, since justifying themselves would have required essentially saying that “Sam Altman is a snake,” something they were willing to hint at but not say outright in public.
And again, maybe it was all just a misunderstanding! Or a series of misunderstandings dating back years. Maybe Sam Altman is innocent. Most of the time, in adult life, people learn not to lie. You lie when you’re a child because you think you can manipulate the situation; you lie because you think you’re so much smarter than everyone else. But some people never lose that; they just get better at it. If a group of professionals accuses another professional of straight-up lying and manipulation, and the stakes are this high, that’s very concerning.

And, just being objective: outside the leaders of nations, Sam Altman is now one of the most powerful people alive. Journalists don’t write critical articles about him (or at least nowhere near as critical as they could) because outlets are afraid of getting sued, or of losing access. He inspires vast loyalty, so much so that he is essentially unfireable, as we just learned. He has perhaps the best set of social and business connections in the world. Again, maybe the board was wrong about the alleged manipulation, maybe Sam Altman is innocent, but it’s just true on its face that far more concerning than some hidden Q* breakthrough is the idea that one of the most powerful and important companies in the world is being led by a sociopath.
3/12. The Washington Post ran an opinion piece arguing that rising political polarization has contributed to the decline in marriage and coupling up in the 18-30 bracket.
It is pretty strange to think that, at least at these early ages, the genders in the United States are effectively already polarized between the political parties, with a mere ~15% of women in the 18-30 cohort identifying as conservative, compared to almost ~50% who self-identify as liberal. There are obviously reasons for this, I’m not saying it’s inexplicable or anything, but given the gender segregation by party, one wonders about the reverse of the Post’s hypothesis: to what degree are poor gender relations driving increasing political polarization, rather than the other way around?
4/12. Speaking of terrible gender relations, you might have heard that around 10% of Central Asian men are direct descendants of Genghis Khan (and if you hadn’t, well, surprise! It’s a lot). What I didn’t know is that there are a few other examples of this as well; in fact, there’s even a term for it: Y-chromosome “star clusters.” It’s called a star cluster because what is usually a distributed genetic network suddenly becomes highly centralized around a single hub, as the same male has hundreds of offspring, who in turn have thousands upon thousands of descendants. According to Razib Khan’s recent overview and explainer:
A 2015 paper reported the emergence of several star clusters around 4,000 years ago, likely associated with polygynous social structures. We may never know who these people were because these expansions occurred beyond the purview of literate civilization, but it seems clear that Genghis Khan had many prehistoric forerunners. He may in fact have been among the last of his kind, not a singular exception.
5/12. This is a reminder that Alexander Naughton, who does all the art for TIP, is open for commissions. E.g., he just did the artwork for the new album, Love Monster, by Job Creators (full disclosure: it’s my friends’ band and I go see them live pretty regularly).
You can contact Alex about commissions here. And you can follow Job Creators on Instagram here if you like their kind of music, which is best described as (from a review) “new wave texture and groove with math rock, indie, and psychedelic prog.”
6/12. A recent educational study published in the Proceedings of the National Academy of Sciences came to the astonishing conclusion (they literally use the term “astonishing” in their paper title) that students don’t really learn at different rates. They just have vastly different starting knowledge, which creates the illusion of different rates of learning. As KQED described it:
as the scientists began their study, they stumbled upon a fundamental problem: they could not find faster learners. After analyzing the learning rates of 7,000 children and adults using instructional software or playing educational games, the researchers could find no evidence that some students were progressing faster than others. All needed practice to learn something new, and they learned about the same amount from each practice attempt. On average, it was taking both high and low achievers about seven to eight practice exercises to learn a new concept, a rather tiny increment of learning that the researchers call a “knowledge component.”
. . . as the scientists confirmed their numerical results across 27 datasets, they began to understand that we commonly misinterpret prior knowledge for learning. Some kids already know a lot about a subject before a teacher begins a lesson. They may have already had exposure to fractions by making pancakes at home using measuring cups. The fact that they mastered a fractions unit faster than their peers doesn’t mean they learned faster; they had a head start.
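Purely as a toy illustration (a minimal sketch with made-up numbers, not the researchers’ actual model), here’s how identical learning rates plus different prior knowledge can masquerade as “fast” and “slow” learners:

```python
# Toy model: every student gains the same amount of mastery per practice
# attempt; the only thing that differs is how much they knew at the start.
# All numbers are invented for illustration.

GAIN_PER_PRACTICE = 5   # points of mastery gained per attempt, same for everyone
MASTERY = 80            # threshold for "knowing" the concept

def practices_to_mastery(prior: int) -> int:
    """Count how many practice attempts it takes to reach mastery."""
    knowledge, attempts = prior, 0
    while knowledge < MASTERY:
        knowledge += GAIN_PER_PRACTICE
        attempts += 1
    return attempts

# A student with a head start vs. one without: identical learning rate,
# very different time-to-mastery.
print(practices_to_mastery(prior=60))  # prints 4
print(practices_to_mastery(prior=20))  # prints 12
```

The second student needs three times as many attempts yet gains exactly as much from each one; measure only time-to-mastery and you’d wrongly call the first student the faster learner.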
A radical claim. Listen, if someone maintains there is no difference (zip, zero, zilch) between the learning rates of any two human beings whatsoever, I think that’s far too strong a claim and just not true. But in practice, negative results like this are good to keep in mind to build a clear picture of how human intelligence actually works, and how complex and subtle it can be.
While it’s been a longstanding assumption in some areas of psychology that there is a crucial difference between “fluid” and “crystallized” intelligence, think about how reductively unlikely that is. Is there really a fundamental biological/structural/neural difference between how smart you are and what knowledge you have? Is that a clearly definable difference that actually exists, such that the two separate into natural kinds? Or do they blur and overlap so much that, while you can pretend to separate them, you’re just partially teasing apart a thing that has no clear borders?
7/12. Bitcoin, without much press at all, has quietly sneaked past $40,000.