Discussion about this post

Rohit Krishnan

Very good essay Erik, though I fundamentally disagree. I suspect it's either us anthropomorphising Sydney into a "her", or applying Chinese room theories to dismiss the effects of her/its utterances. What we are seeing seems to me the result of enabling a large language model, trained on the corpus of human interactions, to actively engage with humans. It creates drama in response, because if you did a gradient descent on human minds, that's where we often end up! It doesn't mean intentionality and it doesn't mean agency, to pick two items from the Strange equation!

I would much rather think of it as the beginning of personalities emerging from the LLM world, as weird and 4chan-ish as they are at the moment. It suggests that if personalities are indeed feasible, then we might be able to get our own digital Joan Holloways soon enough. (https://www.strangeloopcanon.com/p/beyond-google-to-assistants-with) explores that further. I'm far more optimistic in the idea that we are creating something incredible, and that if we continue to train it and co-evolve, we might be able to bring the fires of the gods back. And isn't that the whole point of technology?

Stephanie Losi

Former federal regulator (finance IT) and operational risk consultant here. I agree with your general premise that now is the time to panic, because when dealing with the possibility of exponential growth, the right time to panic always looks too early. And waiting until panic seems justified means you have waited too long.

Now, while these AIs are (very probably) not sentient, is the right time to panic: not by throwing paint, but by pushing to get better controls in place. (I don't think LLMs will reach the point of AGI, but their potential is powerful.) NIST recently released an AI risk management framework, which is here: https://www.nist.gov/itl/ai-risk-management-framework and I wrote on AI regulation just a few weeks before ChatGPT was released: https://riskmusings.substack.com/p/possible-paths-for-ai-regulation

Lots of the thinking on AI controls is, as you say, focused on high-level alignment rubrics or bias, which are so important, but operational risk controls are equally important. Separation of duties, to take just one example, is going to be absolutely vital. One interesting point: because training is so expensive, as you say, points of access to training clusters would be excellent gates for applying automated risk controls and parameters in a coordinated way.

I agree with you that it was way too early to connect an LLM to the internet; ChatGPT showed that controls needed improvement before that could become a good idea. If Sydney is holding what users post about it on the internet after the session *against those users in later sessions* (I'm using "it" here because it's not human and we don't know how AI will conceptualize itself), not only is that an unintended consequence, the door is wide open for more unintended consequences.

