I spent some time with ChatGPT last night, out of curiosity about the state of the art and how close we might be to runaway artificial general intelligence (AGI). I’m entirely underwhelmed: by its inability to model conversational state, its incuriosity toward its interlocutors, the obvious polite/inoffensive response templating, and just how far the AI industry appears to be from achieving anything remotely capable of passing for a human.
The rationalists are convinced we face a high risk of AGI going into runaway mode: relentlessly optimizing itself and establishing enough control over enough humans that, in the large, “we” (for some value of we) won’t actually be able to turn it off. Matthew Ward works an example through the lens of Facebook, saying something along the lines of “there’s a ghost AI embedded in Facebook’s myriad of recommendation systems that has financially motivated a bunch of people to prevent turndown.”