Discussion about this post

M Flood

The state we are in right now is what Ethan Mollick has called the "jagged frontier": large language models are substantially better than the average human at many tasks and substantially worse at many others, and you cannot tell which is which without experimentation.

I'm glad you are actually using an AI system to attempt a use case, even if unsuccessfully. And I'm especially glad to see that you tried multiple prompts. Too much AI scorn and hype comes from single attempts, without either experimentation to improve performance or repeated trials to see whether the result can be achieved consistently. It's very much a technology in progress; I wouldn't want to make a substantial bet on what it can or cannot do 18 months from now.
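
As a rough sketch of what I mean by repeated trials, something like the following is enough; run_prompt and passes_check are hypothetical placeholders standing in for whatever model call and success criterion the real use case involves.

```python
# Hedged sketch: estimate how consistently a model handles a task by
# repeating the same prompt many times. run_prompt and passes_check are
# hypothetical placeholders, not a real model API.
import random

def run_prompt(prompt: str) -> str:
    # Placeholder: substitute an actual model call here.
    return random.choice(["correct answer", "wrong answer"])

def passes_check(output: str) -> bool:
    # Placeholder success criterion for the task at hand.
    return output == "correct answer"

def success_rate(prompt: str, trials: int = 20) -> float:
    # Fraction of trials in which the output passes the check.
    wins = sum(passes_check(run_prompt(prompt)) for _ in range(trials))
    return wins / trials

print(f"Success rate: {success_rate('Summarize this contract clause.'):.0%}")
```

A single success or failure tells you very little; a success rate over many trials is what separates a reliable capability from a lucky output.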

Doc Herb

The problem with Gen AI is that people keep getting lulled into thinking they are working with a thinking, knowledgeable system. They keep forgetting how Gen AI works: it's only a prediction machine, predicting which word, based on its algorithms, is most likely to come next. When it gets it right, or close to right, we are amazed; when it gets it wrong, we call it a "hallucination". In reality it's always hallucinating; we just like the results a lot of the time. We have to keep in mind that even though it uses words to reassure us, it does not truly understand. It's just predicting and repeating based on what has been previously written. So what good prompters do is figure out how to make it predict better, not make it understand.
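
To make the "prediction machine" point concrete, here is a minimal sketch of next-word prediction using the Hugging Face transformers library; the choice of GPT-2 and the top-5 cutoff are just illustrative assumptions.

```python
# Minimal sketch of next-token prediction with a small causal language model.
# The model choice (gpt2) and the top-5 cutoff are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Everything the model "knows" is expressed as a probability distribution
# over which token comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"p={prob.item():.3f}  {tokenizer.decode(token_id.item())!r}")
```

The model never checks whether a continuation is true; it only reports which tokens are statistically most likely given the text it has already seen.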

92 more comments...
