Hot take on OpenAI's new GPT-4o
GPT-4o hot take:
* The speech synthesis is terrific; it reminds me of Google Duplex (which never took off).
but
* If OpenAI had GPT-5, they would have shown it.
* They don't have GPT-5 after 14 months of trying.
* The most important figure in the blog post is attached below. And the most important thing about that figure is that 4o is not a lot different from Turbo, which is not hugely different from 4.
* Lots of quirky errors are already being reported, same as ever, like this reasoning error from Jane Rosenzweig:
and this "hallucination" from Benjamin Riley:
* OpenAI has presumably pivoted to new features precisely because they don't know how to produce the kind of capability advance that the "exponential improvement" narrative would have predicted.
* Most importantly, each day in which there is no GPT-5-level model, from OpenAI or any of their well-financed, well-motivated competitors, is evidence that we may have reached a phase of diminishing returns.
Gary Marcus greets you all from the Starmus Festival in Slovakia.