I'm halfway through this article and very much agree with the points made so far.
https://prospect.org/power/2025-03-25-bubble-trouble-ai-threat/
But I want to add an aside on the extent to which people uncritically use the term "foundation models" and discuss the "reasoning" of these models, when it is very likely that the models literally memorized all these benchmarks. It truly is like the story of the emperor's new clothes. Everyone seems to be in on it, and you're the crazy one saying, but HE HAS NO CLOTHES.
There is no difference between the likes of Stanford and any of these companies; they're one and the same. So schools like it make money from the hype and will perpetuate it.
The McKinseys and other huge consulting firms are raking it in on the hype, much like all the people who made money during the gold rush, everyone except the ones actually digging for gold.
All the things people call "laws" aren't laws and never were.
- "Scaling laws"? Some people looked at some plots and came up with that.
"Emergence"? Take a look at this paper showing how that is nothing but hot air: .
https://proceedings.neurips.cc/paper_files/paper/2023/hash/adc98a266f45005c403b8311ca7e8bd7-Abstract-Conference.html
-"Reasoning"? Lets set aside how they don't even have a definition for this. But literally change some minor thing on the benchmarks like a number, and you see how these models completely fail. https://arxiv.org/pdf/2410.05229
-"Understanding"? Just watch this debate to see the rigor with which Emily discusses the topic vs those who make these wild claims: https://lnkd.in/e6bgM-43.
If you come up with a new benchmark, they'll just guzzle it up into the training data and then claim to do "reasoning" on it.
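And checking for that kind of leakage isn't mysterious. Here's a crude sketch of an n-gram overlap contamination check; the function names and the 8-gram/50% thresholds are my own illustrative assumptions, not any lab's actual method:

```python
def ngrams(text, n=8):
    """Word-level n-grams for a rough verbatim-overlap check."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(benchmark_item, training_chunk, n=8, threshold=0.5):
    """Flag a benchmark item if a large share of its n-grams appear
    verbatim in a chunk of training data."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return False
    overlap = len(item_grams & ngrams(training_chunk, n)) / len(item_grams)
    return overlap >= threshold
```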
It is so mind-boggling to me that people even have to spend time debunking these claims. Such a waste of resources that could go toward actual science and engineering work.
@timnitGebru First, a huge personal thanks for raising the red flags on LLMs and the creepy TESCREAL ideology they're based on, long before most people realised it was an issue.
I know there's been a big professional and personal cost to that. People are listening, and we really appreciate your work and advocacy.
Second, the LLM wave has to be one of the greatest misallocations of resources in human history.
We're in the middle of a climate crisis. There are massive housing shortfalls in many developed countries. Huge investments are needed in health, education, and public transport.
And there are far more critical things we could be researching that would unlock far greater benefit for the public.
Heck, given the growing issue of disinformation, there'd arguably be far more public benefit from using these resources to make news journalism from reputable outlets free.
I don't see how investing billions into chasing a few billionaires' TESCREAL fantasies is the best use of our resources at this time.
@ajsadauskas @timnitGebru AI is receiving a huge amount of investment for a simple reason: these systems enable the *control* of huge numbers of people cheaply and at scale. In other words, they are instruments of power, and in this soon-to-be post-democratic and post-capitalist world, power is the only game that matters.
Molding opinion via manipulation of social media. Fine-tuned propaganda at scale.
Detection and suppression of opposition movements. Detailed tracking of every aspect of your life. Monitoring and control of factory workers. Finding and punishing dissidents. Automated weapons of war.
Forget the blather about AGI, limitless abundance, and the Singularity. Pay attention to the man behind the curtain. Power is their only goal.
@zenkat @ajsadauskas @timnitGebru Like almost any technology, AI can be used for good or for ill.
But when it is used for bad purposes, the damage can be disastrous.