The people working on LLMs also call them AI; it’s just that LLMs are a small subset of the AI research area. That is, every LLM is AI, but not every AI is an LLM.
Just look at the conference names the research is published in.
Maybe, but that still doesn’t mean the label AI was ever warranted, or that the ones who chose it didn’t have a product to sell. The point still stands: these systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
These systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
Well now you need to define “intelligence” and that’s wandering into some thick philosophical weeds. The fact is that the term “artificial intelligence” is as old as computing itself. Go read up on Alan Turing’s work.
Does “AI” have agency?
It’s still an unsettled question whether we even do.
That’s just kicking the can down the road, because now you have to define agency. Do you have agency? If you didn’t, would you even know? Can you prove it either way? In any case, this is no longer a scientific discussion but a philosophical one, because whether an entity has “intelligence” or “agency” is not a testable question.
We have functional agency regardless of your stance on determinism, in the same way that computers can obtain functional randomness even when they are unable to generate a truly random number. Artificial intelligence requires agency and spontaneity, and these are the lowest bars it must pass. Current systems do not pass them, and the current path of their development cannot pass them, no matter how up to date their training set or how bespoke their weights are.
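The “functional randomness” analogy above can be made concrete. A minimal sketch, using a classic linear congruential generator (the constants are the well-known Numerical Recipes values): the process is fully deterministic and reproducible from its seed, yet its output behaves as random for practical purposes.

```python
# A deterministic process that yields "functional randomness": a linear
# congruential generator (LCG). Constants are the classic Numerical
# Recipes values; the sequence is fully reproducible from the seed.
def lcg(seed: int):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scale into [0, 1)

gen = lcg(42)
samples = [next(gen) for _ in range(10_000)]

# Same seed, same sequence: nothing spontaneous is happening here.
replay = lcg(42)
assert samples == [next(replay) for _ in range(10_000)]

# Yet the output is spread roughly uniformly over [0, 1).
mean = sum(samples) / len(samples)
```

The determinism check and the near-uniform spread hold simultaneously, which is exactly the sense in which a deterministic machine can be “random enough” for use.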
these large models do not have “true” concepts of what they provide, in the same way a book does not have a concept of the material it contains, no matter how fancy the index is
Is this scientifically provable? I don’t see how this isn’t a subjective statement.
Artificial intelligence requires agency and spontaneity
Says who? Hollywood? For nearly seventy years the term has been used by computer scientists to describe computers using “fuzzy logic” and “learning programs” to solve problems that are too complicated for traditional data structures and algorithms to reasonably tackle. It’s a very general and fluid field of computer science, as old as computer science itself. See the Wikipedia page.
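For readers unfamiliar with the “fuzzy logic” mentioned above, a minimal sketch of the idea: instead of a hard true/false, membership in a category is a degree between 0 and 1. The function name and temperature thresholds here are purely illustrative.

```python
# Fuzzy membership: how "warm" a temperature is, as a degree in [0, 1],
# rather than a hard boolean. Thresholds are illustrative assumptions.
def warmth(temp_c: float) -> float:
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15  # linear ramp between the two thresholds

print(warmth(10))    # 0.0 -- definitely not warm
print(warmth(17.5))  # 0.5 -- somewhat warm
print(warmth(30))    # 1.0 -- definitely warm
```

Rule systems built on such graded memberships can act sensibly on inputs that classic boolean logic would force into brittle all-or-nothing categories, which is one of the older senses in which the field used the word “intelligence.”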
And finally, there is no special sauce to animal intelligence. There’s no such thing as a soul. You yourself are a Rube Goldberg machine of chemistry and electricity, your only “concepts” obtained through your dozens of senses constantly collecting data 24/7 since embryo. Not that the intelligence of today’s LLMs is comparable to ours, but there’s no magic to us; we’re Rube Goldberg machines too.