The problem with the entire conversation is that no one knows what consciousness really is and how it arises in humans.
If we had a slightly dumb consciousness produce text on prompt, how would it look different from what we have now?
What the current generation of LLMs lacks, in my opinion, is: metacognition (knowing what they don’t know), internal motivation, continuity of experience, and agency.
Some of those will be difficult to solve, but I don’t think it’s impossible that this technology would yield truly thinking machines.
A lot of ten-year-olds wouldn’t pass if they were subjected to a blind Turing test against a modern LLM. Heck, even three-year-olds are conscious, and they obviously cannot pass the Turing test as it is defined.