The problem with the entire conversation is that no one knows what consciousness really is and how it arises in humans.
If we had a slightly dumb consciousness producing text on prompt, how would its output look different from what we have now?
What the current generation of LLMs lacks, in my opinion, is: metacognition (knowing what they don't know), internal motivation, continuity of experience, and agency.
Some of those will be difficult to solve, but I don't think it's impossible that this technology could yield truly thinking machines.