I don’t really think that this is a very productive approach to the issue of AI ‘consciousness.’ Anthropic has demonstrated that several LLMs have a rudimentary ability to reflect on their internal state during inference. They are an undeniably interesting, literate technology that we don’t fully understand, being developed at an increasingly rapid rate.
It’s not that I think LLMs are conscious, but I do see why a person might come to that conclusion. Calling such people crazy, dumb, or unimaginative is kind of insulting. They are interacting with an alien sort of intelligence engineered to keep their attention.
It’s especially annoying when a lot of critics in the AI space are so smug about it. Many of those critics dislike LLMs for legitimate reasons regarding their effects on employment, the environment, AI slop, art, etc. But these valid issues are biases unrelated to AI ‘consciousness.’ If a layperson comes in with an unbiased (not good, just unbiased) perspective, they just see a very difficult-to-understand, literate computer program that seems to have destroyed the Turing test. And then they get insulted for making a naive but somewhat reasonable assumption that it is conscious.