"I don't see we can ever get to a point where these models are sort of conscious or know what they're thinking about or are able to take on these types of challenging tasks where you have a CEO, neural network or something."
(from: Interviews with AI Researchers)
There is ongoing debate about whether artificial intelligence (AI) can ever be conscious in the sense of having subjective experience. However, it is far from clear that this debate bears on artificial general intelligence (AGI), since it remains unresolved whether consciousness is a necessary condition for AGI.
Regardless of whether AI systems possess consciousness, it is important to consider the risks that may arise from their advanced capabilities. These risks may be serious enough to warrant attention in their own right, up to and including existential risk for humanity.
In other words: our concern about the potential risks of advanced AI systems does not depend on the assumption that they are conscious. Rather, we are focused on the potential consequences of their behavior and capabilities.