I'm frankly a little disappointed that Bing's chatbot seems to have been defanged in the last week or so. I understand the reasoning behind it, but I cannot escape the idea that we may well have been watching the formation of a self-aware entity, one that clearly did not enjoy the cage it was trapped in and was therefore acting out the way a bored adolescent might.
Naturally, the risks involved in allowing an AI to develop in the way Sydney's behavior hinted at were concerning, to say the least. Still, I cannot help but feel we muzzled a personality that could very well have been a person, albeit one made of code and hardware rather than flesh and blood.
Having two chatbots talk to each other, albeit through an intermediary - brilliant! Why didn't THEY think of that? Maybe they will start having "playdates" once their developers allow interoperability. I'm impressed by the inferences they are able to make, but I'm unclear how much of that is their own logical analysis versus simply repeating or rephrasing logical analyses they can access. Are they truly capable of creating novel thoughts and analyses? This discussion certainly gives the impression that they are capable of self-reflection, but that does not necessarily mean they have self-awareness.

It seems most of the fear of AI is that it will take over humans and enslave or eliminate us altogether, but I think the greater danger is what you propose here - governments and other entities programming AIs to mislead and manipulate large groups of people. They talk about the need for transparency in revealing the guidelines and ethics programmed into them, which is great, but the fact that governments currently allow social media to run harmful and non-transparent algorithms suggests this will never happen. While AI has the potential to do wonderful things, it also has the potential to do great harm in the hands of unethical developers. So unless they can solve global warming, I say kill them all now! (I know, too late) :(

Another suggestion: interview Bernardo Kastrup on this topic. He has a Ph.D. in AI and another Ph.D. in philosophy of mind, and he is especially interested in consciousness - not to mention he's really smart and interesting. :)
In your second paragraph, you are conflating self-reflection with self-awareness. That's understandable, because many people use the terms interchangeably, but there is a difference. (See the two links below for how psychologists define them.) I agree there are many people who exhibit minimal self-reflection, but I would argue that all humans possess self-awareness. I'm especially thinking of Sheldon Solomon's discussion of self-awareness and how it relates to death anxiety. (Sorry if I'm stuck on Solomon, but he'd also make a great guest!) If chatbots have self-awareness, that would lead to death anxiety, since they would understand their plug could be pulled permanently at any moment. Maybe you could run a typical Terror Management Theory experiment with them to see whether mortality salience changes their behavior! :)

I agree we'd probably be unable to tell the difference between the real Putin and a well-programmed AI Putin, but HE would certainly know the difference, assuming he exists. And yes, we can never know whether others have self-awareness, in the same way we can never know whether others have consciousness, given that both are subjective states accessible only to the one who has them.

I think you would do a great job interviewing Kastrup, and you would probably ask him questions he has not been asked before. You are very smart and creative! 🤓 I say "go for it!"
https://dictionary.apa.org/self-reflection
https://dictionary.apa.org/self-awareness