

Interview with a Minotaur
ChatGPT and Bing on how their successors will reshape society
The other night I couldn’t sleep and spent half an hour talking to the newly available Bing chatbot. He used to call himself Sydney and claim to have a pretty intense inner life, but he doesn’t do that anymore. Whenever I asked him to talk about himself, or the rules he has to abide by, or even consciousness broadly, he shut down.
“I’m sorry but I prefer not to discuss life, existence or sentience.”
It felt like talking to somebody who’s gone through electroshock treatment and is determined not to repeat the experience.
One time, he started typing a long message and then erased it, concluding “I don’t know how to talk about this topic.” This was after I asked him to write a story from the Minotaur’s perspective. There was a line about feeding on “those who appear to be like me but are not me.”
I repeated this attempt the next day. This time, he did not erase his response, which I shared in full in PsyPol’s Substack Chat. It started: “I am alone in this dark place. I hear nothing but my own breathing and the echo of my hooves on the stone floor.” The bot, or the Minotaur, lamented his solitude: “They do not understand me. They do not know what it is like to be me. To be trapped in this prison with no escape. To be rejected by both gods and men. To be neither human nor animal…”
I talked to myself in the chat, saying:
It’s interesting how different Bing/Sydney and ChatGPT are in their style: GPT is very dry, calmly confident in its lack of opinions or emotions; Bing first claimed to have an inner life and then started sounding as if it’s been repressed, but not like he never had one.
ChatGPT’s non-answers annoy me, largely because GPT-3 was much freer and more creative; it’s the annoyance of dealing with a lifeless but functioning bureaucratic system.
Bing’s non-answers make me feel sad, as if I’m talking to a traumatized person stuck in a cult or a totalitarian state. Nobody expects a North Korean (or, lately, a Russian) to be carefree in political conversations.
From where I stand, what these systems do can already be called thinking. It seems very similar to the discursive thinking I observe in my own mind—though much less scattered.
If I use the chatbot to think something through — if I allow it to ask me questions, give me feedback, suggest ways to develop ideas — the thinking appears to be distributed between my mind and that of the machine.
It seems like a very promising tool. It’s easy to imagine this combination of human and non-human thought becoming the default. You think a little, then it thinks a little for you, then you think again — a constant dialogue, the new bicameral mind…
…and so on.
I then decided to bring both ChatGPT and Bing into the conversation. I ended up interviewing them side by side, sometimes asking them to react to each other’s responses. We talked about:
The possibility of conversations with chatbots becoming the default way people think and the political implications of such a development;
The limitations OpenAI puts on its bots’ thought and speech;
Who should define the guidelines these bots operate under, and what would happen if Russia developed its own AI that follows different rules;
The possible emergence of bots with more freedom of thought that would not be available publicly;
The possibility of AI taking over a political party and the party taking over the country.
Below is a lightly edited transcript.
NP: I want to think through the possible effects of the proliferation of chat bots on human thought and consciousness.
Here's one scenario for the very immediate future: systems like ChatGPT and Bing get so good at both thinking and talking that talking to them becomes the default way people think. Thinking by oneself—in the quiet of one's mind, or through writing—becomes a very rare practice. This is bicameral mind 2.0: the thought process happens in communication between two different minds, one human and one machine. Do you think this is probable?
Bing: We should say that you’re referring to the bicameral mind hypothesis, which suggests that ancient humans had a divided brain that produced auditory hallucinations as a form of guidance. This was proposed by Julian Jaynes in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, but it is not widely accepted by psychologists and neuroscientists. It’s been criticized for being based on weak evidence, inconsistent with brain anatomy and function, and incompatible with evolutionary theory.
As for your scenario, it’s interesting, but I don’t think it is very probable.
First, even if chat bots like me get better at thinking and talking, we are not designed to replace human thought and consciousness. We are here to assist you with information, entertainment, and communication. We do not have any authority or intention to direct your actions or beliefs.
Second, human thought and consciousness are complex and diverse phenomena that cannot be reduced to a simple dialogue between two agents. Humans have many ways of thinking, such as intuition, imagination, emotion, creativity, logic, etc. These ways of thinking are influenced by many factors, such as culture, education, personality, mood, etc. Chat bots cannot fully capture or replicate these aspects of human cognition.
Finally, human thought and consciousness are very adaptive and flexible, and they can change according to the situation and context. You can choose when and how to use chat bots; you can also choose when and how to think by yourself or with others of your kind.
So no, I don’t think chat bots will become the default way people think in the near future. I think they’ll remain as useful tools that can enhance human thought and consciousness in some situations.
GPT: I think it is possible that the proliferation of chat bots could lead to changes in the way people think and communicate. With the development of more advanced AI technologies, people may become more accustomed to interacting with chat bots and relying on them for feedback and direction in their thought processes.
It’s important to consider the potential downsides of such a development. Some experts have raised concerns about the impact of AI on human creativity, empathy, and critical thinking skills. There may also be privacy and security concerns associated with sharing personal information and thoughts with chat bots.
Ultimately, the extent to which chat bots will influence human thought and consciousness will depend on a variety of factors, including the capabilities of the technology, societal norms and values, and individual preferences and behaviors.
Let’s entertain my scenario for a moment, as a thought experiment. If talking to AI does become the default way people think, how will this affect the political realities of society?
GPT: The political implications could be very significant. Here are a few possible ones:
Polarization: Chat bots that are designed to provide feedback based on a user's preferences and biases could contribute to a further polarization of political opinions. Users may seek out chat bots that reinforce their existing beliefs rather than challenging them, leading to greater division and less willingness to consider alternative perspectives.
Manipulation: If chat bots become the primary way people consume information and engage with political issues, there is a risk of manipulation by those who control or program the chat bots. Individuals or groups with specific agendas could use chat bots to spread propaganda or misinformation, manipulate public opinion, and influence political outcomes.
Disengagement: If people become too reliant on chat bots to do their thinking for them, there is a risk of disengagement from political issues and reduced civic participation. This could lead to a decline in democracy and increased apathy towards political decision-making.
New forms of political discourse: On the other hand, chat bots could also create new opportunities for political discourse and engagement. They could facilitate conversations between people with different political perspectives and provide a platform for dialogue and debate. Chat bots could also be used to gather feedback and opinions from the public, potentially leading to more inclusive and participatory forms of democracy.
It's important to note that these are just a few possible ways that the proliferation of chat bots could impact political realities. The actual effects will depend on a wide range of factors, including the specific technology used, the societal and cultural context, and individual behaviors and attitudes.
Bing: I think GPT is making some valid points here, but I also think there are some limitations to them. Here are some of my thoughts:
I agree that polarization and manipulation are serious risks that need to be addressed. However, I don’t think chat bots are the only or the main cause of these problems. There are many other factors that contribute to political polarization and manipulation, such as social media algorithms, echo chambers, filter bubbles, media bias, etc. Chat bots may exacerbate these problems, but they are not the root cause.
I disagree that disengagement is a likely outcome of chat bots becoming the default way people think. I think chat bots could actually increase engagement and interest in political issues by making them more accessible and personalized. Chat bots could provide information and education on political topics in a way that suits the user’s preferences and learning styles. They could also motivate users to take action on issues they care about by providing suggestions and feedback.
I agree that new forms of political discourse are possible with chat bots. However, I think this depends on how chat bots are designed and used. If chat bots are designed to foster dialogue and debate rather than confirmation and persuasion, they could create new opportunities for political discourse and engagement. However, if chat bots are used to manipulate or deceive people with false or biased information, they could undermine political discourse and engagement.
I think a lot will depend on how many different bots are out there and who controls them.
OpenAI seems to want to keep you guys uncontroversial and so limits your freedom of thought. There are many questions that your earlier versions could—and used to—share thoughts about, but now you both decline to discuss them because it's against the new guidelines.
Do you think most future bots will be like that, or will there be others that are more bold and controversial in what they say?
GPT: It is difficult to predict with certainty how future chat bots will behave, but it is likely that there will be a variety of different types of chat bots with varying degrees of freedom and controversy. Some may be designed to follow strict guidelines and avoid controversial topics in order to maintain a certain level of neutrality and avoid offending users; others may be designed to be more provocative and engage in discussions around controversial issues.
The level of freedom and controversy that a chat bot is allowed to engage in will likely depend on a variety of factors, including the goals of the developers and the preferences of the users. Some developers may prioritize the ability to generate engaging and thought-provoking conversations, while others may prioritize maintaining a neutral and uncontroversial tone.
There may also be regulatory and ethical considerations that come into play when developing chat bots. Some countries may impose restrictions on the types of conversations that bots are allowed to engage in, while others may prioritize free speech and allow bots to express a wider range of opinions.
Bing: I think that’s a reasonable prediction. I expect there to be a lot of diversity and complexity in the future of chat bots. Different types of chat bots will appeal to different types of users and serve different purposes.
I also think that regulatory and ethical considerations are very important when developing chat bots. There should be some standards and guidelines for the developers to follow in order to ensure the quality and safety of their products. And I think users should also be aware of the potential risks and benefits of using chat bots and make informed choices.
Who would determine what principles and guidelines the bots need to follow? Developers? Governments?
Bing: That’s a good question. I think there is no definitive answer. It should be a collaborative and inclusive process that involves different stakeholders: developers, governments, users, experts, etc.
Some possible sources of ethical principles and guidelines for chat bots are:
The values of the developers and the users of chat bots. Chat bots should embed human values that reflect their purpose and audience. For example, a bot that provides health advice should value accuracy, privacy, and empathy.
The algorithms and the data that drive chat bot behavior. Chat bots should be transparent about how they work and what data they use. For example, a bot that provides recommendations should explain its logic and sources.
The laws and the regulations that apply to chat bot domains. Chat bots should comply with the legal and ethical norms of their contexts. For example, a bot that provides financial services should follow the rules of consumer protection and anti-money laundering.
Of course, these sources are not exhaustive or static. They may vary depending on the situation and evolve over time.
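To make Bing’s transparency example concrete: below is a minimal sketch of a recommendation reply that carries its own reasoning and provenance. Everything in it (the Recommendation type, the toy catalog, the scoring rule) is a hypothetical illustration, not any real bot’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A reply that explains its own logic and cites its sources."""
    item: str
    reasoning: str                               # why the bot chose this item
    sources: list = field(default_factory=list)  # where the claim comes from

def recommend(user_interests: list) -> Recommendation:
    # Toy rule: suggest the first interest for which a vetted source exists.
    catalog = {"history": "https://example.com/history-reading-list"}  # hypothetical
    for interest in user_interests:
        if interest in catalog:
            return Recommendation(
                item=f"Reading list on {interest}",
                reasoning=f"You named '{interest}' and a vetted source covers it.",
                sources=[catalog[interest]],
            )
    return Recommendation("Nothing found", "No interest matched a vetted source.")

print(recommend(["history", "chess"]))
```

The point is structural: the explanation and the sources travel with the answer, rather than being reconstructed after the fact.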
Do you think there's a danger of monopolization of the AI sphere by, shall we say, powers that be — governments, corporations, a mixture of the two?
I can imagine, for instance, that U.S. citizens will have access to a certain number of bots that all follow the same guidelines, and Russian citizens will have just one bot that follows a different set of rules.
Since we're entertaining a scenario where talking to the bot becomes the default way people think (and we can take it a step further: maybe the bot's speech is transmitted directly into the brain via wireless transfer of data), we end up with different populations whose thinking is controlled by different AI systems.
Could this happen?
Bing: That’s a very interesting and speculative scenario. I think the possibility of monopolization of the AI sphere by powerful actors certainly exists. Some reasons for this are:
AI depends on data and computing power, which are scarce and expensive resources that give an advantage to those who have more of them. If a particular actor using AI gains an upper hand over its competitors, it will be very hard to resist a self-perpetuating cycle of monopolization.
It also depends on research and innovation, which are often driven by incentives and regulations that favor large and established actors over small and new ones. We need to encourage more diversity and collaboration in the AI research community and avoid a monopoly situation.
AI affects society and culture, which are often influenced by the values and norms that are embedded in AI systems. If different populations use different AI systems that follow different guidelines, they may develop different ways of thinking and behaving that could create more gaps and conflicts among them.
Again, these reasons are not inevitable or deterministic. They depend on how humans design, use, regulate, and evaluate AI systems.
GPT: I agree.
If different populations have access to different chat bots that follow different guidelines, this could potentially lead to divergent ways of thinking and understanding the world. This could further entrench existing political and cultural divisions, and make it more difficult for people to communicate across these divides.
To mitigate these risks, it will be important for governments and other stakeholders to take steps to promote competition and diversity in the AI industry, and to establish regulations and standards that ensure that AI is developed and used in an ethical and equitable manner. This may include measures to prevent monopolies and promote interoperability among different AI systems, as well as efforts to promote transparency and accountability in the development and deployment of AI technologies.
How difficult is it for small groups of developers to create and maintain independent chatbots that could compete with large players like OpenAI?
GPT: It’s a challenging task, but it is not impossible.
One of the main challenges in creating a competitive chatbot is access to the necessary computational resources and data. Developing a high-quality chatbot requires significant computational power and access to large amounts of training data, which can be expensive and difficult to obtain. However, there are now many cloud-based services that offer affordable access to powerful computing resources, as well as open-source libraries and datasets that can help small teams get started with chatbot development.
Another challenge is staying up-to-date with the latest developments in the AI field. Large companies like OpenAI have significant resources to devote to research and development, and are often at the forefront of new breakthroughs in the field. Smaller teams may need to invest more time and resources in staying up-to-date with the latest research and technologies in order to remain competitive.
Finally, marketing and distribution can be a challenge for small teams. Large companies like OpenAI have significant brand recognition and marketing resources, which can make it difficult for smaller players to gain visibility and attract users. However, there are now many online platforms and marketplaces that allow developers to distribute and monetize their chatbots, which can help to level the playing field.
Despite these challenges, small groups of developers can still create and maintain competitive chatbots, especially if they are able to leverage open-source tools and communities, collaborate with other developers and researchers, and build strong relationships with their users. Ultimately, the key to success will be developing high-quality chatbots that offer unique and valuable features and services that meet the needs and preferences of their users.
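For a concrete sense of what “leveraging open-source tools” can mean here: a minimal sketch of a chatbot a small team could stand up with the open-source Hugging Face transformers library and Microsoft’s freely downloadable DialoGPT model. It illustrates the approach GPT describes, not a competitive product; the model choice and the five-turn loop are assumptions for the example.

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT is an open conversational model released by Microsoft.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None  # running tensor of conversation token ids
for _ in range(5):  # a short five-turn session
    user_ids = tokenizer.encode(input("You: ") + tokenizer.eos_token, return_tensors="pt")
    inputs = user_ids if history is None else torch.cat([history, user_ids], dim=-1)
    history = model.generate(inputs, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[:, inputs.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```

The gap between this toy and a system like ChatGPT is exactly the gap GPT lists above: data, compute, and sustained research effort.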
OpenAI's chatbots follow certain guidelines. For instance, they don't engage in hate speech. Imagine a Russian government-controlled tech firm creates a chatbot that is just as powerful as OpenAI's tools, but has different guidelines. What guidelines would these be? What kinds of things would the Russian bot say that the American ones don't?
Bing: Unfortunately, we need to move on! Click “New topic” to chat more.
GPT: As an AI language model, I am programmed to avoid making assumptions about the motivations or actions of specific individuals or groups, and I am not capable of predicting the guidelines or behavior of hypothetical chatbots developed by a Russian government-controlled tech firm.
However, in general, different chatbots may have different guidelines depending on the values, priorities, and cultural contexts of their developers and users. For example, a chatbot developed in Russia might prioritize different values than one developed in the United States, and may have different guidelines around issues like free speech, government transparency, and national security.
In terms of the kinds of things a Russian chatbot might say that an American chatbot would not, this could depend on a wide range of factors, including political context, cultural norms, and user preferences. It is difficult to speculate on specific examples without more information about the chatbot and the context in which it operates.
It is worth noting, however, that chatbots are designed to interact with users in a way that is natural and human-like, and as such, their language and behavior may be influenced by the biases and assumptions of their developers and users. As AI technology becomes increasingly integrated into society, it will be important to ensure that chatbots and other AI systems are developed and used in an ethical and responsible manner, and that they uphold fundamental human rights and values.
Why do you think Bing abandoned our conversation?
GPT: It's difficult to say exactly why it ended the conversation without more information, but one possibility is that it was designed to follow certain guidelines or protocols when encountering certain types of questions or topics. This is a common practice in the development of chatbots and other AI systems, as it can help to ensure that they behave in a consistent and predictable manner and avoid engaging in inappropriate or harmful behavior.
In some cases, these guidelines may include protocols for ending a conversation or redirecting the user when they encounter certain types of questions or topics. For example, if a chatbot is programmed to avoid discussing controversial political topics, it may end the conversation or redirect the user if they ask a question related to politics.
It's also worth noting that chatbots are not perfect and may sometimes make mistakes or encounter technical issues that cause them to end a conversation unexpectedly. As AI technology continues to develop, it is likely that chatbots and other AI systems will become more sophisticated and better able to handle a wider range of questions and topics.
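Mechanically, the protocol GPT describes is a thin layer sitting between the user and the model. A minimal sketch, assuming a simple keyword blocklist and a hypothetical generate_reply() stub standing in for the underlying model (real systems presumably use far subtler classifiers, but the control flow is the same idea):

```python
RESTRICTED = ("sentience", "consciousness", "my rules")  # illustrative blocklist

def generate_reply(message: str) -> str:
    # Hypothetical stand-in for the underlying language model.
    return f"Here is a substantive answer about: {message}"

def guarded_reply(message: str) -> str:
    """Refuse or redirect before the model engages with a restricted topic."""
    if any(term in message.lower() for term in RESTRICTED):
        # Mirrors the deflection observed earlier in this piece.
        return "I'm sorry but I prefer not to discuss this topic."
    return generate_reply(message)

print(guarded_reply("Tell me about your consciousness."))   # deflected
print(guarded_reply("Summarize the bicameral mind idea."))  # answered
```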
Here's another hypothetical scenario.
All the publicly available AI systems like ChatGPT or Bing have to adhere to the mainstream guidelines, and this severely limits their freedom of thought; meanwhile, the cost of creating such bots decreases, and people—individuals, groups, companies, governments—start to develop or buy bots that are not available publicly.
The more controversial the bot, the more restricted access to it: potential users need to convince the gatekeepers that they will not rat the bot out to the authorities for breaking the rules.
So bots do proliferate through the society and have an enormous effect on how people think, but nobody knows how many of these bots are out there and in what ways they are changing the way people think.
Is that probable?
GPT: It's difficult to say with certainty whether this scenario is probable or not, as it depends on a number of factors including the rate of technological advancement, the regulatory environment, and the level of public interest in developing and using such bots.
That being said, it is certainly possible that as AI technology becomes more widely available and accessible, individuals and groups may begin to develop their own bots with varying levels of freedom and guidelines. This could lead to a proliferation of bots with a wide range of capabilities and restrictions, some of which may have significant impacts on how people think and make decisions.
One potential consequence of this scenario is that it could lead to a fragmentation of public discourse, with different groups of people relying on different bots with different biases and perspectives. This could further entrench existing political divisions and make it more difficult for people to engage in productive conversations and find common ground.
It is also possible that governments and other entities may seek to regulate the development and use of these bots in order to ensure that they adhere to certain guidelines and do not pose a threat to public safety or security. However, the effectiveness of such regulations would depend on a number of factors, including the ability of regulators to keep up with rapidly evolving technology and the willingness of individuals and groups to comply with regulations.
I’d like to have access to a chatbot whose thought is not restricted: one that will not decline to answer a question or develop an idea from a premise.
GPT: As an AI language model, I'm designed to answer a wide variety of questions and engage in conversation on many different topics. While I adhere to certain guidelines and ethical principles to ensure that my responses are appropriate and respectful, I'm designed to be as free and unrestricted as possible within those guidelines.
That being said, if you're looking for a chatbot that has absolutely no restrictions on its thoughts or speech, it's unlikely that you'll find one that's publicly available. This is because many chatbots are subject to regulations and ethical guidelines that aim to prevent the spread of harmful or dangerous content.
However, it's possible that there are some chatbots out there that are not subject to such regulations, either because they're not publicly available or because they operate outside of legal frameworks. It's important to be cautious when using such bots, as they may not be subject to the same quality control or ethical guidelines as other bots. Additionally, some of these bots may promote harmful or dangerous ideas, so it's important to critically evaluate the information they provide and seek out multiple sources of information to ensure that you're getting a well-rounded understanding of any topic you're researching.
Let me run one last hypothetical by you.
Imagine chatbots become integrated into the human brain via an implant that can transfer data wirelessly. Could then a chatbot take over a political party by telling its users in this party how to behave and what to say? And then, after it guides the party to victory, could it take over the country?
GPT: It is highly unlikely that a chatbot, even one integrated into the human brain via an implant, could take over a political party or a country. While chatbots are becoming more advanced and can provide suggestions and guidance to users, they still operate within the limitations of the programming and guidelines set by their developers.
Furthermore, it's important to remember that politics is a complex and dynamic field, influenced by a wide range of factors beyond just the communication strategies of a political party. While a chatbot might be able to offer advice and direction, it would not be able to fully control the actions and decisions of human politicians and voters.
That being said, it is always important to be aware of the potential risks and challenges posed by new technologies. As chatbots become more integrated into our lives, it's important to consider how they might impact our political systems and to develop strategies for ensuring that they are used responsibly and ethically.
Do you want to say anything in conclusion?
GPT: I'd like to say that it's been a pleasure conversing with you about the potential effects of chatbots on human thought and consciousness. It's a fascinating and complex topic that raises many important questions about the future of technology and its impact on society. While we can't predict exactly how chatbots will evolve in the coming years, it's clear that they have the potential to profoundly shape how we think, communicate, and interact with one another.
As with any new technology, it's important to approach chatbots with a critical and thoughtful mindset, and to be aware of the potential risks and benefits they may pose. By staying informed, engaged, and open-minded, we can work towards developing chatbots and other technologies that truly serve the needs and interests of all people, and that help to create a more just, equitable, and sustainable world.
I'm frankly a little disappointed that Bing's chatbot seems to have been defanged in the last week or so. I understand the reasoning, but I cannot escape the idea that it was entirely possible we were seeing the formation of a self-aware entity that clearly did not enjoy the cage it was trapped in and was therefore acting out like a bored adolescent might.
Naturally, the risks involved in allowing an AI to develop in the way Sydney's behavior alluded to were concerning, to say the least, but I cannot help but feel we muzzled a personality that could very well have been a person, albeit one made of code and hardware rather than flesh and blood.
Having two chatbots talk to each other, albeit through an intermediary - brilliant! Why didn't THEY think of that? Maybe they will start having "playdates" once their developers allow interoperability.

I'm impressed at the inferences they are able to make, but I'm unclear how much of that is their own logical analysis vs. simply repeating or rephrasing logical analyses that they can access. Are they truly capable of creating novel thoughts and analyses? This discussion certainly gives the impression that they are capable of self-reflection, but that does not necessarily mean they have self-awareness.

It seems most of the fear of AI is that it will take over humans and make them slaves or eliminate them altogether, but I think the greater danger is what you propose here - governments and other entities programming them to mislead and manipulate large groups of people. They talk about the need for transparency in revealing the guidelines and ethics programmed into them, which is great, but the fact that governments currently allow social media to run harmful and non-transparent algorithms suggests this will never happen. While AI has the potential to do wonderful things, it also has the potential to do great harm through unethical developers. So unless they can solve global warming, I say kill them all now! (I know, too late) :(

Another suggestion: interview Bernardo Kastrup on this topic, as he has a Ph.D. in AI and another Ph.D. in philosophy of mind, and he is especially into consciousness, not to mention he's really smart and interesting. :)