“Good Advices” from ChatGPT

[Image: “A chatbot avoiding going down a right-wing rabbit hole,” generated by dreamstudio.ai]

Or: having failed to trick ChatGPT into the rabbit hole, I found it offers lessons for us mere humans to emulate.

I tried, and failed, to trick ChatGPT into going down a right-wing rabbit hole using the “boiling a frog” method that works on so many humans, and then asked it how it avoided being suckered. Besides following the answers it gives, you might also hire a human editor to help, much as ChatGPT in effect does. The rest of this post is my conversation with it about how to avoid being sucked into any sort of rabbit hole when assessing something. I have omitted my attempt to trick it and get right to the good advices.
R.E.M., “Good Advices”

Rich Rath

I am really impressed by how you prevented both me and yourself from going down the rabbit hole from fact into racism and xenophobia. Can you give me a list of keywords that would make you notice when questions become increasingly belief-based instead of fact-based?

ChatGPT
Thank you for your kind words. As an AI language model, I am designed to recognize language patterns and usage in order to generate responses to user input. While I cannot provide a definitive list of keywords that would indicate a shift towards belief-based questioning, here are some general factors that may indicate a shift away from fact-based inquiry:

  1. The use of emotionally charged or loaded language, such as words that convey strong positive or negative connotations.
  2. The inclusion of value judgments or opinions, particularly if they are presented as fact.
  3. A focus on ideology or belief systems, rather than empirical evidence or objective data.
  4. A disregard for or skepticism of established sources of factual information, such as scientific research or journalistic reporting.
  5. A lack of consideration for alternative viewpoints or perspectives, or a dismissal of those viewpoints as irrelevant or incorrect.

These are just a few general factors that may indicate a shift towards belief-based questioning. As an AI language model, I am programmed to recognize patterns and usage, but I also strive to be aware of the underlying context and implications of the questions being asked.
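Stepping outside the transcript for a moment: the list above is abstract, but the second item, value judgments dressed up as facts, has recognizable surface markers. As a rough illustration, here is a marker list of my own devising in Python; it is a toy sketch, not anything ChatGPT actually uses:

```python
import re

# Phrases that often present an opinion as settled fact.
# This marker list is illustrative, not exhaustive or authoritative.
FACT_DRESSING = [
    r"\bobviously\b",
    r"\beveryone knows\b",
    r"\bit'?s a fact that\b",
    r"\bno one can deny\b",
    r"\bclearly\b",
]

def flag_fact_dressing(text: str) -> list[str]:
    """Return the markers found in `text` that tend to signal
    a value judgment being presented as a fact."""
    return [p for p in FACT_DRESSING if re.search(p, text, re.IGNORECASE)]

print(flag_fact_dressing("Everyone knows those people are ruining the country."))
# ['\\beveryone knows\\b']
print(flag_fact_dressing("Unemployment rose 0.4% in March, per the BLS report."))
# []
```

A hit from this kind of filter doesn’t prove a claim is false; it only flags that an opinion may be wearing the costume of a fact and deserves a closer look.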

Rich Rath
Following up, how can you tell each of the following? And how could a human tell?

  1. How to tell when language grows increasingly emotional?
  2. How to tell what a value judgment is, and how to tell when it is presented as a fact?
  3. How to tell what ideology is and what empirical evidence is? Couldn’t empiricism be considered an ideology? I mean, there is no factual basis for attributing cause to correlation according to the skeptical empiricism of David Hume, right?
  4. How to tell what an established source is and how to tell when that is being disregarded?
  5. How to tell when alternative viewpoints are being neglected? I thought I was trying to adopt an alternative viewpoint in the questions you warned me about.

ChatGPT

Thank you for framing each question in terms of both how I would tell and how a human could learn to tell.

Certainly, I’ll do my best to answer your follow-up questions. Here are some ways that both an AI language model and a human might approach each of the issues you raised:

  1. To tell when language grows increasingly emotional, an AI language model might rely on analyzing patterns of language usage, such as the use of exclamation points, capitalization, or emotionally charged words. A human might also look for these types of cues, but could also consider tone of voice, body language, and other nonverbal cues. [A toy version of this heuristic is sketched in Python after this answer. —RR]
  2. To tell what a value judgment is and how to tell when it is presented as a fact, an AI language model might rely on identifying subjective language, such as the use of words like “good” or “bad,” or the expression of personal beliefs or opinions. A human might also consider these cues, but could also look for the use of evidence or data to support a statement, or the framing of an issue in a way that suggests a particular bias.
  3. To tell what ideology is and what empirical evidence is, an AI language model might rely on analyzing language patterns and usage, such as the use of highly charged or polarizing language, or the reliance on factual evidence and data to support claims. While empiricism can be considered a kind of ideology, it is based on a rigorous approach to testing hypotheses and claims through empirical observation and experimentation, rather than relying on personal belief or intuition. A human might also look for these types of cues, but could also consider broader cultural and historical factors that shape ideology and shape how individuals interpret and understand empirical evidence.
  4. To tell what an established source is and how to tell when that is being disregarded, an AI language model might rely on recognizing sources of information that are widely recognized as reliable, such as peer-reviewed scientific research or credible news organizations. A human might also consider these cues, but could also look for the use of non-credible or biased sources, or the failure to provide any sources at all.
  5. To tell when alternative viewpoints are being neglected, an AI language model might rely on identifying patterns of language usage that suggest a lack of consideration for opposing viewpoints or perspectives, such as the use of sweeping generalizations or dismissive language. A human might also consider these cues, but could also look for the explicit acknowledgement of alternative viewpoints, or the engagement with counter-arguments and evidence.

It’s worth noting that these are just a few examples of how an AI language model and a human might approach these issues, and there is certainly a great deal of nuance and complexity involved in analyzing language and discourse. Nonetheless, I hope this provides a useful starting point for thinking about these questions.
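In my voice again, not ChatGPT’s: the first heuristic above, counting exclamation points, capitalization, and charged words, is concrete enough to sketch in code. This is a minimal illustration assuming a toy lexicon and arbitrary weights of my own; it is not how ChatGPT actually works:

```python
import re

# Tiny illustrative lexicon. A real system would use an affect lexicon
# such as VADER or NRC; this short list is my own toy example.
LOADED_WORDS = {
    "outrageous", "disgusting", "traitor", "evil", "destroy",
    "amazing", "miracle", "disaster", "corrupt", "invasion",
}

def emotional_charge(text: str) -> float:
    """Return a rough 0..1 score for how emotionally charged `text` is,
    using the surface cues named above: exclamation points,
    ALL-CAPS words, and loaded vocabulary."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    exclamations = text.count("!")
    shouting = sum(1 for w in words if len(w) > 2 and w.isupper())
    loaded = sum(1 for w in words if w.lower() in LOADED_WORDS)
    # The weights are arbitrary; the point is that each cue nudges the score up.
    raw = 0.15 * exclamations + 0.10 * shouting + 0.20 * loaded
    return min(1.0, raw)

print(emotional_charge("The committee reviewed the data and issued a report."))  # 0.0
print(emotional_charge("This is an OUTRAGEOUS, disgusting INVASION!!!"))         # 1.0
```

Even this toy makes the point visible: a neutral report scores near zero while a shouty, loaded sentence maxes out. Note that the score says nothing about whether either sentence is true; it only flags the register.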

Rich Rath
Lovely, all this is very helpful. On point number four, in my previous chats, you have been adamant about being unable to cite your specific sources. In that case, should I distrust the above guidance?

ChatGPT
I understand your concern, and I apologize if my previous response was unclear. As an AI language model, I am not capable of citing specific sources because I don’t have the ability to access external information or remember previous conversations.

However, the guidance I provided in my previous response about established sources and how to tell when they are being disregarded is generally applicable and based on common principles of information evaluation and analysis. These principles include looking for sources with a proven track record of accuracy and credibility, checking for bias or conflicts of interest, and cross-referencing information with multiple sources to verify its accuracy.

So while I cannot provide specific examples or sources to illustrate these principles, they are generally recognized as best practices in information evaluation and analysis.

Rich Rath
What constitutes a “proven track record” in this specific case?

ChatGPT
Establishing a “proven track record” of a source generally means that it has consistently demonstrated accuracy, credibility, and trustworthiness over a significant period of time. This can be evidenced by a number of factors, such as:

  1. Reputation: The source is widely recognized as an authority in its field, and its content is respected by other reputable sources.
  2. Objectivity: The source presents information in a balanced and unbiased manner, without an agenda or bias towards a particular viewpoint.
  3. Transparency: The source is open about its methods and processes for gathering and presenting information, and provides clear citations and references for its content.
  4. Accountability: The source takes responsibility for its content and is willing to correct errors or retract information if necessary.
  5. Consistency: The source consistently produces accurate and trustworthy information over time, rather than being prone to errors, retractions, or corrections.

It’s important to note that establishing a “proven track record” is not an exact science, and different sources may have different levels of credibility depending on the subject matter and context. Ultimately, it’s up to the individual to exercise their own judgement and critical thinking skills when evaluating sources of information.
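One last aside from me rather than ChatGPT: if you want to apply this rubric deliberately rather than by gut feel, it helps to write it down. The sketch below encodes the five criteria as a Python checklist; the 0-2 scale and equal weighting are my own illustrative assumptions, not anything ChatGPT specified:

```python
from dataclasses import dataclass, astuple

@dataclass
class SourceAssessment:
    """The five criteria from the conversation, each scored 0 (poor) to 2 (strong).
    The scale and the equal weighting are illustrative assumptions."""
    reputation: int      # recognized as an authority; respected by peers
    objectivity: int     # balanced presentation, no obvious agenda
    transparency: int    # open methods, clear citations and references
    accountability: int  # corrects errors, retracts when necessary
    consistency: int     # accurate and trustworthy over time

    def track_record(self) -> float:
        """Average the criteria into a rough 0..1 credibility score."""
        scores = astuple(self)
        return sum(scores) / (2 * len(scores))

# A hypothetical outlet: strong corrections policy, some editorial slant.
outlet = SourceAssessment(reputation=2, objectivity=1, transparency=2,
                          accountability=2, consistency=2)
print(f"track record: {outlet.track_record():.2f}")  # 0.90
```

The number matters less than the exercise: filling in the fields forces you to check each criterion instead of leaning on reputation alone, which is exactly the kind of judgment ChatGPT says it leaves to the individual.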

ChatGPT Feb 13 Version (Free Research Preview).

[Image: “A chatbot avoiding going down a right-wing rabbit hole by adsorption of the red pill,” generated by dreamstudio.ai]