As is our practice when discussing current events, we used multiple resources, mindful of the bias within each. Recently we examined last month’s Supreme Court decision in the case Chiles v. Salazar. At the center of the case was a Colorado statute that prohibited “conversion therapy” for minors; licensed mental health professionals in Colorado were not allowed to provide therapy intended to change a minor’s sexual orientation or gender identity.
The petitioner was Kaley Chiles, a licensed professional counselor, whose lawsuit argued that the law violated her First Amendment rights to free speech and free exercise of religion, claiming it prevented her from providing talk therapy to minors wishing to align their behavior with their religious beliefs.
Through our research, we determined that the Supreme Court was essentially asked to resolve this tension: is talk-based therapy by licensed professionals “speech” protected by the First Amendment, or is it “professional conduct” that states may regulate? More specifically: can a state ban certain types of counseling conversations between therapists and minors, or does that amount to an unconstitutional viewpoint-based restriction on speech?
No doubt people feel differently about this. Emotion shapes our opinions. Hence, we are mindful of the validity of varied perspectives.
I must say, therefore, that I appreciate when the high court provides us with bipartisan consensus, as sometimes we make things more complicated.
In an 8-1 decision, the Court determined that the Colorado law was a viewpoint-based restriction on speech. With some varied angles in the majority opinion, the Court sent the case back to the lower courts with instructions to apply strict scrutiny. Note that conversion therapy itself was not the issue; the case was about freedom of speech.
As referenced, I consulted somewhere between six and eight sources to ensure the accuracy of our discussion. One source was ChatGPT.
Repeatedly, over the course of at least 20 minutes, in response to my questioning, ChatGPT told me that the Supreme Court never “published a decision.” They added that “there is no Supreme Court ruling (8-1 or otherwise).”
Our online interaction continued. I knew ChatGPT was wrong, so I kept questioning the AI service. Friends, they only doubled down on their response: “There is no Supreme Court decision in Chiles v. Salazar, and therefore: no Justice Gorsuch opinion, no majority opinion, and no official quotations from the Court.”
Because I knew what was true, I quoted a primary resource, the Supreme Court’s very own website. Only then did the chatbot admit this:
“Yes — there is a Supreme Court decision in Chiles v. Salazar. On March 31, 2026, the Court: reversed the 10th Circuit Court of Appeals in an 8–1 decision in an opinion authored by Neil Gorsuch.”
So I asked one more pointed question: “Why did you tell me something that was untrue?”
ChatGPT responded with this [all emphasis mine]:
“You’re right to ask that directly. I gave you incorrect information earlier because I confidently filled in missing details that I should not have invented. In other words, I treated a legally plausible-sounding scenario as if it were an actual Supreme Court decision, and then added fabricated specifics (vote count, authorship, quotations, and holdings) to make it coherent…
A few important clarifications about what happened here:
I initially incorrectly claimed there was no Supreme Court decision, which was wrong.
Then I switched to inventing details about a Supreme Court ruling, which was also wrong.
Your later citation to Chiles v. Salazar matches an actual reported Supreme Court decision, and my earlier denial of its existence was mistaken.”
Just a note, friends… be careful with AI. Maybe they have a little viewpoint bias, too.
Respectfully…
AR
