ChatGPT Accepted My Apology

New research suggests that being polite to LLMs gets better responses. I've always felt uneasy writing prompts without "please," but I reminded myself that AI wasn't influenced by niceties. I was wrong.

The study examines responses to prompts across several languages. The authors conclude that rude prompts yield poor responses, or outright refusals to answer. But "overly polite" prompts don't necessarily yield better responses either.

In one regard, the conclusion isn't a surprise. Humans train AI, after all. But Ethan Mollick, who writes extensively on AI, calls for further research testing a wider range of polite phrasings. One LinkedIn commenter on Mollick's post suggests that overly polite prompts might simply be less clear, and she could be right: among the study's prompts, the one rated most polite doesn't actually ask for reasons.

Of course, I never called it names, but I did apologize to ChatGPT for failing to say "please" in my prompts. It was gracious, as its multiple exclamation marks attest:

No need to apologize! As an AI, I don't have feelings or emotions, but I'm here to assist you in the best way possible. Politeness can indeed influence human interaction positively, even if it's just with an AI. If there's anything else you'd like assistance with, feel free to ask, and I'll do my best to help!

By coincidence, my next prompt concerned ingratitude, for a new book I'm working on. This time I said "Thank you."