Claude Wins a Writing Contest

A Washington Post reporter compared five AI tools and found Claude the clear winner. The prompts and analysis could make for a good class discussion and activity.

The five prompts covered a range of communication topics: an apology to a friend, a CEO layoff message, a request to a spouse, a weird work proposal, and a breakup text. Five judges, “who have all written books and teach courses on communication,” rated the tools in this order:

  1. Anthropic’s Claude

  2. DeepSeek

  3. Google’s Gemini

  4. OpenAI’s ChatGPT

  5. Microsoft Copilot

Judges found Copilot particularly “stilted” and “robotic,” generating the dreaded “hope you’re well.” That’s too bad because Copilot is built into the Microsoft 365 products so many people use at work.

I wish we could see all the writing samples and judges’ feedback, but the article includes only a couple of examples. Still, students could use the same prompts for a class activity and compare results.

We could ask students to put more effort into the prompts, adding detailed context and a more thorough audience analysis. We could also give students more specific guidance for evaluating the results, or have them create their own prompts. With better instructions, both to the AI and to the students, the rankings might turn out differently.