OpenAI and Johansson Comms
In what The New York Times refers to as a “lengthy statement,” actress Scarlett Johansson describes OpenAI’s use of a voice that sounds like hers. This situation offers much to explore with students, for example, integrity, brand reputation, voice recognition, deepfakes/synthetic media, and, of course, writing.
Apparently, OpenAI CEO Sam Altman asked Johansson whether the new ChatGPT could use her voice. She declined, but the company may have used it anyway. Altman seemed to confirm as much in a post referring to the movie Her, which starred Johansson as an affectionate virtual assistant. OpenAI agreed to pull her voice, and Altman tweeted, “also for clarity: the new voice mode hasn't shipped yet (though the text mode of GPT-4o has). what you can currently use in the app is the old version. the new one is very much worth the wait!”
A few days later, OpenAI published a statement about how voices are selected and explained that the likeness to Johansson’s was just that, a likeness, recorded by an unnamed actor. Even so, Altman’s post seemed to fuel the controversy.
I’m stuck on the NYT description of Johansson’s statement as “lengthy.” It’s 312 words. Business communication students can identify the communication objectives and decide whether they agree with this characterization. If it’s too long, what could be omitted? I’m not finding much fluff in her explanation of what happened and its significance.
Maybe the comparison is to Altman’s single-word Her, which might be enough to hit his own communication objectives. One writer’s view is that this situation portrays OpenAI as “a company with little regard for the value of creative work led by a scheming, untrustworthy operator.” The story may have raised the profile of ChatGPT but hasn’t helped OpenAI’s reputation.