AI Risk Communications
Two new messages about risks associated with AI are good examples for students to analyze.
The Center for AI Safety published a short, joint statement about AI risks. The introduction, which explains the statement, is longer than the 22-word message itself. Unlike a longer statement published two months ago encouraging a pause, this one is bold and focused:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The authors use analogies as an emotional appeal to persuade their audiences. They also rely on credibility, with more than 350 distinguished signatories, including current AI leaders and two Turing Award AI pioneers.
The second message is a blog post written by OpenAI founders to provide guidance for regulators and others wanting to mitigate risk. Titled “Governance of Superintelligence,” the post distinguishes between current AI technology and the next generation. The authors’ strategy is to create a sense of urgency about an “existential” threat while preventing overregulation of current technology (like OpenAI’s, of course). In this statement, they use the analogies of nuclear energy and synthetic biology. The latter might be a better parallel than the pandemic analogy, although a pandemic is more current and may be more universally understood.
Students can edit the governance post for clarity and conciseness. They’ll find overuse of “there is/are” and an abundance of “it,” as in this passage:
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.