The article discusses the paradox of AI industry leaders warning about potential apocalyptic scenarios caused by AI while continuing to invest in and deploy AI technologies. It suggests that these warnings may distract from more immediate harms, such as the spread of misinformation, embedded biases, and the enabling of discrimination.
Who among the following testified before Congress about the potential for AI to spread misinformation and disrupt elections?
A) Elon Musk
B) Kevin Scott
C) Sam Altman
D) Demis Hassabis
Which AI tools were mentioned in the article as being trained on vast data to create compelling written work and images?
A) Siri and Alexa
B) OpenAI’s ChatGPT and Dall-E
C) IBM’s Watson and Deep Blue
D) Google’s DeepMind and AlphaGo
What argument did Emily Bender put forth about the potential diversionary tactics of AI companies?
A) Companies are hiding the fact that AI is not as smart as it seems.
B) Companies may be diverting attention from immediate concerns to hypothetical scenarios.
C) Companies are trying to influence regulators by asserting their unique understanding of AI risks.
D) All of the above.
Marketers should be aware of the inherent biases in AI systems and the potential for these tools to be misused in spreading misinformation. The public image of AI is highly sensitive and can be shaped by both the perceived benefits and the potential harms of the technology.
Link to the article: CNN Article