Here’s What Worries CEOs the Most About Generative AI, According to PwC

By Michelle Cheng, Quartz (TNS)

A recent PwC global survey found that when it comes to generative AI risks, 64% of CEOs said they are most concerned about cybersecurity.

That concern comes as cyberattacks continue to rise. Damage from cyberattacks is expected to reach about $10.5 trillion annually by 2025, a 300% increase from 2015, according to a McKinsey report.

Over half of the CEOs surveyed also said generative AI is likely to increase the spread of misinformation in their companies, the report found.

The risks that generative AI poses to businesses come even as many of these same companies have been quick to launch new generative AI products. The findings “underscore the societal obligations that CEOs have for ensuring their organizations use AI responsibly,” the PwC report says.

PwC polled almost 5,000 CEOs globally from October through November 2023.

OpenAI wants to figure out how to combat the negatives of generative AI

With OpenAI helping spur demand for generative AI technology, the company this week announced several projects to combat the potentially harmful effects of AI.

At the World Economic Forum in Davos, the company’s vice president of global affairs told Bloomberg that OpenAI is working with the US Defense Department on open-source cybersecurity software.

Just a day before, OpenAI outlined its plans for handling elections, as roughly a billion voters around the world head to the polls this year. For instance, the company’s image generator DALL-E has guardrails to decline requests for images of real people, including political candidates. OpenAI also doesn’t allow applications that deter people from voting.

Early this year, the company will roll out a couple of features that will provide more transparency around AI-generated content. For instance, users will be able to detect which tools were used to produce an image. OpenAI’s ChatGPT will also soon be equipped with real-time news, including attribution and links, the company says. Transparency around the origin of information could help voters better assess what they see and decide for themselves what to trust.

AI is already being used in political campaigns

AI-generated songs featuring Indian Prime Minister Narendra Modi have gained traction ahead of the country’s upcoming elections, the online publication Rest of World reported.

The changes come amid concerns that the rise of so-called deepfakes could mislead voters during elections. Companies like Google, Meta, and TikTok now require labeling of election-related advertisements that use AI.

Several U.S. states—including California, Michigan, Minnesota, Texas, and Washington—have passed legislation banning or requiring disclosure of political deepfakes.

______

©2024 Quartz Media Inc. All rights reserved. Distributed by Tribune Content Agency, LLC.