Group of current and former OpenAI employees warns AI race could lead to ‘human extinction’

A group of former and current OpenAI employees released a letter online warning about the serious risks artificial intelligence (AI) technology could pose to humanity.

The letter, which was posted on righttowarn.ai, was signed by five former OpenAI employees, a current and a former employee from Google DeepMind, four unnamed current employees at OpenAI and two unnamed former OpenAI employees.

The group said the risks AI poses range from “the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

“AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts,” they wrote. “We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public.”

Several former and current OpenAI employees wrote a letter expressing concerns that the race for artificial intelligence could lead to human extinction if not regulated. (Dilara Irem Sancar / Anadolu / File / Getty Images)

The group says AI companies have strong financial incentives to avoid effective oversight, even as they hold substantial information about the capabilities and limitations of their systems.

For instance, the companies hold nonpublic information about the adequacy of their protective measures and the levels of risk posed by different types of harm that could result from AI advancements.

Daniel Kokotajlo, a member of the group and a former researcher in OpenAI’s governance division, told The New York Times that OpenAI is excited about building artificial general intelligence, or AGI, but that the company is “recklessly racing to be there first.”

A former researcher in OpenAI’s governance division said the company was “recklessly racing” to be the first to introduce artificial general intelligence. (iStock)

Kokotajlo said he previously predicted that AGI could arrive by 2050, though after seeing how quickly the technology was advancing, he told the Times there is a 50% chance it could arrive by 2027.

He also said he believes the probability that advanced AI will destroy or cause catastrophic harm to humanity is 70%.

The group said it does not believe the companies can be relied upon to share this information voluntarily.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” they wrote.

The group said confidentiality agreements prevent its members from voicing their concerns, and that ordinary whistleblower protections are insufficient because they focus on illegal activity. Since AI remains largely unregulated, the risks they want to warn about may not involve any illegal activity at all.

Sam Altman, CEO of OpenAI, right, is working with legislators around the world to come up with regulations on artificial intelligence. (Win McNamee / File / Getty Images)

OpenAI told FOX Business it agrees with the letter’s call for government regulation of the AI industry, saying it was the first in the industry to call for such regulation.

The company also said it regularly engages with policymakers around the world and is encouraged by the progress being made.

An OpenAI spokesperson also said the company has a track record of not releasing technology until the necessary safeguards are in place.

The spokesperson added that because OpenAI’s products are used by 92% of Fortune 500 companies, safe and reliable systems are critical, and that those companies would not subscribe if the products were not safe.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the OpenAI spokesperson said. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

“This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company,” the spokesperson added.
