USA | Jun 5, 2024

Current and former employees of OpenAI, Google DeepMind warn about major AI risks

Shemar-Leslie Louisy / Our Today

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Current and former employees of artificial intelligence (AI) companies OpenAI, backed by Microsoft, and Alphabet’s Google DeepMind on Tuesday (June 4) warned of serious risks posed by the technology.

In an open letter signed by 13 current and former employees of OpenAI and Google DeepMind, the industry insiders argue that the financial motives of AI companies hinder effective oversight.

“We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter added.

The group warns of major risks from unregulated AI and calls for greater transparency and accountability within the industry to mitigate potential dangers.

FILE PHOTO: Visitors stand near a sign of artificial intelligence at an AI robot booth at Security China, an exhibition on public safety and security, in Beijing, China, June 7, 2023. REUTERS/Florence Lo/File Photo

“These risks include deepening existing social inequalities, spreading misinformation, and the possibility of losing control over autonomous AI systems, which could have catastrophic consequences, including human extinction,” the letter reads.

The letter points out that AI companies cannot be relied upon to voluntarily share information about the capabilities and limitations of their systems.

“Without effective government oversight, current and former employees are among the few capable of holding these companies accountable,” it continues.

To address these challenges, the coalition calls on AI companies to commit to the following principles:

No retaliation for criticism

AI companies should not enter into or enforce agreements that prohibit criticism or “disparagement” over risk-related concerns, nor should they retaliate against such criticism by withholding any vested economic benefit.

Anonymous reporting mechanism

Companies should facilitate a verifiably anonymous process for employees to report risk-related concerns to the company’s board, regulators, and an independent organization with relevant expertise.

Culture of open criticism

AI companies should promote an environment where employees can freely raise concerns about technologies without fear of retaliation, provided trade secrets and intellectual property are protected.

Protection for whistleblowers

Companies should not retaliate against employees who share risk-related confidential information publicly, especially where other reporting processes have failed. Until an adequate anonymous reporting mechanism exists, employees should retain the right to public disclosure.
