Whistleblowers have filed a complaint with the Securities and Exchange Commission alleging that the artificial intelligence company OpenAI illegally prohibited its employees from reporting potential dangers of its technology to federal regulators. The filing also calls for an investigation into whether OpenAI’s employment, severance, and nondisclosure agreements penalized workers for raising concerns about the company’s practices.
The whistleblowers claim that OpenAI’s contracts required employees to waive their federal rights to whistleblower compensation and to obtain the company’s prior consent before making any disclosure to federal authorities. Because the agreements carved out no exemption for reporting securities violations to the SEC, they argue, the contracts violated long-standing federal laws designed to protect whistleblowers.
“These contracts sent a message that ‘we don’t want employees talking to federal regulators,’” said one whistleblower, who requested anonymity for fear of retaliation. “I don’t think that AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.”
In response, OpenAI spokesperson Hannah Wong said, “Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”
The complaint comes amid growing concern that OpenAI, which was founded as a nonprofit with an altruistic mission, is sacrificing safety in the pursuit of profit. Reports have emerged that OpenAI rushed the release of its latest AI model, which powers ChatGPT, to meet a May deadline despite internal concerns that safety testing had not been thorough enough. OpenAI spokesperson Lindsey Held countered that the company “didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams.”
Tech companies’ use of strict confidentiality agreements has long been controversial. During the #MeToo movement and the national protests that followed George Floyd’s murder, workers complained that such agreements kept them from reporting misconduct. Regulators, too, worry that these terms have prevented tech employees from alerting them to wrongdoing in an industry where algorithms can swing an election, put children at risk, or shape health decisions.
The rapid rise of AI has heightened lawmakers’ fears about the tech industry’s power and fueled calls for regulation. AI companies remain lightly regulated in the U.S., and policymakers say they rely on whistleblowers to understand the risks of fast-developing technology.
“OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” Sen. Chuck Grassley, R-Iowa, said in a statement. “In order for the federal government to stay one step ahead of artificial intelligence, OpenAI’s nondisclosure agreements must change.”
The whistleblowers’ letter urges the SEC to take “swift and aggressive” enforcement action against such agreements, arguing that they have wider implications for the AI sector and run counter to the recent White House executive order calling for the safe development of AI. The whistleblowers underline that workers are best placed to identify and sound the alarm about dangers and to help ensure that AI benefits humanity rather than causing harm.
The SEC has acknowledged receipt of the complaint but has not said whether it has opened an investigation. It is important, the whistleblowers argue, that employees feel able to come forward and that OpenAI operate as transparently as possible.
The letter asks the SEC to order OpenAI to produce all agreements containing nondisclosure clauses and to notify past and current employees of their rights, and to fine the company for every improper agreement. It urges the SEC to ensure that OpenAI remedies the “chilling effect” of its past practices.
As the tech industry continues to wrestle with the ethical questions raised by AI, whistleblowers will remain essential to preserving transparency and accountability.