OpenAI whistleblower found dead in San Fran apartment in what's being called a 'suicide'

Image: Suchir Balaji, licensed under X
SAN FRANCISCO, CA - Authorities have confirmed that 26-year-old Suchir Balaji, a former OpenAI researcher, was found dead in his apartment. According to CNBC, David Serrano Sewell, executive director of San Francisco's Office of the Chief Medical Examiner, said in an email, "The manner of death has been determined to be suicide."

Serrano Sewell said that Balaji's next of kin have since been notified. The San Francisco Police Department (SFPD) said that on the afternoon of November 26th, officers were called to an apartment on Buchanan Street to conduct a "wellbeing check."

Upon arriving at the apartment, officers found a deceased adult male. Police said their initial investigation found "no evidence of foul play." After news of his death, Balaji's family requested privacy from media outlets. Balaji reportedly left OpenAI earlier this year after raising concerns that the company had violated U.S. copyright law while developing its popular ChatGPT chatbot.

Back in October, The New York Times published a story about Balaji's concerns. He told the paper, "If you believe what I believe, you have to just leave the company." According to the paper, he believed that ChatGPT and similar chatbots would destroy the commercial viability of the people and organizations that created the digital data and content now widely used to train AI systems.

After his death became public, an OpenAI spokesperson said in a statement, "We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time."

According to People, on October 23rd, Balaji referenced the NYT piece in a post on X, saying, "I recently participated in a NYT story about fair use and generative AI, and why I'm skeptical 'fair use' would be a plausible defense for a lot of generative AI products." He continued, "That being said, I don't want this to read as a critique of ChatGPT or OpenAI per se, because fair use and generative AI is a much broader issue than any one product or company."

OpenAI is currently involved in legal disputes with a number of publishers, authors, and artists over alleged use of copyrighted material for AI training data. A lawsuit filed by news outlets in December 2023 seeks to hold OpenAI and principal backer Microsoft accountable for billions of dollars in damages.

Earlier this year at an event organized by Bloomberg in Davos, OpenAI CEO Sam Altman said, "We actually don't need to train on their data. I think this is something that people don't understand. Any one particular training source, it doesn't move the needle for us that much."

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. There, he became a believer in artificial intelligence's potential to benefit society, including its ability to cure diseases and stop aging. He told the NYT, "I thought we could invent some kind of scientist that could help solve them."

However, in 2022, two years after joining OpenAI as a researcher, he grew concerned about his assignment: gathering data from across the internet for the company's GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence model. The practice, he said, ran afoul of the country's "fair use" laws governing how people can use previously published work. In late October, he published a post on his personal website arguing that point.

He wrote that "no known factors seem to weigh in favor of ChatGPT being a fair use of its training data." He added, "That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains."

In a November 18th letter filed in federal court, attorneys for the NYT named Balaji as someone who had "unique and relevant documents" that would support their case against OpenAI. He was among at least 12 people, many of them past or present OpenAI employees, whom the newspaper had named in court filings, ahead of depositions, as having material helpful to its case.
 