
OpenAI on the defensive after a number of PR setbacks in a single week


The OpenAI logo under a raincloud.

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new stage of public visibility, one in which its AI approach is drawing pushback well beyond tech pundits and government regulators.

OpenAI’s rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called “Sky” that some said sounds like Johansson, it was GPT-4o’s seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson’s voice.

After hearing from Johansson’s lawyers, OpenAI announced on Sunday that it was pausing use of the “Sky” voice in ChatGPT. The company mentioned Sky specifically in a tweet and addressed the Johansson comparison defensively in its blog post: “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.

On Monday night, NPR news reporter Bobby Allyn was the first to publish a statement from Johansson saying that Altman approached her to voice the AI assistant last September, but she declined. She says that Altman then tried to contact her again before the GPT-4o demo last week, but they didn’t connect, and OpenAI went ahead with the apparent soundalike anyway. She was then “shocked, angered, and in disbelief” and hired attorneys to send letters to Altman and OpenAI asking them for detail on how they created the Sky voice.

“In a time when we are all grappling with deepfakes and the protection of our own likenesses, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

The repercussions of these alleged actions on OpenAI’s part are still unknown but are likely to ripple outward for some time.

Superalignment team implodes

The AI research company’s PR woes continued on Tuesday with the high-profile resignations of two key safety researchers: Ilya Sutskever and Jan Leike, who led the “Superalignment” team focused on ensuring that hypothetical, currently nonexistent advanced AI systems don’t pose risks to humanity. Following his departure, Leike took to social media to accuse OpenAI of prioritizing “shiny products” over essential safety research.

In a joint statement posted on X, Altman and OpenAI President Greg Brockman addressed Leike’s criticisms, emphasizing their gratitude for his contributions and outlining the company’s strategy for “responsible” AI development. In a separate, earlier post, Altman acknowledged that “we have a lot more to do” regarding OpenAI’s alignment research and safety culture.

Meanwhile, critics like Meta’s Yann LeCun maintained the drama was much ado about nothing. Responding to a tweet where Leike wrote, “we urgently need to figure out how to steer and control AI systems much smarter than us,” LeCun replied, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.”

LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts [sic] that can transport hundreds of passengers at near the speed of sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic nonstop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”

