Scarlett Johansson ‘shocked, angered’ over ‘eerily similar’ ChatGPT voice

Though she may have once voiced a fictional operating system in the movie Her, Scarlett Johansson said she has no interest in speaking for real-life artificial intelligence (AI).

On Monday, Johansson said a newly released ChatGPT voice, named “Sky,” sounded “eerily similar” to her. The AI voice “shocked” and “angered” Johansson, 39, who revealed that nine months ago she declined an offer from OpenAI CEO Sam Altman to work on their new voice chatbot.

The voice, alongside four others, was created for OpenAI’s current GPT-4o system and was released last week.

The company announced on Monday it would pause the use of Sky after it was widely compared to Johansson. OpenAI did not specify why it opted to silence Sky.

“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson wrote in a statement, which was shared by NBC News.

“He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI,” she continued. “He said he felt that my voice would be comforting to people.”

After consideration, Johansson said she declined to work with OpenAI for “personal reasons.”

“Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me,” she wrote. “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”

On Sunday, OpenAI denied any intentional likeness between ChatGPT’s Sky and Johansson. Rather, the company said Sky and the four other voices (Breeze, Cove, Ember and Juniper) were created using voice actors who received “top-of-market” pay rates.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote in a release. “To protect their privacy, we cannot share the names of our voice talents.”

In a statement to NBC, Altman again denied any intentional similarities to Johansson.

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” he said. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”


In her statement, Johansson pointed to a post Altman made on X on May 13, the same day OpenAI demoed GPT-4o and its voice chat feature, that simply read, “her,” an apparent reference to Johansson’s role in the film of the same name.

“Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human,” Johansson said.

Johansson said Altman asked her agent to reconsider the offer to work with OpenAI only two days before the May 13 demo.

After the launch, the actor said she was “forced” to hire legal representatives who sent letters to Altman and OpenAI asking for “the exact process by which they created the ‘Sky’ voice.”

“Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice,” Johansson wrote.

She said she and her lawyers look forward to “transparency” and will work to ensure her individual rights are protected.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” she concluded.

The launch of Sky, and its similarity to Johansson’s voice, drew widespread attention and mockery online. Even Elon Musk, a former OpenAI board member who has since fallen out with Altman, poked fun at the Sky voice. Musk, who owns his own AI company, xAI, compared the incident to an episode of the popular sci-fi show Black Mirror.

Johansson is far from the only celebrity to have concerns about AI and deepfakes, which are realistic-looking but fake images, video or audio created by AI algorithms.

At the start of 2024, sexually explicit AI-generated images of Taylor Swift began circulating on X. The fake photos were shared widely and racked up tens of millions of views before they were removed.

Beyond the world of entertainment, politicians have also been frequent targets of AI manipulation. In March, a deepfake video using Prime Minister Justin Trudeau’s likeness was posted to YouTube to promote a financial “robot trader.” The video was removed from the platform, with Google, YouTube’s owner, calling it a scam.

In December 2023, Canada’s cybersecurity watchdog warned that voters should be on the lookout for AI-generated images and video that would “very likely” be used to try to undermine Canadians’ faith in democracy in upcoming elections.

Outside of Canada, Italian Prime Minister Giorgia Meloni in March launched a lawsuit against two men who allegedly made pornographic deepfakes of her.

Even regular people have been targeted by deepfakes, often created as “revenge porn” or as a financial scam.


Video: ‘Digital scams: Unmasking AI voice scams & fake iPhones’


In February, many of the leading figures in AI development, including noted Canadian computer scientist Yoshua Bengio, signed an open letter calling for more regulation around the creation of deepfakes.

“Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed,” the group said in the letter.

On Thursday, Jan Leike, a key safety researcher at OpenAI, left his job at the company, citing long-standing disagreements with leadership and concerns about the company’s priorities.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote in a thread posted to X. “But over the past years, safety culture and processes have taken a backseat to shiny products.”
