NoiseGPT app raises concerns about ‘deep chaos’

A slow shutter speed image of former US president Donald Trump is seen on a television screen in this photo in Warsaw, Poland on February 23, 2022. (Photo by STR/NurPhoto via Getty Images)

The potential of deepfakes to create suspicion, skepticism and manipulation has risen. Photo: STR/NurPhoto via Getty

A new ChatGPT-style chatbot can convert text into famous voices, creating “deepfakes” in the style of Morgan Freeman, Jordan Peterson, Donald Trump and many others.

Users can even train NoiseGPT to imitate their own voice, or the voice of their friends, family members or work colleagues.

Imagine receiving a happy birthday voicemail from your favorite US president, or a voice from beyond the grave, in the form of John Lennon or Elvis, sharing personal information that only your closest relatives know about.

This is the selling point of the newest chatbot application to be released since the November 2022 launch of ChatGPT, the artificial intelligence content generator backed by Microsoft (MSFT).

NoiseGPT chief operating officer Frankie Peartree told Yahoo Finance UK: “We’re currently training the AI to imitate around 25 famous voices, and soon we’ll have 100 plus famous voices to offer.”

Read more: Microsoft’s ChatGPT investment could create a ‘game-changer’ AI search engine

NoiseGPT was released on Telegram on Monday, allowing users to send social media messages to friends, spoken in the voices of famous people.

Peartree said instructions on how to train the app to use your own voice will soon be available on the company’s website.

The app can be used on any smartphone that can download the Telegram messaging application, increasing its potential for mass adoption.

See: How ChatGPT could lead to ‘massive tech unemployment’ – The Crypto Mile

The ability of future AI applications to imitate your own voice, your friends’ voices, or that of anyone you can get a voice sample from has raised concerns, such as children receiving messages that imitate the voice of a parent.

Deepfakes are not technically illegal in any jurisdiction. However, their potential to create suspicion, skepticism and manipulation is a concern.

NoiseGPT said its app will try to sidestep the violations of personal and intellectual property rights that deepfake tech enables. When users select the famous voice they want their text spoken in, the options will be labelled “not Donald Trump” or “not Jennifer Lawrence”, to avoid violations.
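The “not X” labelling convention described above amounts to a simple prefix on every famous-voice option. The sketch below is purely illustrative; the voice list, function name and menu structure are assumptions, not NoiseGPT’s actual code:

```python
# Illustrative sketch: prefix each celebrity-style voice with "not" to flag
# it as a parody, as the article describes. All names here are assumed.
VOICES = ["Donald Trump", "Jennifer Lawrence", "Morgan Freeman"]

def display_label(voice_name: str) -> str:
    """Return the disclaimer-style menu label for a famous voice."""
    return f"not {voice_name}"

menu = [display_label(v) for v in VOICES]
print(menu)  # ['not Donald Trump', 'not Jennifer Lawrence', 'not Morgan Freeman']
```

Whether such labelling actually shields the app from right-of-publicity claims is, of course, a legal question rather than a technical one.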

Is society about to descend into deep chaos?

Peartree thinks it won’t come to that. He told Yahoo Finance UK: “I think it’s a good thing, it will cause some chaos in the beginning, but eventually we will find a balance. This was also the concern when Photoshop came out, for example.”

He also said that, given the legal implications, censorship resistance is being built into the design of the application: it will not be stored on a centralized server, but will use decentralized blockchain-based storage.

Read more: How to master using the new AI tool ChatGPT

“Legal issues are one of the reasons we will decentralize quickly, for the training as well as the API connection, so we can’t be censored,” he said.

The decentralized nature of the new application means that the computational load of running it will be shared among computers around the world, “which will run the models, the training and the API feed from people’s homes”. Users who run the program from their home computers will be rewarded with NoiseGPT cryptocurrency tokens.

Peartree said: “People who create new popular voices for the app will also be rewarded in cryptocurrency.

“There is currently a 5% tax on all transactions with this cryptocurrency, but this will be removed in the future. All funds are used for development and operations; there were no team tokens and the entire supply was sold publicly.”
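As a quick worked example of the 5% transaction tax Peartree describes (the transfer amount is hypothetical):

```python
TAX_RATE = 0.05  # 5% tax on all transactions, per the article

def net_after_tax(amount: float) -> tuple[float, float]:
    """Return (tokens received, tax withheld) for a token transfer."""
    tax = amount * TAX_RATE
    return amount - tax, tax

net, tax = net_after_tax(1000.0)
print(net, tax)  # 950.0 50.0
```

So a transfer of 1,000 tokens would deliver 950 to the recipient, with 50 withheld for development and operations.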

Legal and societal implications of deepfake technology

The ability to mimic the human voice could challenge the veracity of the information we receive online and through our phones, and call into question the personal communications we get on messaging apps.

This also has implications for nation-state interaction, and the way the technology could be used to influence rivals and sway public opinion.

Policymakers are now working to mitigate the risks of deepfakes, but current UK laws have yet to catch up.

These laws only cover the distribution of real images, especially in situations like revenge porn, where an ex-partner shares explicitly private and confidential material in public.

If an offender creates and shares pornographic deepfake content using the identity of their “target”, they can only face prosecution if they directly harass the target by sending them the content, or if the offence involves copyright infringement.

Read more: What went wrong with Google’s ChatGPT rival Bard?

The wider legal and societal implications of deepfake technology may extend to:

  • Infringement of intellectual property rights — deepfake technology can be used to impersonate someone who owns intellectual property, potentially infringing their rights.

  • Violation of personal rights — deepfakes can be used to create exploitative or pornographic content, which violates a person’s privacy and personal rights.

  • Damage to reputation — deepfakes can spread false information and damage a person’s reputation, with consequences for their personal and professional life.

  • Data protection and privacy compromise — deepfakes can threaten a person’s privacy and data protection, leaving them vulnerable to identity theft and other forms of cybercrime.

  • Interference with political agendas — deepfakes can be used to manipulate public opinion, especially during times of heightened political tension such as elections.

  • The spread of misinformation — deepfakes can be used to spread false information and lead to a general lack of trust in news sources, individuals and institutions.

  • Liability concerns — the use of deepfakes in marketing and other promotional materials may lead to liability concerns if consumers are misled or misinformed.

  • Threat to national security — deepfakes can create geopolitical tensions and threaten national security if used to spread false information or manipulate public opinion.

Deepfakes are becoming more and more realistic and, as technology advances, online video and audio communications may become more suspect.

Watch: Why UK banks are blocking crypto exchanges | The Crypto Mile

