AI-powered tools, deepfakes pose fake-information challenge for internet users

Artificial intelligence, deepfakes and social media… little understood by ordinary people, the combination of the three poses a formidable obstacle to millions of internet users caught in the daily battle of separating the real from the fake.

The fight against fake information has always been challenging, and it is getting harder as AI-powered tools make deepfakes on social networks more difficult to detect. AI’s ability to generate fake news faster than it can be debunked has troubling consequences.

“In India’s rapidly changing information ecosystem, deepfakes have emerged as the new frontier of disinformation, making it difficult for people to distinguish fake information from authentic information,” Syed Nazakat, founder and CEO of DataLEADS, a digital media group that runs information literacy and infodemic management initiatives, told PTI.

India is already battling a flood of misinformation in various Indian languages. This will get worse as AI bots and tools drive more deepfakes online.

“The next generation of AI models, called generative AI — for example, DALL-E, ChatGPT, Meta’s Make-A-Video etc — don’t need original source content to work from. Instead, they can generate an image, text or video based on a prompt. These are still in the early stages of development, but one can see the potential for harm, as we would not have the original content to use as evidence,” added Azahar Machwe, who worked as an AI business architect at British Telecom.

What is a deepfake?

Deepfakes are photos and videos in which one person’s face is convincingly replaced with another’s. Many such AI tools are available to internet users on their smartphones, often free of charge.
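At its core, a face swap replaces the pixels of one face region with another’s; real deepfake tools use neural networks (autoencoders or GANs) to regenerate the target face frame by frame so the seams are invisible. The toy sketch below — plain NumPy, not any tool mentioned in this article, and purely illustrative — shows only the region-replacement idea:

```python
import numpy as np

def naive_face_swap(target, source, box):
    """Copy the box region (y0, y1, x0, x1) of `source` into `target`.

    A real deepfake pipeline would regenerate and blend the face with a
    neural network; this merely illustrates the pixel-substitution core.
    """
    y0, y1, x0, x1 = box
    result = target.copy()
    result[y0:y1, x0:x1] = source[y0:y1, x0:x1]
    return result

target = np.zeros((8, 8), dtype=np.uint8)      # stand-in "video frame"
source = np.full((8, 8), 255, dtype=np.uint8)  # stand-in "donor face"
swapped = naive_face_swap(target, source, (2, 6, 2, 6))
```

Everything outside the swapped box is untouched, which is exactly why crude swaps leave detectable seams at the boundary.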

In its simplest form, AI can be defined as using computers to do things that normally require human intelligence. A notable example is the ongoing competition between OpenAI’s ChatGPT (backed by Microsoft) and Google’s Bard.

While both AI tools automate the creation of human-level writing, the difference is that Bard uses Google’s Language Model for Dialogue Applications (LaMDA) and can provide answers drawing on real-time, current information pulled from the internet, whereas ChatGPT uses the Generative Pre-trained Transformer (GPT-3) model, trained on data available up to late 2021.

Recent examples

Two synthetic videos and a digitally altered screenshot of a Hindi newspaper report, shared last week on social media platforms including Twitter and Facebook, highlighted the unintended effects of AI tools: altered images and videos laced with misleading or false claims.

A synthetic video is any video produced with AI, without cameras, actors or other physical elements.

A video of Microsoft founder Bill Gates being cornered by a reporter in an interview was shared as genuine and later found to be edited. A digitally altered video of US President Joe Biden calling for a national draft (mandatory enrolment of individuals into the armed forces) to fight the war in Ukraine was likewise shared as authentic. In a third case, a photo edited to look like a Hindi newspaper report was widely circulated to spread falsehoods about migrant workers in Tamil Nadu.

All three instances – two fake videos and a digitally altered screenshot of a Hindi newspaper report – were shared on social media by thousands of netizens who thought they were real.

The claims gained traction on social media and in the mainstream media, underlining how AI tools can be misused to create doctored images and videos carrying misleading or false claims.

PTI’s Fact-Checking Team examined the three claims and debunked them as ‘deepfakes’ and ‘digitally edited’ content created using powerful AI tools readily available online.

AI and fake news

In the last few years, the introduction of AI in journalism has raised the prospect of revolutionizing the industry and the production and distribution of news. It was also seen as an effective way to curb the spread of fake news and misinformation.

“The weakness of deepfakes is that they need original content to work with. For example, the Bill Gates video overlaid fake audio on the original interview. Such videos are easy to debunk if the original is known, but this takes time and the ability to search for the original content,” Azahar told PTI.

He believes the deepfakes shared recently on social media were easy to detect, but worries that debunking such fake videos will become a bigger challenge in the coming days.

“Manipulating the original video can introduce defects (e.g. light/shadow disparity) that AI models can be trained to recognise. The resulting videos are often kept at low quality to hide these flaws from algorithms (and humans),” he explained.
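The detection idea Azahar describes — manipulated regions carrying statistics (lighting, sharpness, noise) that differ from the rest of the frame — can be illustrated with a single hand-crafted feature. The sketch below (plain NumPy, a toy under the assumption that tampering shows up as a blockwise-variance outlier; real detectors are trained neural networks) flags a frame whose local brightness variance has an extreme outlier:

```python
import numpy as np

def block_variances(frame, block=4):
    """Brightness variance of each non-overlapping block of the frame."""
    h, w = frame.shape
    out = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            out.append(frame[y:y + block, x:x + block].var())
    return np.array(out)

def suspicious(frame, z_thresh=3.0):
    """Flag a frame whose blockwise variance contains an extreme outlier."""
    v = block_variances(frame)
    z = (v - v.mean()) / (v.std() + 1e-9)  # z-score each block
    return bool(np.any(np.abs(z) > z_thresh))

clean = np.zeros((16, 16))          # smooth, consistent frame
tampered = clean.copy()
tampered[4:8, 4:8] = np.tile([[0, 255], [255, 0]], (2, 2))  # pasted patch
```

Here `suspicious(tampered)` fires because the pasted block’s texture is statistically out of line with its surroundings — the same reason heavy recompression (low quality) hides such artifacts from both algorithms and humans.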

According to him, fake news circulates in many forms, and the deepfakes being made with basic AI-powered tools these days are still easy to detect.

“But there can never be 100 per cent accuracy. Intel’s detector, for example, promises 96 per cent accuracy, which means 4 out of 100 will still slip through,” he added.

Way forward

Many social media platforms claim to curb misinformation at the source by developing fake-news detection algorithms based on language patterns and crowdsourcing. The aim is to stop false information from spreading in the first place, rather than discovering and removing it after the fact.
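In its simplest form, detection "based on language patterns" means scoring a post against word frequencies learned from labelled examples. The sketch below is a toy naive-Bayes-style scorer with made-up training posts — platform systems are vastly larger and also use network and crowdsourcing signals — but it shows the principle:

```python
from collections import Counter
import math

def train(posts):
    """posts: list of (text, label) pairs, label 'fake' or 'real'."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in posts:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Log-odds that `text` is fake (positive => more fake-like)."""
    total = {c: sum(counts[c].values()) for c in counts}
    log_odds = 0.0
    for w in text.lower().split():
        p_fake = (counts["fake"][w] + 1) / (total["fake"] + 1)  # add-one smoothing
        p_real = (counts["real"][w] + 1) / (total["real"] + 1)
        log_odds += math.log(p_fake / p_real)
    return log_odds

# Hypothetical labelled examples, purely for illustration
posts = [
    ("shocking secret cure doctors hate", "fake"),
    ("miracle cure shocking truth", "fake"),
    ("government releases quarterly budget report", "real"),
    ("court hears budget case today", "real"),
]
model = train(posts)
```

A positive score for a new post suggests it resembles the fake examples; in practice such scores feed into review queues rather than automatic removal.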

While these examples of deepfakes highlight the potential threats of AI in producing fake news, AI and machine learning have also given journalism a number of workflow tools, from automatic voice-aware transcription to content generation.

“AI continues to help journalists focus their efforts on developing quality content as technology ensures timely and rapid content distribution. Human-in-the-loop will be required to check the consistency and authenticity of content shared in any format – text, image, video, audio etc.,” Azahar said.

Deepfakes should be clearly labelled as ‘artificially generated’ in India, which had over 700 million smartphone users (aged two and above) in 2021. A recent Nielsen report says rural India has more than 425 million internet users, 44 per cent more than the 295 million internet users in urban India.

“People tend to retreat into an ‘echo chamber’ of like-minded people. We need to embed media literacy and critical thinking curricula in basic education to raise awareness and give people faster ways to protect themselves from misinformation.

“We need a multi-pronged, multi-disciplinary approach across India to prepare people of all ages to navigate the digital landscape of today and tomorrow and to guard against deepfakes and misinformation,” said Nazakat.

In a country as large as India, the changing information landscape creates an even greater need for multilingual literacy skills. He added that all educational institutions should make information literacy a priority over the next ten years.
