Daily Current Affairs for UPSC

Regulation of Deepfakes

Topic- Science and Technology [GS Paper-3]

Context- Recently, countries such as China have moved to limit the production of deepfakes, i.e. artificial-intelligence-generated video, audio and images that imitate real people.

Key Highlights 

  • Deepfakes (Deep Learning + Fake) are synthetic media in which a person in an existing image or video is replaced with someone else. 
  • Deepfakes leverage powerful techniques from machine learning (ML) and artificial intelligence (AI) in order to manipulate or generate visual and audio content with a high potential to deceive.

Uses:

  • Many uses are entertaining and some are helpful. 
  • Voice-cloning deepfakes can restore people’s voices if they lose them to disease. 
  • Deepfake videos can brighten up galleries and museums.
  • For the entertainment industry, the technology can also be used to improve the dubbing of foreign-language films and, more controversially, to resurrect dead actors. 

How to spot a deepfake?

  • It gets tougher as the technology improves. 
  • In 2018, US researchers discovered that deepfake faces usually don’t blink (a simple sketch of such a blink check follows this list).
  • Poor-quality deepfakes are easier to spot. 
  • The lip-syncing might be bad, or the skin tone might be patchy. There can also be flickering around the edges of transposed faces.
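The blink cue mentioned in the list above can be illustrated with a small, hypothetical Python sketch. It assumes that six (x, y) landmark points around one eye are already available for each video frame (the kind of output common face-landmark detectors provide) and computes the eye aspect ratio (EAR), which drops sharply whenever the eye closes. The function names and the 0.21 threshold are illustrative assumptions, not part of any specific detection tool.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, in the usual p1..p6 ordering.
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|); it falls towards 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_per_frame, threshold=0.21, min_closed_frames=2):
    """Count blinks in a sequence of per-frame EAR values.
    A blink is a run of at least min_closed_frames consecutive frames with
    EAR below threshold; both numbers are illustrative assumptions."""
    blinks, closed = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            closed += 1
        else:
            if closed >= min_closed_frames:
                blinks += 1
            closed = 0
    if closed >= min_closed_frames:  # handle a blink that ends at the last frame
        blinks += 1
    return blinks

# Usage sketch (landmark extraction itself is out of scope here):
# ears = [eye_aspect_ratio(landmarks_for_frame(f)) for f in frames]
# if blink_count(ears) == 0:
#     print("no blinks detected in the clip: one possible red flag")
```

As the section notes, such cues weaken as the technology improves, so no single heuristic like this is decisive on its own.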

Issues with Deepfakes

  • Building mistrust:
    • As they are compelling, deepfake videos can be used to spread misinformation and propaganda. 
    • They seriously compromise the public’s ability to distinguish between fact and fiction. 

  • Wrongful depiction:
    • There has been a history of using deepfakes to depict someone in a compromising or embarrassing situation. 
    • For example, there is no dearth of deepfake pornographic material featuring celebrities. Such photos and videos not only amount to an invasion of the privacy of the people purportedly shown in them, but also to harassment.
    • As the technology advances, creating such videos will become even easier. 

  • Financial fraud:
    • Deepfakes have been used to commit financial fraud. 
    • In a recent instance, scammers used AI-powered software to trick the CEO of a U.K. energy company over the phone into believing he was speaking with the head of its German parent company. As a result, the CEO transferred a large sum of money (€220,000) to what he thought was a supplier. 
    • The deepfake audio accurately mimicked the voice of the CEO’s boss, including his German accent.

Threats to National Security

  • Influencing elections:
      • These can be used to influence elections. 
      • Recently, Taiwan’s cabinet approved amendments to election laws in order to punish the sharing of deepfake videos or images. 
      • Taiwan is becoming concerned that China is spreading false information to influence public opinion and manipulate election outcomes, and this concern has led to these amendments.
      • This could happen in India’s elections too.
  • Espionage:
      • Deepfakes can be used to carry out espionage activities. 
      • Doctored videos can also be used to blackmail government and defence officials into divulging state secrets.
  • Production of hateful material:
    • In India, deepfakes could be used to produce inflammatory material, such as videos purporting to show the armed forces or the police committing ‘crimes’ in conflict-affected areas. 
    • These deepfakes could also be used to radicalise populations, recruit terrorists, or incite violence.

Legal protection available in India

  • IPC & IT Act:
      • Currently, very limited provisions under the Indian Penal Code (IPC) and the Information Technology Act, 2000 can be potentially invoked to deal with the malicious use of deepfakes. 
      • Section 500 of the IPC provides for punishment in case of defamation. 
      • Sections 67 and 67A of the Information Technology Act punish the publication or transmission of obscene or sexually explicit material in electronic form. 
  • RPA:
      • The Representation of the People Act, 1951, includes provisions which prohibit the creation or distribution of false or misleading information about candidates or political parties during an election period. 
  • ECI Guidelines:
    • The Election Commission of India has set rules which require registered political parties and candidates to get pre-approval for all political advertisements on electronic media, including TV and social media sites, to help ensure their accuracy and fairness. 

Challenges

  • Lack of regulatory framework for AI:
      • There is often a lag between the emergence of new technologies and the enactment of laws to address the issues and challenges they create. 
      • In India, the legal framework related to AI is insufficient to adequately address the various issues which have arisen due to AI algorithms. 
      • The lack of proper regulation creates avenues for individuals, firms and even non-state actors to misuse AI. 
  • Policy vacuums on deepfakes:
      • The legal ambiguity, coupled with a lack of accountability and oversight, is a recipe for disaster. 
      • Policy vacuums on deepfakes are a perfect example of such a situation.
  • Challenging authenticity:
    • As the technology matures further, deepfakes could enable individuals to deny the authenticity of genuine content, particularly if it shows them engaging in inappropriate or criminal behaviour, by claiming that it is a deepfake. 

Way Ahead

  • The Union government must introduce separate legislation regulating the nefarious use of deepfakes and the broader subject of AI. 
  • Legislation must not hamper innovation in AI, but it should recognise that deepfake technology may be used in the commission of criminal acts and should include provisions to address the use of deepfakes in such cases. 
  • The proposed Digital India Bill could also address this issue.
  • Tech firms are working on detection systems that aim to flag up fakes whenever they appear.