Internal Security

AI and Defence

Definition 

  • AI systems are used in the defence sector in two ways, as systems performing supporting functions and as systems performing offensive functions:
    • Supporting functions such as intelligence, surveillance, navigation, and enhanced Command and Control (C2) capabilities.
    • Offensive functions such as selecting targets and carrying out actions, including drone swarms, AI-driven hacking, and so on.
  • AI can also study patterns of cyber-attacks and devise defensive techniques, for example against malware attacks.

Significance of using AI in various aspects of Defence & Security

  • Autonomous Weapons and Loitering Munition Systems: These autonomously search for, identify and engage targets, enabling faster and more effective strikes. E.g., Israeli Harpy and Harop drones.
  • Enhanced Target Recognition and Precision: To identify and engage specific military targets, such as missile systems, while avoiding civilian infrastructure where required.
    • E.g., the Iranian-made Shahed-136 AI drones used in the Russia-Ukraine conflict.
  • Real-Time Data Analysis: To process large volumes of data from surveillance systems in real time, providing vital intelligence for battlefield decision-making.
    • E.g., Project Maven, a U.S. initiative to analyse large quantities of surveillance data.
  • Combat Simulation and Training: Generative AI can enhance military training and educational programmes by creating new training materials. E.g., training modules for the Sukhoi-30 MKI aircraft.
  • Prediction of Crimes and Criminal Tracking: Using Command, Control, Communications, Computers and Intelligence, Surveillance and Reconnaissance (C4ISR) systems.
    • For example, BEL developed the Adversary Network Analysis Tool (ANANT) for the prediction of attacks.
  • Protection against Cyber Attacks: AI can detect potential threats and use predictive analytics to help anticipate future attacks.
    • For example, Project Seeker, developed and deployed by the Indian Army for surveillance, garrison security and population monitoring.

Issues in the use of AI in Defence

  • Use by Non-State Actors: Criminals and terrorists can leverage the power of generative AI for deliberate attacks. E.g., the Islamic State issued a guide on how to use generative AI tools.
  • Social Engineering: AI can manipulate social media algorithms to influence target groups towards radicalisation. E.g., sharing neo-Nazi AI-generated content on platforms such as Telegram.
  • New Malware Creation: Capabilities such as the ability to write malware can make AI dangerous in the hands of bad actors.
  • AI in Surveillance and Privacy Violations: For instance, China’s facial recognition surveillance systems in Xinjiang used to track Uyghur Muslims, violating their human rights.

Ethical concerns

  • Automation bias: It is hard for such systems to differentiate between lawful targets and civilian targets, potentially leading to unpredictable attacks due to lack of data.
  • Principle of proportionality: These systems would need qualitative evaluation to determine whether an attack carried out against a lawful target would be considered proportionate.
  • Predictability of an autonomous system: If a weapon cannot be controlled because its operator is unable to understand the system, it can violate international humanitarian law.
  • Objectification of human targets: The integration of AI-enabled weapon systems facilitates rapid attacks, leading to a heightened tolerance for collateral damage.

Indian Initiatives in adopting AI in defence

  • Task Force on Strategic Implementation of AI for National Security and Defence: Chaired by N. Chandrasekaran, with the aim of strengthening AI-based weapon systems.
    • Based on its recommendations, the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) were set up in 2019.
  • Launch of 75 newly developed AI technologies: The Indian Defence Minister launched these AI technologies during the first-ever ‘AI in Defence’ (AIDef) symposium in July 2022.

Steps taken internationally to regulate AI in defence

  • A Group of Governmental Experts under the UN Convention on Certain Conventional Weapons (CCW) was established in 2016 to discuss issues related to technologies in the area of lethal autonomous weapon systems, including AI.
  • The First Committee of the UN approved a new resolution on Lethal Autonomous Weapons in 2023, recommending that algorithms must not be in full control of decisions involving killing.
  • UNIDIR released guidelines for the development of national strategies on AI in security and defence in October 2024.
  • International Committee of the Red Cross (ICRC): Advocates for a comprehensive and binding set of norms and rules for the development and use of AI-based autonomous weapon systems.