Context
The summit on Responsible Use of Artificial Intelligence in the Military Domain (REAIM) has begun in Seoul, South Korea.
About
- It is part of a new wave of global diplomacy aimed at shaping norms for the military applications of AI.
- The summit is being co-hosted by Kenya, the Netherlands, Singapore, and the UK.
- This is the second edition of the summit; the first took place in 2023 in The Hague, Netherlands.
- The three-fold goal of the summit is to:
- understand the implications of military AI for international peace and security,
- develop new norms for the use of AI systems in military affairs,
- and advance ideas on the long-term international governance of AI in the military domain.
Artificial Intelligence
- Artificial intelligence (AI) is a broad branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.
- AI allows machines to model, and in some cases improve upon, the capabilities of the human mind.
- From the development of self-driving vehicles to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is increasingly a part of everyday life and an area in which every industry is investing.
Application of AI in Military
- While AI has long been used by leading militaries for inventory management and logistical planning, the use of AI in battlefield intelligence, surveillance, and reconnaissance has grown significantly in the past few years.
- Major militaries see the potential of AI to transform the collection, synthesis, and analysis of vast amounts of data from the battlefield.
- It can be useful in improving situational awareness, increasing the time available for decision-making on the use of force, enhancing precision in targeting, limiting civilian casualties, and increasing the pace of war.
- Many critics have warned that these presumed attractions of AI in war may be illusory and dangerous.
- The proliferation of so-called AI decision-support systems (AI-DSS) and their implications are among the issues now being debated under the REAIM process.
Need for the Regulation
- The fear that the conduct of war would be taken over by computers and algorithms has generated calls for controlling these weapons.
- Keeping humans in the decision-making loop on the use of force has been a central goal of this discourse.
- Issues relating to lethal autonomous weapon systems (LAWS) have been discussed by a Group of Governmental Experts at the United Nations in Geneva since 2019.
- The REAIM process has widened the debate beyond ‘killer robots’ to a broader range of issues, recognising that AI systems are finding ever more applications in war.
Responsible use of AI in Military Affairs
- The REAIM process is one of many initiatives, national, bilateral, plurilateral, and multilateral, to promote responsible military AI.
- The US has also encouraged its NATO allies to adopt similar norms.
- NATO’s 2021 strategy identified six principles for the responsible military use of AI and laid out a set of guidelines for its forces.
- The goal is to “accelerate” the use of AI systems that could generate military advantages for NATO, but in a “safe and responsible” manner.
- The world is going to see more AI in warfare, not less; this comports with the historical trend that every new technology eventually finds military applications.
- The REAIM process recognises this, and given the potentially catastrophic consequences of such use, the idea is to develop an agreed set of norms.
Source: Indian Express