AI Strategies in Complex Geopolitics

By Dr. Sachin Tiwari

Published on November 1, 2024


The arrival of AI assistants from 2010 onwards transformed a niche computer technology, once confined to labs and academics, into a matter of public consciousness. During the same period, leaders deliberated on the potential risks and threats to national security. This is reflected in the launch of numerous AI initiatives and policies: according to the OECD, there are over 1,000 policy initiatives across 69 countries and the EU. Such strategies place AI advances on the agenda at the highest political level, setting the course of their development. A key driver pushing states to shape AI is geopolitics, primarily the U.S.-China competition, which is shaping not only security issues but also economic and technological challenges. There is an emerging drive among countries to avoid reliance on any one or a few powers and to better manage the risks emanating from these technologies. States increasingly aspire to self-reliance and intend to practice strategic autonomy to achieve economic growth fuelled by emerging technologies.

Among the notable players shaping AI debates is the European Union (EU). The EU deliberated early on the implications of digital technology, gaining a first-mover advantage in shaping the debates. What has developed is a series of regulations governing the tech environment in the EU, grounded in a human-centric approach and in maintaining the bloc's position as a leading technology hub. The EU AI Act adopts a cautious stance based on a pre-evaluation model for reducing the risks that emanate from AI. This risk-based approach grades systems from unacceptable risk (social credit scoring and facial recognition in public spaces), to high risk (AI in employment services), to limited risk (chatbots). Another important aspect is its extraterritorial nature, similar to the General Data Protection Regulation (GDPR): companies that operate in or conduct business with EU member states are subject to the law even if based outside the Union. However, strict regulations have raised concerns over economic security, with U.S.-based big tech firms such as Google holding dominant positions and posing competition to companies in the EU. This stance is exemplified in the adoption of the Digital Services Act and the Digital Markets Act, which impose strict penalties on tech companies. Several EU firms have called for adjustments to the AI Act, citing competition from other players.

In contrast to the EU, the United States has a policy better described as a mosaic of individual agency approaches than an overarching law. It relies on balancing innovation with minimal government oversight. This has resulted in the adoption of responsible AI frameworks by large tech firms; Meta and Google, for instance, have their own approaches to developing AI. However, there has been dissatisfaction with the increasing harms and with the ability of companies to shape the rules themselves. The Biden administration released a Blueprint for an AI Bill of Rights in 2022, outlining guidelines for the development of responsible AI with a focus on developers. The absence of federal regulation has pushed several states to propose tougher stands on regulating AI. These debates have been sharpened by concerns over stifling innovation amid competition from China: the Australian Strategic Policy Institute (ASPI) tracker shows China leading in 37 out of 44 categories of emerging technologies. This has also led the Department of Defense to adopt an AI strategy that calls for the integration of AI into defense.

The EU and the US are not the only leading states in AI regulation. China moved early to shape AI development in competition with the United States. Its first AI plan appeared in 2017 as part of the larger Made in China 2025 programme, which aimed to make China a leader in critical technology sectors. China's government has sought to balance the necessities of innovation with domestic control, regulating AI under the label of AI sovereignty. Its guidelines focus on the ideological and political implications of algorithms, with directions for content control based on socialist values, i.e. the party's ideology. Another aspect is tighter data control, with the establishment of new agencies such as the National Data Bureau to leverage the country's massive domestic data. As scholars have identified, strict regulations have been modified to allow for innovation: the directives cover narrower aspects such as recommendation algorithms and deep fakes rather than AI as a whole. This imperative is illustrated by the difference between AI regulations for public-facing use and the latitude given to enterprises, research bodies and public institutions to leverage AI.

In contrast to these dominant approaches, India has taken a light-touch approach to regulating AI that can be considered a hybrid model, mitigating harmful effects while boosting a nascent AI industry. What has developed is a matrix of programmes supporting initiatives under Sovereign AI, i.e. building an indigenous tech ecosystem with greater control. The adoption of Digital Public Infrastructure (DPI) like India Stack provides the bedrock for building AI-first systems from the ground up. At the same time, risks are managed on an ad-hoc basis, as in the case of under-tested and unreliable AI platforms. A major focus is on using AI across key sectors including agriculture, healthcare, financial access and education. This differs from the approaches of the US and China, which concentrate on the development of foundational models.

Alongside national approaches, there have been several efforts at the multilateral level, with the G7, OECD, G20, UN and the UK's AI Safety Summit proceeding to regulate AI. However, these efforts have been limited by differences between key players over shaping global norms. For instance, while the U.S. has advocated the development of safe, secure and trustworthy AI, China has advocated "beneficial, safe and fair" systems. The divergence is deepened by increasing U.S. export controls limiting the supply of sensitive technology to China. While both powers pursue their own paths, India's approach can resonate across developing countries given its focus on the developmental issues that impact the Global South.

These different models reflect domestic priorities and the challenge of navigating emerging geopolitics. In several instances, governments are evaluating the impact of AI on their development and security. Calculations across several indexes suggest an emerging technological parity between the US and China, alongside reports of massive impacts on jobs and social stability. While the stiff US-China competition has spilled over into the AI sector, other countries focus on self-reliance in the form of technological sovereignty.


*The Author is a Research Fellow at the Kalinga Institute of Indo-Pacific Studies (KIIPS).

Disclaimer: The views expressed in the article are those of the author.

Image credits: Foreign Policy Magazine