
In a nutshell
- Generative AI is accelerating despite ethical and security concerns
- Disinformation spreads faster as systems lose accuracy and restraint
- Military and surveillance uses blur the line between defense and control
In May 2023, artificial intelligence pioneer Geoffrey Hinton parted ways with tech giant Google after more than a decade at the company. Hinton, who went on to win the 2024 Nobel Prize in Physics, warned against the development of AI-powered chatbot technology, which is largely based on his own research on neural networks.
People, Mr. Hinton said, are already so flooded with false photos, videos and texts on the internet that they risk losing the ability to distinguish what is real from what is fabricated. Far greater dangers, he added, stem from large-scale job losses and the automation of warfare. He once believed it would take another 30 to 50 years for AI to surpass human intelligence; he no longer holds that view.
Like Mr. Hinton, tech entrepreneurs Elon Musk and Steve Wozniak, AI researcher Stuart Russell and thousands of other researchers and executives have called for pausing the training of the most powerful AI systems for at least six months. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” their open letter states. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
The letter now has nearly 34,000 signatories. AI critics fear that, through machine learning, the technology will reach a level of superintelligence, surpassing human cognitive performance in practically all relevant areas and eventually becoming capable of dominating humanity.
There is no broad consensus on whether a thinking machine will ever be able to define not only its methods but also its own purposes, a capacity so far unique to human intelligence. Two underlying assumptions often shape this debate: that non-human systems operate in ways comparable to the human mind, and that intelligence exists independently of the social, cultural and historical conditions that give it meaning.
Despite numerous warnings, major technology companies have not slowed their pursuit of superintelligence. On the contrary, competition has only driven them to accelerate. The absence of a binding international legal framework means there are no real constraints on research in this domain.
It is widely agreed that AI has extraordinary transformative power, comparable to the transition from hunter-gatherers to sedentary farmers and herders, or to the invention of the printing press. It is becoming clear that AI will transform the world. But while one can guess with some accuracy what it will change in the short term, medium- and long-term forecasts are much less reliable.
The transformations already underway are dramatic. Mr. Hinton’s warning that humans may soon be unable to tell truth from falsehood is being confirmed daily. Recent research disproves the notion that generative AI systems can reliably identify and correct misinformation on social media. In fact, the likelihood of these tools spreading false claims about current events has nearly doubled in a year. In one study, 35 percent of the AI-generated responses analyzed contained falsehoods, while the share of unanswered queries fell from 31 percent in August 2024 to zero a year later – meaning these systems now provide an answer even when they lack the data to justify one.
Rather than acknowledging their own limits, these systems are increasingly becoming conduits for false information. Disinformation actors flood the web with fabricated material through obscure websites, social media posts and AI-generated content farms, which chatbots fail to distinguish from credible outlets. Efforts to make these systems more current and informative, notably by giving them real-time access to web search, have ironically made them far more vulnerable to manipulation and propaganda.
AI allows disinformation actors to achieve far greater reach and impact with minimal cost and effort compared to traditional propaganda tools like radio, television or print. As the line between truth and falsehood blurs, public trust in democratic institutions erodes, creating more space for extremist narratives to take hold. In this sense, AI has become an influential instrument for both authoritarian governments and non-state groups seeking to undermine democratic resilience and advance their agendas.
The more frequently false information is circulated, the more familiar and believable it becomes. The very term “disinformation,” first used during the Stalin era, originates in the context of asymmetrical warfare. The Soviet Union sought to offset its economic and military disadvantage against the United States through its strength in covert operations and the manipulation of public opinion in the West. Russia later refined these methods, deploying large-scale disinformation campaigns during the 2008 war in Georgia and, even more extensively, throughout the war in Ukraine.
In 2013, the late Wagner Group founder Yevgeny Prigozhin set up the Internet Research Agency in St. Petersburg, which began flooding social networks with bots, trolls, fake websites and purported experts to spread Russian narratives portraying NATO and Ukraine as threats to Russia.
The Russian government, according to a report to the U.S. Congress in January, is concerned with “undermining trust in the democratic institutions of the United States, exacerbating socio-political divisions in the United States, and weakening Western support for Ukraine.” All Russian foreign intelligence services have cyber units that carry out a variety of espionage, sabotage and disinformation operations. This shadow war is part of Russia’s hybrid warfare, which aims to attack the West through conventional and unconventional tactics without the risk of all-out war.
AI allows some influence operations to be run from a standard office computer, reducing the need for a physical presence. German intelligence reports suggest Moscow has used messaging channels such as Telegram to recruit young, pro-Russian individuals in Germany for arson and sabotage, and both Russian and Chinese services have penetrated the digital systems of critical infrastructure deeply enough to potentially disrupt power or rail networks. Unlike Western intelligence services, actors backed by authoritarian regimes often operate with few legal or parliamentary constraints, which makes their activities harder to monitor and counter.
In the global race for AI, the U.S. and China are far ahead of other competitors, yet both face mounting economic and security pressures. China has poured vast resources into research, development and infrastructure to expand its AI-driven industries. The U.S., in turn, has introduced trade restrictions, export controls and tariffs to protect its technological advantage and limit China’s access to critical AI components. Several Chinese companies have been placed on U.S. blacklists over national security concerns.
Palantir co-founder Alexander Karp has criticized Silicon Valley for focusing on consumer software rather than tools for credible deterrence, and faults Washington for failing to grasp the security implications of rapid technological change. In 2024, the Department of Defense allocated $1.8 billion for AI research and development – just 0.2 percent of its $886 billion budget.
It has become clear that NATO, despite its arsenal of state-of-the-art weapons, is not in a position to adequately repel attacks by drones. On the night of September 10, a swarm of 19 Russian kamikaze drones entered the airspace of NATO member Poland for the first time. Polish and Dutch fighter jets, supported by an Italian reconnaissance aircraft and two German Patriot air defense systems, managed to shoot down just four of them – a huge, expensive effort against a threat Ukrainians have faced daily since the Russian invasion began in February 2022.
In a single night on September 7, Russia deployed more than 800 combat drones and 13 missiles; Ukrainian forces intercepted 747 of the drones and four of the cruise missiles. It was the largest drone attack in history to date. In the Russia-Ukraine war, drones cause between 70 and 80 percent of daily combat losses on both sides. NATO has slept through the start of a new era in warfare.
In March 2020, a killer robot was used in combat for the first time, in the Libyan civil war: a Turkish-made Kargu-2, an autonomous drone that can be operated manually or automatically and, in autonomous mode, navigates to and kills its targets on its own. African countries are now using such drones against insurgents. Turkiye and India, along with other countries of the Global Majority, oppose restricting or regulating lethal autonomous weapons systems.
For the targeted killing of terrorists, Israel uses the AI-assisted Lavender database, which has identified tens of thousands of suspected members of Hamas and other Palestinian groups among the approximately 2.3 million inhabitants of the Gaza Strip. Such targeting systems are trained on data that includes gender, age, appearance, movement patterns and social media activity, among other things.
While the EU fiercely debates data sovereignty and individual control over personal information, the reality is that people’s data is already widely collected, analyzed and traded. Beyond the fact that every app leaves digital traces and many users willingly share sensitive information, data from smartphone sensors such as accelerometers, gyroscopes and magnetometers can be analyzed to identify individuals with high precision.
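The point bears illustrating: such identification needs surprisingly little machinery. Below is a minimal, hypothetical Python sketch, assuming nothing beyond the NumPy library, of how raw accelerometer traces could be reduced to a compact statistical “fingerprint” and matched to known users. Every function name, feature and threshold here is an illustrative assumption, not a description of any real identification system.

```python
# Illustrative sketch only: reduces motion-sensor traces to a small feature
# vector and matches it against stored profiles. All names, features and
# thresholds are hypothetical assumptions, not a real surveillance system.
import numpy as np

def sensor_fingerprint(trace: np.ndarray) -> np.ndarray:
    """Reduce an (N, 3) accelerometer trace to a 9-element feature vector:
    per-axis mean, standard deviation and peak magnitude."""
    return np.concatenate([
        trace.mean(axis=0),         # resting orientation / sensor bias
        trace.std(axis=0),          # motion intensity per axis
        np.abs(trace).max(axis=0),  # peak accelerations (gait, taps)
    ])

def identify(trace: np.ndarray, profiles: dict[str, np.ndarray],
             threshold: float = 1.0) -> str | None:
    """Match a new trace to the nearest stored fingerprint by Euclidean
    distance; return None if no profile is close enough."""
    fp = sensor_fingerprint(trace)
    best_id, best_dist = None, float("inf")
    for user_id, stored_fp in profiles.items():
        dist = float(np.linalg.norm(fp - stored_fp))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist < threshold else None

# Example: enroll two simulated users, then re-identify one of them.
rng = np.random.default_rng(0)
walk = lambda bias: bias + 0.3 * rng.standard_normal((500, 3))
profiles = {
    "user_a": sensor_fingerprint(walk(np.array([0.0, 0.0, 9.8]))),
    "user_b": sensor_fingerprint(walk(np.array([0.5, 0.2, 9.6]))),
}
print(identify(walk(np.array([0.0, 0.0, 9.8])), profiles))  # likely "user_a"
```

Real-world studies use far richer features (gait cycles, frequency spectra, typing rhythm) and learned models rather than a simple distance threshold, but the principle is the same: sensor streams that require no user permission on many platforms can serve as a persistent behavioral identifier.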
Scenarios
There is no stopping the evolution of AI technology. Countries such as the U.S. and China, which have the know-how and sufficient computing power, will expand their economic and military advantage over smaller states, especially those of the Global Majority. This will increase global inequality and provoke new conflicts.
Under current conditions, it is unlikely that research and production of autonomous weapons systems can be stopped by international agreements. This could perhaps change if the international community becomes aware of their danger, for example if lethal autonomous weapons systems programmed according to genetic criteria are used for ethnic cleansing in a war.
For reasons of external and internal security, no state, even a democratic one, will be able to permanently forgo the surveillance possibilities offered by AI systems. Under a techno-dictatorship, citizens would have to accept significant restrictions on their freedom.

