Kremlin information warfare has evolved from human-driven troll farms to campaigns powered by generative artificial intelligence (AI) that can create deepfakes, fake online personas, and synthetic news at unprecedented scale and speed. The war in Ukraine illustrates how generative AI enhances Russia’s propaganda and psychological operations, serving as a testing ground for tactics later deployed elsewhere. Across Europe and the Global South, generative AI-driven manipulation increasingly threatens elections and public trust by blurring the line between fabricated and authentic information. Generative AI is not inherently a problem; its misuse is. The same technology used for manipulation can also build resilience. To protect information integrity, democracies should invest in generative AI-assisted detection and analysis tools, media literacy, and pre-bunking programs, and establish coordinated monitoring systems.