Google also identified malware families that use AI-generated decoy code to evade detection. Some samples reportedly included inactive blocks of AI-written filler code designed to make the software appear legitimate to security scanners.
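Inactive filler of this kind is, by definition, code that never executes. As a rough illustration of how a scanner might surface it, here is a toy heuristic (not GTIG's actual method) that uses Python's `ast` module to count statements appearing after a `return` or `raise` in the same block:

```python
import ast
import textwrap

def unreachable_blocks(source: str) -> int:
    """Count statements that appear after a return/raise in the same
    block -- a crude proxy for 'inactive filler' code."""
    tree = ast.parse(textwrap.dedent(source))
    count = 0
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        terminated = False
        for stmt in body:
            if terminated:
                count += 1
            if isinstance(stmt, (ast.Return, ast.Raise)):
                terminated = True
    return count

sample = """
def handler(x):
    return x
    helper_a()   # never executes
    helper_b()   # never executes
"""
print(unreachable_blocks(sample))  # -> 2
```

Real scanners combine many such signals; dead code alone is also common in benign software, which is precisely what makes AI-generated filler useful as camouflage.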
Another major concern highlighted in the report involves AI-enabled malware capable of autonomous decision-making. Google analyzed an Android backdoor named PROMPTSPY that reportedly uses Google’s Gemini API to interpret smartphone interfaces, generate commands, and simulate user actions without direct human control.
According to GTIG, the malware could analyze on-screen elements, perform taps and swipes, and even replay authentication gestures such as PIN patterns to maintain access to infected devices.
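The behavior described amounts to a perceive-decide-act loop: capture the screen, ask a model for the next action, inject it, and repeat. The sketch below shows only the shape of that loop; every function is a stub with hypothetical names, not PROMPTSPY's code or a real API:

```python
# Conceptual perceive-decide-act loop of an agent-style backdoor as GTIG
# describes it. capture_screen, send_to_model, and perform are all
# hypothetical stubs for illustration only.

def capture_screen() -> str:
    # Stub: a real implementation would read the device's UI tree or
    # take a screenshot.
    return "login_screen: [button:Unlock] [field:PIN]"

def send_to_model(screen: str) -> dict:
    # Stub: a real agent would send the screen state to a multimodal
    # model and parse a structured action from the response.
    if "button:Unlock" in screen:
        return {"action": "tap", "target": "Unlock"}
    return {"action": "wait"}

def perform(action: dict) -> str:
    # Stub: a real agent would inject the tap/swipe into the UI.
    return f"performed {action['action']} on {action.get('target', '-')}"

screen = capture_screen()
action = send_to_model(screen)
print(perform(action))  # -> performed tap on Unlock
```

What makes this notable is the middle step: decision-making is delegated to a remote model at runtime, so the malware's observable behavior is not fixed in its binary.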
Google said attackers are also increasingly building systems that provide large-scale, anonymous access to premium AI models using automated account creation tools, proxy services, and API aggregation platforms.
The company emphasized that AI is becoming both a weapon and a target. Threat actors have started attacking AI software supply chains by compromising AI-related code packages, integrations, and developer tools.
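A basic defense against tampered packages and integrations is pinning artifacts to known digests and rejecting anything that does not match. A minimal sketch using Python's standard `hashlib` (the artifact contents and pinned value here are placeholders):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_hash: str) -> bool:
    """Reject an artifact whose digest does not match the pinned value."""
    return sha256_of(data) == pinned_hash

artifact = b"example package contents"           # stand-in for a downloaded package
pinned = sha256_of(b"example package contents")  # in practice, taken from a lockfile
print(verify_artifact(artifact, pinned))         # -> True
print(verify_artifact(b"tampered", pinned))      # -> False
```

Package managers already support this pattern (for example, hash-pinned lockfiles); the point is that AI-related dependencies now need the same scrutiny as any other supply-chain input.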