The collaborative effort of several researchers from Cornell Tech, the Israel Institute of Technology, and Intuit has discovered an AI worm called the Morris II that could take advantage of AI-enabled email that would enable its operators to steal sensitive data and propagate efficiently.
The newly discovered malicious tool is one of the latest breeds of cyber threats, named after the infamous Morris worm that wreaked havoc on the internet in 1988. Unlike its predecessors, this worm leverages the power of AI to initiate its malicious agenda.
Moreover, it utilises an adversarial self-replicating prompt to infiltrate and compromise systems, explicitly targeting GenAI apps and email assistants powered by AI models like Gemini Pro, ChatGPT 4.0, and LLaVA.
Researchers have demonstrated two methods on how to utilise Morris II.
The researchers behind Morris II conducted thorough testing to demonstrate its capabilities, utilising two primary methods to propagate it and steal information. Through meticulously crafted email prompts embedded with malicious code, Morris II exploits bugs within AI models, forcing them into generating harmful content.
In addition, the worm is adept at bypassing security protocols and collecting confidential information, including sensitive financial details and personal identifiers, whether through text prompts or encoded within images.
The consequences of the worm’s existence are extreme since these threats are no longer theoretical speculation, showing that the danger posed by AI-enabled malware is apparent. On the other hand, the researchers have warned concerned parties about this latest discovery, urging developers and industry leaders to take proactive measures to counteract such threats.
Furthermore, the research team promptly notified tech giants Google and OpenAI, urging discussions on strengthening security measures to mitigate the risk posed by similar exploits. However, Google remained silent on the matter, unlike OpenAI, acknowledging the vulnerability.
Hence, this widely used Generative AI app pledges to fortify its systems against such attacks and advocate the adoption of robust input validation protocols.
The emergence of Morris II underscores the urgent need for vigilance due to the fast growth of cyber threats. With AI technology becoming increasingly integrated into everyday devices, workforce and services, the potential for exploitation grows significantly.
Therefore, industry stakeholders must prioritise cybersecurity measures, ensuring that GenAI-based products undergo intensive defence fortification to address these malicious campaigns.
Lastly, by dealing with these vulnerabilities early and implementing proper security protocols, organisations can fortify their digital infrastructure against the possible threats posed by AI-driven services.