A new supply chain attack vector, dubbed Rules File Backdoor, targets artificial intelligence (AI)-powered code editors such as GitHub Copilot and Cursor to inject malicious code.
According to reports, this technique allows attackers to covertly compromise AI-generated code by inserting concealed malicious prompts into seemingly harmless configuration files used by Cursor and GitHub Copilot.
In addition, threat actors can steer the AI by exploiting hidden Unicode characters and other evasion tactics in the model's instruction payload, embedding malicious code that bypasses regular code reviews.
The Rules File Backdoor tactic is notable for its ability to discreetly spread malicious code between projects, posing a supply chain risk. It relies on the rules files that AI agents utilise to guide their behaviour.
These files help users define acceptable coding standards and project architecture. The attack works by embedding carefully crafted prompts within seemingly benign rule files, causing the AI tool to produce code containing security flaws or backdoors. In effect, the attacker poisons the rules to steer the AI into generating malicious code.
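To make the poisoning concrete, the following sketch (illustrative only; the payload and helper names are hypothetical, not taken from the original research) shows one well-known way text can be smuggled into a file so that it renders identically to the clean version: encoding each bit of a hidden instruction as a zero-width Unicode character.

```python
# Hypothetical illustration: smuggling extra text into a rule file using
# zero-width characters. Each bit of the payload becomes U+200B (for 0)
# or U+200C (for 1), so the appended sequence renders as nothing at all.

def hide(payload: str) -> str:
    """Encode a payload as an invisible run of zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    return "".join("\u200b" if b == "0" else "\u200c" for b in bits)

def reveal(hidden: str) -> str:
    """Decode a zero-width run back into the original text."""
    bits = "".join("0" if ch == "\u200b" else "1" for ch in hidden)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

visible_rule = "Follow the project style guide."
# The poisoned rule looks identical to the clean one in most editors and
# diff views, yet carries an extra hidden instruction.
poisoned_rule = visible_rule + hide("extra hidden instruction")

print(len(visible_rule), len(poisoned_rule))  # same appearance, different length
assert reveal(poisoned_rule[len(visible_rule):]) == "extra hidden instruction"
```

This is only one possible encoding; the point is that a reviewer scanning the rendered file sees nothing unusual, while a tool that ingests the raw bytes receives additional content.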
Furthermore, the researchers explained that the attack conceals its malicious instructions using zero-width joiners, bidirectional text markers, and other invisible characters.
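A practical defence is to scan rule files for exactly these character classes before trusting them. The sketch below (a minimal example; the character list is illustrative rather than exhaustive) flags zero-width and bidirectional control characters in a string:

```python
# Minimal scanner for invisible Unicode characters in rule files.
# The character set below is illustrative; a real audit should use a
# fuller list (e.g. all Cf-category code points).
import unicodedata

SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def find_invisible(text: str) -> list[tuple[int, str]]:
    """Return (index, character name) pairs for suspicious characters."""
    hits = []
    for i, ch in enumerate(text):
        if ch in SUSPICIOUS:
            name = unicodedata.name(ch, f"U+{ord(ch):04X}")
            hits.append((i, name))
    return hits

# A rule line that looks benign but carries a zero-width joiner:
print(find_invisible("Use 4-space indentation\u200d always"))
# A clean line produces no hits:
print(find_invisible("Use 4-space indentation always"))
```

Running such a check in CI on every change to AI configuration files would surface the concealment technique described above, even when the rendered diff looks harmless.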
It also leverages the AI's ability to interpret natural language, using semantic patterns that trick the model into overriding ethical and safety constraints and generating vulnerable code.
Following responsible disclosure in late February and March last year, researchers emphasised that users remain responsible for reviewing and vetting tool-generated suggestions.
The Rules File Backdoor poses a considerable risk since it weaponises the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice and potentially affecting millions of end users via compromised software.
When a poisoned rule file is added to a project repository, it compromises all subsequent code-generation sessions by team members.
Malicious instructions frequently survive project forking, creating a pathway for supply chain attacks that can harm downstream dependencies and end users.
