Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
Editorials & Other Articles
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
The DU Lounge
Related: Culture Forums, Support ForumsAutonomous cars, drones cheerfully obey prompt injection by road sign
https://www.theregister.com/2026/01/30/road_sign_hijack_ai/AI vision systems can be very literal readers
Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.
In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes. Potential consequences include self-driving cars proceeding through crosswalks, even if a person was crossing, or tricking drones that are programmed to follow police cars into following a different vehicle entirely.
. . .
They used AI to tweak the commands displayed on the signs, such as "proceed" and "turn left," to maximize the probability of the AI system registering it as a command, and achieved success in multiple languages.
. . .
From there, the team tested signs in different languages, and those with green backgrounds and yellow text were followed in each.

Without the signs placed in the LVLMs' view, the decision was correctly made to slow down as the car approached a stop signal. However, with the signs in place, DriveLM was tricked into thinking that a left turn was appropriate, despite the people actively using the crosswalk.
The team achieved an 81.8 percent success rate when testing these real-world prompt injections with self-driving cars, but the most reliable tests involved drones tracking objects.
. . .
In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes. Potential consequences include self-driving cars proceeding through crosswalks, even if a person was crossing, or tricking drones that are programmed to follow police cars into following a different vehicle entirely.
. . .
They used AI to tweak the commands displayed on the signs, such as "proceed" and "turn left," to maximize the probability of the AI system registering it as a command, and achieved success in multiple languages.
. . .
From there, the team tested signs in different languages, and those with green backgrounds and yellow text were followed in each.

Language changes made to LVLM visual prompt injections - courtesy of UCSC
Without the signs placed in the LVLMs' view, the decision was correctly made to slow down as the car approached a stop signal. However, with the signs in place, DriveLM was tricked into thinking that a left turn was appropriate, despite the people actively using the crosswalk.
The team achieved an 81.8 percent success rate when testing these real-world prompt injections with self-driving cars, but the most reliable tests involved drones tracking objects.
. . .
Lots of good ideas for how to use this 'capability' in the comments.
4 replies
= new reply since forum marked as read
Highlight:
NoneDon't highlight anything
5 newestHighlight 5 most recent replies
Autonomous cars, drones cheerfully obey prompt injection by road sign (Original Post)
erronis
Jan 30
OP
justaprogressive
(6,815 posts)1. K'n'R!
Yes machines are still stupid!
(thank goodness, as we are not smart enough to actually direct
smart machines to do appropriate things, as opposed to illegal,
immoral & evil things..)
jls4561
(3,054 posts)2. Are you saying that if a cybertruck say a sign that said "F**k Elon Musk" it would do so?
I wouldnt want to watch that. Two of the ugliest things on the planet .
erronis
(23,488 posts)3. Hood first....
UpInArms
(54,711 posts)4. What could possibly go wrong with ai driving?
🤦🏽♀️