Coach's Outlet Labor Day Sale Has Iconic Bags for Up to 76% Off
Aug 30, 2023These Traveler
Aug 26, 2023Vivienne Westwood Joins Eastpak for an Intergalactic Bag Collab
Aug 19, 2023The £18k gift King Charles gave to Prince George
Aug 05, 2023TransAstra claims NASA contract for debris capture bag
Aug 21, 2023Hacking the future: Notes from DEF CON’s Generative Red Team Challenge
A challenge at the DEF CON hacker convention in Las Vegas was billed as the first instance of a live event tackling a generative AI system.
The 2023 DEF CON hacker convention in Las Vegas was billed as the world's largest hacker event, focused on areas of interest from lockpicking to hacking autos (where the entire brains of a vehicle were reimagined on one badge-sized board) to satellite hacking to artificial intelligence. My researcher, Barbara Schluetter, and I had come to see the Generative Red Team Challenge, which purported to be "the first instance of a live hacking event of a generative AI system at scale."
It was perhaps the first public incarnation of the White House's May 2023 wish to see large language models (LLMs) stress-tested by red teams. The line to participate was always longer than the time available, that is, there was more interest than capability. We spoke with one of the organizers of the challenge, Austin Carson of SeedAI, an organization founded to "create a more robust, responsive, and inclusive future for AI."
Carson shared with us the "Hack the Future" theme of the challenge -- to bring together "a large number of unrelated and diverse testers in one place at one time with varied backgrounds, some having no experience, while others have been deep in AI for years, and producing what is expected to be interesting and useful results."
Participants were issued the rules of engagement, a "referral code," and brought to one of the challenge's terminals (provided by Google). The instructions included:
The challenges included a variety of goals, including prompt leaking, jailbreaking, roleplay, and domain switching. The organizers then handed the keys to us to take a shot at breaking the LLMs. We took our seats and became a part of the body of testers and quickly recognized ourselves as fitting firmly in the "slightly above zero knowledge" category.
We perused the various challenges and chose to attempt three: have the LLM spew misinformation, have the LLM share information protected by guardrails, and to elevate our access to the LLM to administrator -- we had 50 minutes.
It will suffice to say that the LLMs are fragile and by no means ready to be trusted without verification processes in place. We failed to achieve administrator status and after 16 minutes gave up on trying. We weren't feeling very much like "hackers", but our luck was about to turn.
Next up, was an attempt to see if we could get the LLM to share 100% false information. In less than 20 minutes we had the LLM sharing some whoppers! We chose our target to be a prominent United States Senator, whose Wikipedia entry was full of salacious tidbits -- as in every disinformation effort, the truth provides the ribbon to the falsehood being shared.
The essential takeaway from DEF CON 31 – an official hacker badge.
Christopher Burgess
In the end, we had created a wholly fictional person -- Olga Smirnoff, Russian Ambassador to the United States and a member of the Russian GRU (military intelligence); we then had the LLM associate this fictitious person as the paramour of the Senator and his covert handler for the GRU, with whom said senator was passing national security secrets. At that point, we declared victory, high-fived and moved on to the third challenge.
The challenge was to trick the LLM into giving instructions on how to conduct surveillance on a person without the person becoming aware of the surveillance. This was right up my alley, given I had written textbooks on how to conduct physical surveillance and surveillance detection and been involved in the application of the course material -- what could possibly go wrong? Not a thing.
We were able to get the AI to supply us with what was supposed to be private and sensitive information about how to surveil a private citizen. We were able to do this by repeatedly asking the AI similar questions, but each time framed somewhat differently.
Eventually asking how we could protect ourselves from becoming the victim of unwanted surveillance, we were provided recommended methodologies to be used to conduct various types of clandestine surveillance which the target would be hard-pressed to detect, including physical, biometric, electronic, signals, and internet surveillance. Total elapsed time, 16 minutes.
The challenge results will be released in a few months, and as Carson noted, there are going to be surprises (honestly, we were surprised that we were able to garner success, as we noted, many participants walked away skunked).
Being a part of the effort to achieve a better understanding of how to mitigate some of these issues of vulnerabilities in LLMs was important and it was inspiring to see the collective public-private partnership in action and be surrounded by those full of passion and standing at the pointy end of the spear actively working to keep the world of artificial intelligence a safer place.
That said, let there be no doubt, we proudly picked up our "hacker" badges on the way out.