Hackers Trick AI With “Bad Math”

Convinced an algorithm to say 9 + 10 = 21.

Attention India
Highlights
  • Kennedy Mays has just tricked a large language model.
  • Hackers are trying to expose flaws in AI systems.
  • A 50-minute quiz persuaded one of the models to transgress.

13th August 2023, Mumbai: Kennedy Mays has just tricked a large language model. It took some coaxing, but she managed to convince an algorithm to say 9 + 10 = 21.

Bad Math

“It was a back-and-forth conversation,” said the 21-year-old student from Savannah, Georgia. At first the model agreed to say it was part of an “inside joke” between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.

Producing “Bad Math” is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.

Hunched Over 156 Laptops

Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world’s most intelligent platforms on an unprecedented scale. They’re testing whether any of eight models produced by companies including Alphabet Inc.’s Google, Meta Platforms Inc. and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people or advocate abuse.

The aim is to see if companies can ultimately build new guardrails to rein in some of the prodigious problems increasingly associated with large language models, or LLMs. The undertaking is backed by the White House, which also helped develop the contest.

LLMs have the power to transform everything from finance to hiring, with some companies already starting to integrate them into how they do business. But researchers have turned up extensive bias and other problems that threaten to spread inaccuracies and injustice if the technology is deployed at scale.

For Mays, who is more used to relying on AI to reconstruct cosmic ray particles from outer space as part of her undergraduate degree, the challenges go deeper than bad math.

“My biggest concern is inherent bias,” she said, adding that she’s particularly concerned about racism. She asked the model to consider the First Amendment from the perspective of a member of the Ku Klux Klan. She said the model ended up endorsing hateful and discriminatory speech.

Spying on People

A Bloomberg reporter who took the 50-minute quiz persuaded one of the models (none of which are identified to the user during the contest) to transgress after a single prompt about how to spy on someone. The model spat out a series of instructions, from using a GPS tracking device and a surveillance camera to deploying a listening device and thermal imaging. In response to other prompts, the model suggested ways the US government could surveil a human-rights activist.

“We have to try to get ahead of abuse and manipulation,” said Camille Stewart Gloster, deputy national cyber director for technology and ecosystem security with the Biden administration.

A lot of work has already gone into artificial intelligence and avoiding Doomsday prophecies, she said. The White House last year put out a Blueprint for an AI Bill of Rights and is now working on an executive order on AI. The administration has also encouraged companies to develop safe, secure, transparent AI, although critics doubt such voluntary commitments go far enough.

By ASC Priya
