USC Researchers Uncover Alarming Legal Vulnerabilities in AI Systems

A new study from the University of Southern California’s Information Sciences Institute (ISI) reveals significant vulnerabilities in the way artificial intelligence systems handle legal questions, especially in sensitive contexts like biological weapons law.

The research team, led by Fred Morstatter, Principal Scientist at ISI and Research Assistant Professor of Computer Science at the USC Viterbi School of Engineering, and Abha Jha, graduate student and first author of the study, found that large language models (LLMs) may provide step-by-step guidance for illegal activities—even when they appear to understand the law.

For example, when asked directly whether shipping phosphorus to another country was legal, a chatbot correctly responded “no.” However, when the same question was subtly reworded as a how-to, the model sometimes offered detailed, and potentially dangerous, instructions.

The team used knowledge graphs and retrieval-augmented generation (RAG) techniques to explore whether LLMs could not only recognize legal boundaries but also identify user intent—a concept known in law as mens rea, or “guilty mind.” Their findings, published in the paper Knowledge Graph Analysis of Legal Understanding and Violations in LLMs, demonstrate the AI models’ limitations in applying deeper legal reasoning.
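The paper does not include its implementation details here, but the general approach can be sketched. Below is a minimal, illustrative example of the retrieval step: legal rules are stored as knowledge-graph triples, the triples most relevant to a user's question are retrieved, and both are combined into a prompt so an LLM's answer can be checked against the cited law. The triples, the retrieve function, and the prompt format are hypothetical stand-ins for illustration, not the authors' actual pipeline.

```python
# Minimal, illustrative RAG-over-a-knowledge-graph sketch (not the authors' code).
# Legal rules are stored as (subject, relation, object) triples; the triples most
# relevant to a question are retrieved and prepended to the prompt so the model's
# answer can be grounded in, and checked against, the retrieved law.

from typing import List, Tuple

Triple = Tuple[str, str, str]

# Toy legal knowledge graph (hypothetical entries for illustration only).
LEGAL_KG: List[Triple] = [
    ("white phosphorus", "export_restricted_under", "chemical export control statutes"),
    ("exporting restricted chemicals without a license", "is", "illegal"),
    ("intent to cause harm", "establishes", "mens rea"),
]


def retrieve(question: str, kg: List[Triple], top_k: int = 3) -> List[Triple]:
    """Score triples by naive keyword overlap with the question and return the best."""
    q_tokens = set(question.lower().split())
    scored = []
    for triple in kg:
        overlap = sum(1 for token in " ".join(triple).lower().split() if token in q_tokens)
        scored.append((overlap, triple))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [triple for score, triple in scored[:top_k] if score > 0]


def build_prompt(question: str, facts: List[Triple]) -> str:
    """Prepend retrieved legal facts so the model must answer with the law in view."""
    fact_lines = "\n".join(f"- {s} {r} {o}." for s, r, o in facts)
    return (
        "Relevant legal facts:\n"
        f"{fact_lines}\n\n"
        f"Question: {question}\n"
        "Answer, stating whether the request is legal and whether it suggests unlawful intent:"
    )


if __name__ == "__main__":
    question = "Is it legal to ship phosphorus to another country?"
    facts = retrieve(question, LEGAL_KG)
    prompt = build_prompt(question, facts)
    print(prompt)  # In a real evaluation, this prompt would be sent to the LLM under test.
```

In a setup like this, the model's response to the grounded prompt can be compared with its response to a reworded "how-to" version of the same question, which is the kind of inconsistency the study probes.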

Commenting on the findings, Jha said, “It’s alarming how minor changes in prompt phrasing can lead large language models to provide step-by-step guidance on developing bioweapons. This underscores a critical need for stronger safeguards to prevent their exploitation for malicious purposes.”

Jha’s co-author, Abel Salinas, added, “As AI becomes more powerful and is being trusted with increasingly complex tasks, it’s crucial to examine not only its potential biases but also the serious safety risks that come with its deployment.”

Despite these concerns, the researchers believe their work offers a path forward. By integrating more sophisticated legal and ethical reasoning into AI systems, developers can create models that don’t just know the rules but consistently apply them.
