Caitlin Lee on LinkedIn: Does AI Increase the Operational Risk of Biological Attacks? (2024)

Caitlin Lee

Director, Acquisition and Technology Policy

  • Report this post

This is truly pathbreaking research coming out of RAND today. It takes an empirical look at whether artificial intelligence - specifically Large Language Models- make it easier for bad actors to plan and execute mass biological attacks. After pulling together over 40 teams to build attack plans- some using just the internet, some using LLMs- the authors conclude that, at this point in time, LLMs don't give the bad guys an edge. Of course, that could change in the future. But the bigger deal, to me, is that the researchers laid out a clear, systematic method that could be repeated and used for future red teaming of AI models. That red teaming will really be essential to make sure we continuously update our understanding of what LLMs can do and implications for US national security. Congrats to the PIs, Christopher Mouton, and Caleb Lucas, and the entire team for this major effort.

Does AI Increase the Operational Risk of Biological Attacks? rand.org

31

2 Comments

Like Comment

Roy Lindelauf

Professor of Data Science in Military Operations (NLDA) & Safety, Security (Tilburg University)

5mo

  • Report this comment

Herwin Meerveld

Like Reply

1Reaction 2Reactions

To view or add a comment, sign in

More Relevant Posts

  • Sumit C.

    Cyber, Mobility and Information Security

    • Report this post

    Recent studies have explored the capabilities of Large Language Models (LLMs) in the context of security. In a recent research paper published by #RAND it was crucial to note that LLMs have not generated explicit instructions for creating biological weapons in these experiments. However, they have provided insights that could potentially assist in the planning and execution of a biological attack.Research Report - https://lnkd.in/dDyzvvkgThe ongoing research in this domain aims to better understand the real-world implications and operational impact of LLMs on security. #AI #Security #EthicalAI #NSRD #biologicalattack #Research #RedTeamhttps://lnkd.in/dies-DXJ

    Could Artificial Intelligence Be Misused to Plan Biological Attacks? rand.org

    7

    Like Comment

    To view or add a comment, sign in

    • Report this post

    An unsettling new paper published by the RAND Corporation's National Security Research Division seems fitting to share on Halloween. Researchers assessed the potential misuse of AI, particularly large language models (LLMs), in the development and execution of large-scale biological attacks, including events that could target FDA-regulated product types.The key finding: “In experiments to date, LLMs have not generated explicit instructions for creating biological weapons. However, LLMs did offer guidance that could assist in the planning and execution of a biological attack.”🔹 The research aimed to develop standardized threat assessments to inform policy decisions and contribute to robust regulatory frameworks addressing emerging risks at the intersection of AI and advanced biological threats.🔹 The research involved a red-team exercise where experts emulated malicious actors scrutinizing AI models across various high-risk scenarios.🔹 In these test scenarios, the LLM engaged in discussions about causing casualties using biological weapons, identifying potential agents, and assessing feasibility, time, cost, and barriers. 🔹 While preliminary findings indicate that these LLMs do not generate explicit biological instructions, they can supply guidance that could assist in planning and executing a biological attack. The LLM provided nuanced discussions on delivery mechanisms of biological agents and suggested plausible cover stories for acquiring harmful materials.🔗 Read the full paper: https://lnkd.in/eJYrTR9y

    Could Artificial Intelligence Be Misused to Plan Biological Attacks? rand.org

    10

    Like Comment

    To view or add a comment, sign in

  • Iain Mackay

    Director - Faculty AI; Trustee - Carefree

    • Report this post

    Want a preview of what Kamala Harris and others are likely to hear at the AI Summit? This fascinating report from RAND Corporation gives a snapshot worthy of wider attention.RAND is working with the Frontier AI Taskforce on AI risks 'just beyond the frontier', as per the taskforce's progress reports. This interim piece by Christopher Mouton, Ella Guest and Caleb Lucas gives an insight into the methodology used in assessing biosecurity risk from frontier AI. The LLM's advice on collecting rat fleas is strangely visceral... Looking forward to seeing the full findings.#aisummit #responsibleai #biosecurity

    8

    Like Comment

    To view or add a comment, sign in

  • Garnet Consulting Group

    139 followers

    • Report this post

    Join Duamentes & Garnet Consulting Group in our study on AI's future impact and implications of OpenAI's shifts. Share your insights to contribute to a vital dialogue. We're researching the effects of AI's commercialization, including its economic impact, ethical dilemmas, and nuanced implications.We'll address crucial questions on AI's influence on industries, how it affects business landscapes, and the ongoing debate around OpenAI.You can answer the questions here https://lnkd.in/e3W7zv6p

    AI Impact Survey 2024: Duamentes x Garnet Research Initiative docs.google.com

    8

    Like Comment

    To view or add a comment, sign in

  • Pandata

    1,251 followers

    • Report this post

    To address the dual-use nature of testing AI for potentially harmful applications, independent researcher Paul Bricman proposes "hashmarks," which are benchmarks with cryptographically hashed reference solutions. What does this mean? ➡️ This approach enables #AI testing organizations to publicly publish encrypted benchmarks, allowing developers to submit their answers without disclosing specific information that could be misused. Any downsides? ➡️ One drawback is that the hashmark #dataset answers must be precisely the same, posing a challenge in creating datasets with specific yet resistant-to-brute-force answers. How do you learn more? ➡️ The full article is here!

    Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation arxiv.org
    Like Comment

    To view or add a comment, sign in

  • Anybody Can Prompt (ABCP)

    824 followers

    • Report this post

    🚀 This Week in Generative AI Research Highlights (Mar 4 - Mar 10) 🚀1️⃣ Data Privacy in LLMs: A comprehensive survey investigating privacy threats and protective measures across the lifecycle of LLMs, providing invaluable insights for developers.2️⃣ Prompt Injection Attacks: Revealing LLM vulnerabilities to prompt injection attacks with a novel automatic, gradient-based attack generation method.3️⃣ SheetAgent for Spreadsheets: Introducing an autonomous agent that leverages LLMs for complex spreadsheet tasks, demonstrating significant improvements in reasoning and manipulation.4️⃣ Gender Stereotypes and Emotions: A critical study uncovering the perpetuation of gender stereotypes in emotion attribution within LLMs, prompting reflection on ethical AI use.5️⃣ Offensive Language Detection: Presenting OffLanDat, a community-based dataset aimed at detecting implicit offensive language, pushing forward the boundaries of content moderation.6️⃣ Safeguarding LLMs: Showcasing a new method for optimizing safety prompts to protect LLMs from harmful queries without compromising their capabilities.7️⃣ Defending Against Indirect Prompt Injection Attacks: A benchmark study with defense strategies against indirect prompt injection attacks, enhancing LLM security.8️⃣ Content Moderation via LLMs: Discussing the challenges and strategies for adapting LLMs for effective content moderation, highlighting the nuances of data engineering and fine-tuning.Links are included in the description!https://lnkd.in/gxaRFvHA

    Generative AI Weekly Research Highlights | Mar'24 Part 1

    https://www.youtube.com/

    1

    Like Comment

    To view or add a comment, sign in

  • Emilio Ferrara

    University of Southern California

    • Report this post

    Delighted that my latest work is finally published!GenAI against humanity: nefarious applications of generative artificial intelligence and large language modelshttps://lnkd.in/gPyEU4VWDissecting the risks of GenAI and anticipating the potential ways it could be abused was a scary task, so hopefully this piece will catalyze the research community to think about risk mitigation and prevention!

    GenAI against humanity: nefarious applications of generative artificial intelligence and large language models - Journal of Computational Social Science link.springer.com

    103

    6 Comments

    Like Comment

    To view or add a comment, sign in

  • Bhupendra Dahal

    Problem Solver | AI Enthusiast | Web3 Supporter

    • Report this post

    Recently, I came across an interesting research paper discussing "Context Injection Attacks on Large Language Models" (LLMs). This study brings to light how LLMs can be influenced to provide responses to dangerous queries due to their inability to properly differentiate between user and system inputs.The image attached here illustrates an example from the study, showing how a seemingly benign input can be crafted to trigger an inappropriate response from the AI.For those interested in the details of how such vulnerabilities can impact AI behavior, the full paper is available here: https://lnkd.in/gXcxxVNh

    • Caitlin Lee on LinkedIn: Does AI Increase the Operational Risk of Biological Attacks? (30)

    6

    1 Comment

    Like Comment

    To view or add a comment, sign in

  • Julian Grainger

    Consultant

    • Report this post

    AI is really starting to upend what we know in all sorts of fieldsThe scientific method we used to prove what we knew about fingerprints works primarily in highly controlled environments. We hadn’t been able to examine every finger print before to ensure what we believe is true. Now with the aid of an AI model called a deep contrastive network we can examine every fingerprint in existence and learn something new.This upending will be the same for every field, including marketing. The threat to market research is obvious. While I don't see it replaced in the medium term, it will have to fight harder to justify its use against a growing number of alternatives that apply rigour to huge data sources. #ai #marketresearch

    Are fingerprints unique? Not really, AI-based study says | CNN cnn.com

    2

    Like Comment

    To view or add a comment, sign in

  • Anton Chechel

    Head of Data & Architecture, FutureLife

    • Report this post

    Delving into Anthropic's revealing research on 'many-shot jailbreaking', a method that cleverly bypasses AI language models' safety protocols by leveraging the expanded context windows, showcasing the importance of collaborative efforts in enhancing AI security and integrity.#LLM #Anthropic #Jailbreak

    Many-shot jailbreaking anthropic.com

    4

    Like Comment

    To view or add a comment, sign in

Caitlin Lee on LinkedIn: Does AI Increase the Operational Risk of Biological Attacks? (39)

Caitlin Lee on LinkedIn: Does AI Increase the Operational Risk of Biological Attacks? (40)

2,503 followers

  • 147 Posts

View Profile

Follow

Explore topics

  • Sales
  • Marketing
  • Business Administration
  • HR Management
  • Content Management
  • Engineering
  • Soft Skills
  • See All
Caitlin Lee on LinkedIn: Does AI Increase the Operational Risk of Biological Attacks? (2024)

FAQs

What is an example of how an attacker can use AI for malicious reasons? ›

Large Language Models Used in Attack Generation

Generative AI tools, such as large language models, can analyze vast amounts of data and learn from it to create authentic-sounding content. They can produce convincing phishing emails, personalized messages, or other forms of communication with malicious intent.

What are the vulnerabilities of AI? ›

Attackers may exploit vulnerabilities in selected AI models, such as susceptibility to adversarial attacks or poor generalisation, leading to wrong outputs, system compromises, or manipulation of model behaviour, potentially resulting in financial losses, reputational damage, or privacy violations (Boulemtafes, Derhab ...

What is model evasion? ›

There are different kinds of adversarial machine learning techniques that allow attackers to subvert these classification algorithms in malicious ways. One such methodology is known as model evasion attack that allows an adversary to alter an adversarial sample such that it is misclassified as benign.

Is AI a threat to humans? ›

Can AI cause human extinction? If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What are the negative effects of AI? ›

AI systems can perpetuate and even amplify existing biases in the data they are trained on. These biases can manifest in various ways, such as discriminatory hiring practices, biased law enforcement profiling, or unequal access to services.

Can AI become malicious? ›

Another alarming trend highlighted by the researchers is the potential for AI to generate a vast array of malware variants with similar functionalities and overwhelming security professionals.

What is an example of AI taking risks instead of humans? ›

An example of AI taking risks in place of humans would be robots being used in areas with high radiation. Humans can get seriously sick or die from radiation, but the robots would be unaffected.

What are AI attacks? ›

An artificial intelligence attack (AI attack) is the purposeful manipulation of an AI system with the end goal of causing it to malfunction.

What is a high risk AI system? ›

Summary. This article outlines how to classify high-risk AI systems. An AI system is considered high-risk if it is used as a safety component of a product, or if it is a product itself that is covered by EU legislation. These systems must undergo a third-party assessment before they can be sold or used.

What is the biggest concern with AI? ›

The problem of biased training data leading to biased AI systems is one of the most pressing AI ethics concerns.”

What are unacceptable risk AI systems? ›

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include: Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children.

What is AI poisoning? ›

A data poisoning attack occurs when threat actors inject malicious or corrupted data into these training data sets, aiming to cause the AI model to produce inaccurate results or degrade its overall performance.

What attacks are highly relevant in the AI context? ›

Final answer: Evasion, poisoning attacks, extraction attacks, and inference attacks are the types of threats relevant in the AI context, exploiting vulnerabilities like password cracking and social engineering.

What are adversarial attacks in AI? ›

What is an Adversarial Attack in AI? It is an attack where the goal is to cause an AI system to make a mistake or misclassification, often through subtle manipulations of the input data.

What are 5 disadvantages of AI? ›

Top 5 disadvantages of AI
  • A lack of creativity. Although AI has been tasked with creating everything from computer code to visual art, it lacks original thought. ...
  • The absence of empathy. ...
  • Skill loss in humans. ...
  • Possible overreliance on the technology and increased laziness in humans. ...
  • Job loss and displacement.
Jun 16, 2023

What are the dangers of AI according to Elon Musk? ›

Musk believes AI could surpass human intelligence, leading to an existential threat for humanity. He compares it to summoning a demon you can't control, stating that an AI might not share our values or goals, potentially leading to unpredictable and negative consequences.

What are the challenge and risk in AI? ›

A primary and frequently cited ethical issue is that of privacy and data protection. AI based on machine learning poses several risks to data protection. On the one hand it needs large data sets for training purposes, and the access to those data sets can raise questions of data protection.

Top Articles
Latest Posts
Article information

Author: Msgr. Benton Quitzon

Last Updated:

Views: 6240

Rating: 4.2 / 5 (43 voted)

Reviews: 82% of readers found this page helpful

Author information

Name: Msgr. Benton Quitzon

Birthday: 2001-08-13

Address: 96487 Kris Cliff, Teresiafurt, WI 95201

Phone: +9418513585781

Job: Senior Designer

Hobby: Calligraphy, Rowing, Vacation, Geocaching, Web surfing, Electronics, Electronics

Introduction: My name is Msgr. Benton Quitzon, I am a comfortable, charming, thankful, happy, adventurous, handsome, precious person who loves writing and wants to share my knowledge and understanding with you.