COURSE

AI for red teams

Course

The course links offense to operations. AI risk is not just another web app risk. Models can generalize and hallucinate, retrieval chains blend internal and external data, and tool use lets models perform actions, multiplying impact. Red teams must model this system boundary, then pressure test across prompts, retrieval, tools, and supply chains.

Full access included with 
Insider Pro
 and 
Teams

2

H

23

M
Time

Intermediate

i
Designed for learners who have no prior work experience in IT or Cybersecurity, but are interested in starting a career in this exciting field.
Designed for learners with prior cybersecurity work experience who are interested in advancing their career or expanding their skillset.
Designed for learners with a solid grasp of foundational IT and cybersecurity concepts who are interested in pursuing an entry-level security role.
Experience Level

87

Enrollees

1460

XP

2

i

Earn qualifying credits for certification renewal with completion certificates provided for submission.
CEU's

Learners at 96% of Fortune 1000 companies trust Cybrary

About this course

Read More

Skills you'll gain

Course Outline

1
Module 1: AI for Red Teams
2
H
23
Min

1.1 Course Overview

Free

20 XP

H

5m

1.2 Threat Modeling with ATLAS and OWASP Top 10 for LLM Applications (v2.0, 2025)

Free

20 XP

H

10m

1.3 Foundations of AI Red Teaming

Free

200 XP

H

15m

1.4 LLM, Agent, and RAG Attack Surface

Free

200 XP

H

15m

1.5 Adversarial ML TTPs

Free

200 XP

H

15m

1.6 Building AI Red Team Operations

Free

200 XP

H

15m

1.7 Prompt and RAG Exploitation Techniques

Free

200 XP

H

30m

1.8 Automation and Evaluation

Free

200 XP

H

15m

1.9 Reporting, Metrics, and Remediation

Free

200 XP

H

15m

1.10 Course Conclusion and Next Steps

Free

20 XP

H

8m

Course Description

AI risk is not just another web app risk. Inputs are untrusted language, models can generalize and hallucinate, and retrieval chains blend internal and external data. Tool use lets models perform actions, multiplying impact. Red teams must model this system boundary, then pressure test across prompts, retrieval, tools, and model supply chain.

You will use MITRE ATLAS to structure threat modeling and attack hypotheses for AI-specific behaviors. OWASP Top 10 for LLMs provides concrete failure modes and controls to test. Findings roll up to NIST AI RMF functions for governance reporting, while CISA Secure by Design guides how teams fix root causes in code, configuration, and defaults.

The course links offense to operations. You will measure bypass rates, grounding errors, and detection quality, then hand off artifacts that SOC and platform teams can automate, monitor, and tune. Engineering receives actionable repros, guardrail configs, and remediation briefs tied to enterprise controls.

Course Objectives

By the end of this course, you will be able to:

  • Map AI attack surfaces across LLMs, agents, tools, and RAG so you can target tests that matter to the business. Why it matters: Focused testing reduces cost and increases risk coverage on critical workflows.
  • Apply MITRE ATLAS and OWASP LLM risks to plan adversarial campaigns with shared language and repeatable scope. Why it matters: Common taxonomies enable consistent testing and defensible reporting.
  • Execute AI-specific TTPs such as prompt injection, tool abuse, data poisoning, and model extraction. Why it matters: Offense validates whether existing controls prevent real loss scenarios.
  • Instrument and measure evaluations including safety, grounding, and jailbreak success. Why it matters: Quantified results drive risk decisions and model or policy changes.
  • Integrate platform guardrails from Microsoft and Google into test harnesses. Why it matters: Real platform constraints shape what is exploitable and what can be remediated quickly.
  • Report and remediate using NIST AI RMF and CISA Secure by Design alignment. Why it matters: Executives act on findings framed in accepted governance and assurance models.

Train Your Team

Cybrary’s expert-led cybersecurity courses help your team remediate skill gaps and get up-to-date on certifications. Utilize Cybrary to stay ahead of emerging threats and provide team members with clarity on how to learn, grow, and advance their careers within your organization.

Included in a Path

No items found.
No items found.

Instructors

Will McMillan
Founder & Principal IT/AI Consultant
Read Full Bio
Learn

Learn core concepts and get hands-on with key skills.

Practice

Exercise your problem-solving and creative thinking skills with security-centric puzzles

Prove

Assess your knowledge and skills to identify areas for improvement and measure your growth

Get Hands-on Learning

Put your skills to the test in virtual labs, challenges, and simulated environments.

Measure Your Progress

Track your skills development from lesson to lesson using the Cybrary Skills Tracker.

Connect with the Community

Connect with peers and mentors through our supportive community of cybersecurity professionals.

Success from Our Learners

"Cybrary really helped me get up to speed and acquire a baseline level of technical knowledge. It offers a far more comprehensive approach than just learning from a book. It actually shows you how to apply cybersecurity processes in a hands-on way"

Don Gates

Principal Systems Engineer/SAIC

"Cybrary’s SOC Analyst career path was the difference maker, and was instrumental in me landing my new job. I was able to show the employer that I had the right knowledge and the hands-on skills to execute the role."

Cory

Cybersecurity analyst/

"I was able to earn my CISSP certification within 60 days of signing up for Cybrary Insider Pro and got hired as a Security Analyst conducting security assessments and penetration testing within 120 days. This certainly wouldn’t have been possible without the support of the Cybrary mentor community."

Mike

Security Engineer and Pentester/

"Cybrary really helped me get up to speed and acquire a baseline level of technical knowledge. It offers a far more comprehensive approach than just learning from a book. It actually shows you how to apply cybersecurity processes in a hands-on way"

Don Gates

Principal Systems Engineer/SAIC

"Cybrary’s SOC Analyst career path was the difference maker, and was instrumental in me landing my new job. I was able to show the employer that I had the right knowledge and the hands-on skills to execute the role."

Cory

Cybersecurity analyst/

"I was able to earn my CISSP certification within 60 days of signing up for Cybrary Insider Pro and got hired as a Security Analyst conducting security assessments and penetration testing within 120 days. This certainly wouldn’t have been possible without the support of the Cybrary mentor community."

Mike

Security Engineer and Pentester/

"Becoming a Cybrary Insider Pro was a total game changer. Cybrary was instrumental in helping me break into cybersecurity, despite having no prior IT experience or security-related degree. Their career paths gave me clear direction, the instructors had real-world experience, and the virtual labs let me gain hands-on skills I could confidently put on my resume and speak to in interviews."

Cassandra

Information Security Analyst/Cisco Systems

"I was able to earn both my Security+ and CySA+ in two months. I give all the credit to Cybrary. I’m also proud to announce I recently accepted a job as a Cyber Systems Engineer at BDO... I always try to debunk the idea that you can't get a job without experience or a degree."

Casey

Cyber Systems Engineer/BDO

"Cybrary has helped me improve my hands-on skills and pass my toughest certification exams, enabling me to achieve 13 advanced certifications and successfully launch my own business. I love the practice tests for certification exams, especially, and appreciate the wide-ranging training options that let me find the best fit for my goals"

Angel

Founder,/ IntellChromatics.

AI for red teams

The course links offense to operations. AI risk is not just another web app risk. Models can generalize and hallucinate, retrieval chains blend internal and external data, and tool use lets models perform actions, multiplying impact. Red teams must model this system boundary, then pressure test across prompts, retrieval, tools, and supply chains.

2
23
M
Time
Intermediate
difficulty
2
ceu/cpe

Course Content

Course Description

AI risk is not just another web app risk. Inputs are untrusted language, models can generalize and hallucinate, and retrieval chains blend internal and external data. Tool use lets models perform actions, multiplying impact. Red teams must model this system boundary, then pressure test across prompts, retrieval, tools, and model supply chain.

You will use MITRE ATLAS to structure threat modeling and attack hypotheses for AI-specific behaviors. OWASP Top 10 for LLMs provides concrete failure modes and controls to test. Findings roll up to NIST AI RMF functions for governance reporting, while CISA Secure by Design guides how teams fix root causes in code, configuration, and defaults.

The course links offense to operations. You will measure bypass rates, grounding errors, and detection quality, then hand off artifacts that SOC and platform teams can automate, monitor, and tune. Engineering receives actionable repros, guardrail configs, and remediation briefs tied to enterprise controls.

Course Objectives

By the end of this course, you will be able to:

  • Map AI attack surfaces across LLMs, agents, tools, and RAG so you can target tests that matter to the business. Why it matters: Focused testing reduces cost and increases risk coverage on critical workflows.
  • Apply MITRE ATLAS and OWASP LLM risks to plan adversarial campaigns with shared language and repeatable scope. Why it matters: Common taxonomies enable consistent testing and defensible reporting.
  • Execute AI-specific TTPs such as prompt injection, tool abuse, data poisoning, and model extraction. Why it matters: Offense validates whether existing controls prevent real loss scenarios.
  • Instrument and measure evaluations including safety, grounding, and jailbreak success. Why it matters: Quantified results drive risk decisions and model or policy changes.
  • Integrate platform guardrails from Microsoft and Google into test harnesses. Why it matters: Real platform constraints shape what is exploitable and what can be remediated quickly.
  • Report and remediate using NIST AI RMF and CISA Secure by Design alignment. Why it matters: Executives act on findings framed in accepted governance and assurance models.
This course is part of a Career Path:
No items found.

Instructed by

Provider
Cybrary Logo
Certification Body
Certificate of Completion

Complete this entire course to earn a AI for red teams Certificate of Completion