LLM application security Architecture and Guardrails
RCCE students will learn AI threat modeling, prompt injection defenses, model security, AI data protection, and responsible AI deployment. RCCE students will learn to secure AI systems throughout their lifecycle, protect training data and model integrity, detect adversarial attacks against machine learning systems, and establish governance frameworks for safe AI operations. This architecture course teaches secure system design using proven patterns, guardrails, and reference architectures. At an expert level, RCCE students will learn to evaluate design options against security requirements, make informed trade-off decisions, and build systems that are resilient by design. Students gain the architectural thinking skills needed for security engineering and solution design roles.
- Security Engineers building defensive controls
- Security Analysts and Blue Team members
- Systems Administrators with security responsibilities
- GRC and Risk Professionals supporting controls
- Professionals implementing LLM application security Architecture and Guardrails
- Design a scalable privilege management architecture with policy and enforcement
- Design a scalable privilege management architecture with policy and enforcement, including Guardrail Engineering.
- Explain LLM Application Architecture Overview fundamentals
- Execute hands-on tasks for input attack surface — covering Direct prompt injection.
- Execute hands-on tasks for output attack surface — covering Sensitive data leakage.
- Design a scalable privilege management architecture with policy and enforcement, including Training data poisoning.
- Execute hands-on tasks for infrastructure surface — covering API key compromise, Supply chain attacks, Side-channel inference.
- Execute hands-on tasks for llm threat landscape: attack taxonomy
- Execute hands-on tasks for prompt-level attacks
- Design a scalable privilege management architecture with policy and enforcement, including Data-Level Attacks.
| Module 01 | Architecture and Guardrails |
| Module 02 | AI Threat Modeling |
| Module 03 | Secure Architecture |
| Module 04 | LLM Application Architecture Overview |
| Module 05 | Input Attack Surface |
| Module 06 | Output Attack Surface |
| Module 07 | Model Attack Surface |
| Module 08 | Infrastructure Surface |
| Module 09 | AI Threat Modeling: STRIDE for LLMs |
| Module 10 | LLM Threat Landscape: Attack Taxonomy |
| Module 11 | Prompt-Level Attacks |
| Module 12 | Model-Level Attacks |
| Module 13 | Attack Pattern 1: Direct Prompt Injection |
| Module 14 | Attack Vector |
All hands-on labs run on Rocheston Rose X OS. Students practice llm application security architecture and guardrails by implementing the controls discussed in class, with a focus on real-world deployment, monitoring, and validation.
- Lab 1: Design a scalable privilege management architecture with policy and enforcement
- Lab 2: Design a scalable privilege management architecture with policy and enforcement
- Lab 3: Design a scalable privilege management architecture with policy and enforcement
- Lab 4: Explain LLM Application Architecture Overview fundamentals
- Lab 5: Execute hands-on tasks for input attack surface
Upon successful completion of this course, students will receive an official RCCE Course Completion Certificate for LLM application security Architecture and Guardrails, verifiable through the Rocheston certification portal.
- Full access to all course materials and slide decks
- Hands-on lab access on Rocheston Rose X OS environment
- Access to Rocheston CyberNotes
- Access to Rocheston Zelfire — EDR/XDR SIEM platform
- Access to Rocheston Raven — online cyber range exercise platform
- Access to Rocheston Vulnerability Vines AI