Practical LLM Application Security Workshop
In this workshop, RCCE students learn AI threat modeling, prompt injection defenses, model security, AI data protection, and responsible AI deployment. They learn to secure AI systems throughout their lifecycle, protect training data and model integrity, detect adversarial attacks against machine learning systems, and establish governance frameworks for safe AI operations. This practice-intensive course emphasizes applied skills through lab exercises, real-world scenarios, and production-realistic workflows. Building on core knowledge, students learn by doing, developing muscle memory and practical confidence through repeated hands-on engagement. Exercises mirror actual workplace tasks, so skills transfer directly to professional roles.
- Security Engineers building defensive controls
- Security Analysts and Blue Team members
- Systems Administrators with security responsibilities
- GRC and Risk Professionals supporting controls
- Professionals implementing LLM application security controls
- Complete hands-on exercises across the full workshop curriculum
- Design a scalable privilege management architecture with policy and enforcement, covering attack surfaces specific to LLM-powered applications
- Defend against prompt injection with layered defenses for direct and indirect injection
- Secure AI data pipelines, protecting training data, RAG corpora, and models
- Detect adversarial attacks, including evasion, poisoning, and model-level attacks
- Establish AI governance and build responsible deployment frameworks
- Respond to AI incidents by executing IR playbooks for LLM-specific breach scenarios
- Explain the fundamentals of LLM architecture
- Complete hands-on exercises on user input handling
- Identify trust boundaries, including the user input boundary (a key injection vector)
- Practice prompt injection attack and defense techniques hands-on
- Mitigate sensitive information disclosure through hands-on exercises
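The layered prompt-injection defense named in the objectives above can be sketched in a few lines. This is a minimal illustration, not the course's lab code: the pattern list, function names, and `<untrusted>` delimiter are all assumptions, and real deployments layer such heuristics with model-side and output-side controls.

```python
import re

# Illustrative deny-list of known injection phrases -- an assumption for this
# sketch; production systems use broader, continuously updated detection.
SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)

def wrap_untrusted(user_input: str) -> str:
    """Delimit untrusted text so the system prompt can instruct the model
    to treat everything inside the tags as data, not instructions."""
    return f"<untrusted>\n{user_input}\n</untrusted>"
```

The two functions represent two separate layers: heuristic screening before the model call, and structural delimiting inside the prompt itself.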
| Module | Title |
|--------|-------|
| Module 01 | Security Workshop |
| Module 02 | Threat Model LLM Applications |
| Module 03 | Defend Against Prompt Injection |
| Module 04 | Secure AI Data Pipelines |
| Module 05 | Detect Adversarial Attacks |
| Module 06 | Establish AI Governance |
| Module 07 | Respond to AI Incidents |
| Module 08 | LLM Architecture Overview |
| Module 09 | User Input |
| Module 10 | Trust Boundaries |
| Module 11 | Prompt Injection |
| Module 12 | Sensitive Info Disclosure |
| Module 13 | Insecure Output Handling |
| Module 14 | Insecure Plugin Design |
All hands-on labs run on Rocheston Rose X OS. Students practice LLM application security by implementing the controls discussed in class, with a focus on real-world deployment, monitoring, and validation.
- Lab 1: Hands-on security workshop exercises
- Lab 2: Design a scalable privilege management architecture with policy and enforcement
- Lab 3: Defend against prompt injection
- Lab 4: Secure AI data pipelines
- Lab 5: Detect adversarial attacks
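The data-pipeline protection practiced in Lab 4 rests on making tampering with training data or RAG corpora detectable. One common building block is content fingerprinting; the sketch below uses SHA-256 hashes, with function names and the document-dictionary shape chosen for illustration only.

```python
import hashlib

def fingerprint_corpus(documents: dict[str, str]) -> dict[str, str]:
    """Record a SHA-256 hash per document as an integrity baseline."""
    return {doc_id: hashlib.sha256(text.encode()).hexdigest()
            for doc_id, text in documents.items()}

def detect_tampering(documents: dict[str, str],
                     baseline: dict[str, str]) -> list[str]:
    """Return IDs of documents whose current hash differs from the baseline."""
    current = fingerprint_corpus(documents)
    return sorted(doc_id for doc_id, digest in current.items()
                  if baseline.get(doc_id) != digest)
```

Stored alongside the corpus (ideally in a separate, access-controlled location), the baseline lets a pipeline verify every document before it is embedded or served to the model.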
Upon successful completion of this course, students will receive an official RCCE Course Completion Certificate for the Practical LLM Application Security Workshop, verifiable through the Rocheston certification portal.
- Full access to all course materials and slide decks
- Hands-on lab access on Rocheston Rose X OS environment
- Access to Rocheston CyberNotes
- Access to Rocheston Zelfire — EDR/XDR SIEM platform
- Access to Rocheston Raven — online cyber range exercise platform
- Access to Rocheston Vulnerability Vines AI