🚀 Launch Special: $29/mo for life --d --h --m --s Claim Your Price →
GOAA
Coming Soon
Expected availability announced soon

This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.

Notify me
GOAA GIAC Certifications Coming Soon

GOAA

GIAC Offensive AI Analyst (GOAA) teaches security professionals to apply AI/ML fundamentals, adversarial machine learning, and LLM attack techniques for offensive operations, enabling realistic threat modeling and mitigation of AI‑driven attacks.

120
Minutes
56
Questions
67/100
Passing Score
$979
Exam Cost

Who Should Take This

Penetration testers, red team members, and security analysts with solid offensive experience should enroll to deepen their understanding of AI‑enabled attack vectors. They aim to integrate adversarial ML, synthetic media manipulation, and AI‑driven reconnaissance into existing threat‑hunting workflows, ensuring they can both simulate and defend against emerging AI‑based threats.

What's Covered

1 AI/ML Fundamentals for Offensive Security
2 Adversarial Machine Learning
3 LLM Attack Techniques
4 AI-Powered Reconnaissance and OSINT
5 Deepfakes and Synthetic Media for Social Engineering
6 AI-Enhanced Exploitation Tools

What's Included in AccelaStudy® AI

Adaptive Knowledge Graph
Practice Questions
Lesson Modules
Console Simulator Labs
Exam Tips & Strategy
20 Activity Formats

Course Outline

61 learning goals
1 AI/ML Fundamentals for Offensive Security
3 topics

Machine learning paradigms and model architectures

  • Identify supervised, unsupervised, and reinforcement learning paradigms and describe how each creates distinct attack surfaces in classification, clustering, and sequential decision-making systems.
  • Describe neural network architectures including CNNs, RNNs, LSTMs, and transformers and explain how their structural properties create exploitable weaknesses in input processing, attention mechanisms, and output generation.
  • Identify the components of a model training pipeline including data collection, preprocessing, feature engineering, training, validation, and deployment that represent potential attack insertion points.
  • Implement a model analysis workflow that extracts architecture metadata, identifies framework versions, maps API endpoints, and enumerates model capabilities to characterize the target attack surface.

AI threat landscape and attack taxonomy

  • Describe the MITRE ATLAS framework and identify how adversarial ML tactics, techniques, and procedures map to real-world attack scenarios across the AI/ML system lifecycle.
  • Identify the OWASP Top 10 for LLM Applications including prompt injection, insecure output handling, training data poisoning, and supply chain vulnerabilities with their risk ratings and attack vectors.
  • Analyze the AI threat landscape by categorizing attack motivations, threat actor capabilities, target system types, and attack complexity levels to assess the risk profile of deployed AI systems.

AI model supply chain attack surface

  • Identify AI supply chain attack vectors including compromised model repositories, malicious model weights, backdoored pre-trained models, and dependency hijacking in ML framework packages.
  • Implement supply chain reconnaissance against ML model registries by enumerating public model repositories, identifying unprotected model endpoints, and mapping framework dependency trees for exploitation.
  • Analyze the exploitability of AI supply chain weaknesses by evaluating model provenance gaps, serialization format vulnerabilities, and trust boundary violations in model deployment pipelines.
2 Adversarial Machine Learning
3 topics

Evasion attacks and adversarial examples

  • Describe evasion attack methodologies including FGSM, PGD, C&W, and DeepFool and explain how gradient-based perturbations cause misclassification in image, text, and audio classifiers.
  • Implement adversarial example generation using gradient-based methods to craft perturbations that evade image classifiers, malware detectors, and network intrusion detection systems.
  • Apply black-box evasion techniques including transfer attacks, query-based attacks, and decision boundary estimation to bypass models where internal parameters and gradients are not directly accessible.
  • Analyze evasion attack effectiveness by measuring perturbation magnitude, transferability across models, detection resistance, and real-world deployment constraints to assess operational viability.

Data poisoning and backdoor attacks

  • Describe data poisoning attack types including label flipping, clean-label attacks, backdoor injection, and training data manipulation and explain their impact on model integrity and reliability.
  • Implement backdoor injection attacks that embed hidden triggers into training datasets to cause targeted misclassification when specific patterns are present in inference inputs.
  • Apply supply chain poisoning techniques that compromise publicly available datasets, pre-trained models, and model registries to inject persistent backdoors into downstream AI applications.
  • Evaluate poisoning attack stealth and persistence by analyzing trigger specificity, model accuracy preservation, detection resistance against spectral signatures, and longevity across model updates.

Model extraction and inversion

  • Identify model extraction attack vectors including query-based model stealing, side-channel extraction, and API abuse techniques used to replicate proprietary model functionality.
  • Implement model extraction attacks using systematic query strategies to approximate target model decision boundaries and train surrogate models with comparable classification accuracy.
  • Apply model inversion and membership inference techniques to extract training data characteristics, determine whether specific records were used in training, and recover sensitive attributes from model outputs.
  • Analyze extraction attack fidelity by comparing surrogate model accuracy, query budget requirements, detection likelihood, and intellectual property exposure risk across different model architectures.
3 LLM Attack Techniques
4 topics

Prompt injection and jailbreaking

  • Describe direct and indirect prompt injection attack vectors and explain how user input manipulation, context window poisoning, and hidden instruction embedding bypass LLM safety guardrails.
  • Implement prompt injection attacks including role-playing exploits, payload splitting, encoding-based bypasses, and multi-turn escalation to extract restricted information or override system instructions.
  • Apply jailbreaking techniques including DAN prompts, hypothetical scenario framing, language translation exploits, and token smuggling to defeat content moderation and output filtering mechanisms.
  • Evaluate prompt injection resilience by testing multiple attack vectors, measuring bypass success rates, assessing guardrail robustness, and identifying systemic weaknesses in LLM deployment architectures.

LLM data exfiltration and system prompt extraction

  • Identify system prompt extraction techniques including instruction leaking, reflective prompting, and boundary testing that reveal confidential system instructions and business logic embedded in LLM applications.
  • Implement training data extraction attacks using divergence prompting, memorization exploitation, and verbatim reproduction techniques to recover sensitive data ingested during LLM training.
  • Analyze data leakage risks in RAG-enabled applications by evaluating retrieval mechanism vulnerabilities, context window exposure, and cross-tenant data isolation failures in multi-user LLM deployments.

LLM agent and tool-use exploitation

  • Describe LLM agent attack surfaces including tool-calling injection, function argument manipulation, plugin chain exploitation, and autonomous action hijacking in agentic AI systems.
  • Implement attacks against LLM agent systems by crafting malicious tool inputs, manipulating chain-of-thought reasoning, and exploiting permission boundaries to achieve unauthorized actions.
  • Evaluate LLM agent security by analyzing tool permission models, output validation mechanisms, action approval workflows, and sandboxing implementations to identify exploitation paths.

LLM fine-tuning and alignment exploitation

  • Describe fine-tuning attack surfaces including RLHF reward hacking, alignment bypass through adversarial fine-tuning datasets, and safety guardrail degradation via targeted training data manipulation.
  • Implement fine-tuning exploitation techniques that inject malicious behaviors through poisoned training examples while preserving model performance on benign evaluation benchmarks.
  • Evaluate fine-tuning attack persistence by measuring behavior retention across model updates, safety evaluation evasion rates, and detectability under alignment auditing procedures.
4 AI-Powered Reconnaissance and OSINT
3 topics

Automated OSINT and social profiling

  • Describe how AI enhances OSINT collection through automated social media scraping, natural language processing of public documents, entity extraction, and relationship graph construction from open sources.
  • Implement AI-powered reconnaissance workflows that use LLMs for target profiling, NLP for credential pattern detection in paste sites, and classification models for organizational structure mapping.
  • Analyze the intelligence value of AI-gathered OSINT by assessing source reliability, information accuracy, relevance to engagement objectives, and operational security implications of collection methods.

AI-assisted vulnerability discovery

  • Identify AI-assisted vulnerability discovery methods including ML-guided fuzzing, code pattern recognition, automated vulnerability prediction, and intelligent attack surface enumeration.
  • Apply AI-enhanced fuzzing techniques using coverage-guided mutation strategies, reinforcement learning for input generation, and neural network-based crash triage to accelerate vulnerability discovery.
  • Evaluate AI-assisted vulnerability discovery effectiveness by comparing coverage depth, unique bug discovery rates, false positive rates, and resource efficiency against traditional fuzzing and scanning approaches.

AI-powered credential and password attacks

  • Describe how neural network models generate context-aware password guesses using leaked credential corpuses, Markov chains, and generative adversarial networks to surpass traditional dictionary attacks.
  • Implement AI-assisted credential stuffing workflows that combine leaked credential databases with ML-driven password mutation, target prioritization, and rate limit evasion strategies.
5 Deepfakes and Synthetic Media for Social Engineering
3 topics

Deepfake generation techniques

  • Describe deepfake generation architectures including GANs, autoencoders, diffusion models, and neural voice synthesis and explain how they produce realistic face swaps, voice clones, and video manipulations.
  • Implement voice cloning attacks using text-to-speech synthesis with speaker embedding extraction to generate convincing impersonation audio for vishing and business email compromise campaigns.
  • Apply real-time face-swap and video manipulation tools to create convincing deepfake video calls for social engineering scenarios targeting executive impersonation and identity verification bypass.
  • Evaluate deepfake quality and detection evasion by assessing visual artifacts, audio naturalness, temporal consistency, and resistance to forensic detection tools and human perceptual analysis.

Synthetic content for social engineering

  • Describe how LLM-generated text enables scalable spear phishing, pretexting, and credential harvesting campaigns with personalized content that bypasses traditional email security filters.
  • Implement AI-generated phishing campaigns using LLMs for contextually relevant lure creation, target-specific personalization, and multi-language content generation at scale.
  • Analyze the effectiveness of AI-enhanced social engineering by comparing click rates, credential capture rates, and detection avoidance against manually crafted campaigns to quantify the AI advantage.

Detection evasion for synthetic content

  • Identify deepfake detection methods including frequency analysis, facial landmark inconsistency detection, audio spectral analysis, and neural network-based classifiers used by defensive systems.
  • Apply anti-forensic techniques to synthetic media including compression artifact manipulation, temporal smoothing, and adversarial perturbation injection to evade automated deepfake detection systems.
6 AI-Enhanced Exploitation Tools
3 topics

AI-assisted payload generation and adaptation

  • Identify AI-powered exploitation tools including LLM-assisted code generation for exploit development, neural network-based payload obfuscation, and ML-guided C2 communication evasion techniques.
  • Implement AI-assisted payload generation using LLMs for polymorphic code creation, obfuscation technique selection, and automated adaptation to evade signature-based detection systems.
  • Apply ML-based evasion techniques to adapt malware behavior in response to sandbox environments, EDR heuristics, and behavioral analysis engines using reinforcement learning and environment fingerprinting.
  • Evaluate AI-enhanced exploitation tool effectiveness by comparing detection evasion rates, payload delivery success, operational flexibility, and attribution resistance against conventional exploitation methods.

AI-driven lateral movement and automation

  • Describe how reinforcement learning and graph neural networks optimize lateral movement paths, privilege escalation decisions, and persistence mechanism selection during post-exploitation operations.
  • Implement AI-automated post-exploitation workflows that use network topology analysis, credential graph traversal, and adaptive decision-making to autonomously expand access within compromised environments.
  • Analyze the operational trade-offs of AI-automated versus human-directed post-exploitation including speed, stealth, adaptability, error rates, and the risk of unintended collateral impact during engagements.

Ethical and legal considerations

  • Identify legal frameworks and ethical boundaries governing offensive AI research including rules of engagement, responsible disclosure, dual-use technology restrictions, and authorized testing scope limitations.
  • Evaluate the ethical implications of AI-enhanced offensive capabilities by assessing proportionality, potential for misuse, dual-use concerns, and the balance between security research advancement and harm prevention.

Scope

Included Topics

  • All domains covered by the GIAC Offensive AI Analyst certification (GOAA) aligned with SANS SEC535: AI/ML Fundamentals for Offensive Security, Adversarial Machine Learning (evasion, poisoning, extraction), LLM Attack Techniques (prompt injection, jailbreaking), AI-Powered Reconnaissance and OSINT, Deepfakes and Synthetic Media for Social Engineering, and AI-Enhanced Exploitation Tools.
  • Machine learning foundations relevant to offensive security: supervised and unsupervised learning, neural network architectures (CNNs, RNNs, transformers), model training pipelines, feature engineering, and inference mechanisms that attackers exploit.
  • Adversarial machine learning attack taxonomy: evasion attacks (adversarial examples, perturbation techniques), data poisoning (backdoor injection, label flipping), model extraction (query-based stealing, side-channel extraction), and model inversion (membership inference, attribute inference).
  • Large language model attack techniques: direct and indirect prompt injection, jailbreaking methods, system prompt extraction, training data extraction, fine-tuning exploitation, and retrieval-augmented generation manipulation.
  • AI-powered offensive reconnaissance: automated OSINT collection, social media profiling, network mapping with ML, automated vulnerability discovery, and intelligent fuzzing with reinforcement learning.
  • Deepfake and synthetic media threats: face synthesis and swapping, voice cloning, video manipulation, synthetic text generation for phishing, and detection evasion techniques for synthetic content.
  • AI-enhanced exploitation: AI-assisted payload generation, automated exploit adaptation, machine learning for lateral movement optimization, and AI-driven social engineering campaigns.

Not Covered

  • Defensive AI and ML model hardening techniques that are the focus of blue team certifications rather than offensive AI analysis.
  • Advanced mathematical proofs and theoretical machine learning research that exceeds the practical offensive security application scope of SEC535.
  • Enterprise AI governance, ethics frameworks, and responsible AI policies outside the attacker-centric perspective of this certification.
  • Traditional penetration testing techniques without AI/ML integration that are covered by other GIAC offensive certifications like GPEN.
  • Production ML engineering, MLOps pipelines, and model deployment infrastructure outside the context of attack surface analysis.

Official Exam Page

Learn more at GIAC Certifications

Visit

GOAA is coming soon

Adaptive learning that maps your knowledge and closes your gaps.

Create Free Account to Be Notified

Trademark Notice

GIAC® is a registered trademark of Global Information Assurance Certification (a subsidiary of the SANS Institute). GIAC does not endorse this product.

AccelaStudy® and Renkara® are registered trademarks of Renkara Media Group, Inc. All third-party marks are the property of their respective owners and are used for nominative identification only.