Generative AI Fundamentals
Coming Soon
Expected availability will be announced soon

This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.


Generative AI Fundamentals

The ISACA AI Fundamentals Certificate covers the foundational concepts of artificial intelligence as they apply to enterprise risk, audit, security, and governance. It is the entry-level counterpart to the AAIA / AAIR / AAISM advanced certificates and validates a conceptual understanding of AI systems, their lifecycle, common risks, and governance.

Who Should Take This

Auditors, risk professionals, IT generalists, and business leaders who need a working vocabulary for AI without becoming practitioners. Assumes basic technology literacy. Learners finish able to discuss AI systems with practitioners, recognize common risks, and identify the governance levers an organization can apply to AI deployments.

What's Included in AccelaStudy® AI

Adaptive Knowledge Graph
Practice Questions
Lesson Modules
Console Simulator Labs
Exam Tips & Strategy
13 Activity Formats

Course Outline

1. AI Fundamentals
3 topics

AI, ML, and Deep Learning

  • Distinguish AI, machine learning, deep learning, and generative AI as nested concepts of decreasing scope.
  • Distinguish supervised, unsupervised, and reinforcement learning, and identify a representative use case for each.

Foundation Models and LLMs

  • Define a foundation model and explain how it differs from task-specific models trained from scratch.
  • Recognize large language models (LLMs) as the dominant foundation-model family and name common examples (Claude, GPT, Gemini, Llama).
  • Apply foundation-model selection criteria for a use case: capability fit, latency, cost, data residency, and licensing.

Generative AI Capabilities

  • Identify the common generative-AI capabilities: text generation, summarization, classification, translation, code generation, image generation, multimodal.
  • Identify retrieval-augmented generation (RAG) and fine-tuning as the two dominant patterns for grounding generative AI in enterprise knowledge.
2. AI Lifecycle
3 topics

Data and Training

  • Identify the typical AI lifecycle stages: problem framing, data acquisition, preprocessing, training, evaluation, deployment, monitoring.
  • Identify training data quality issues: bias, drift, label noise, leakage, sampling bias, and personal-data inclusion.

Evaluation and Validation

  • Identify common evaluation metrics: accuracy, precision/recall/F1 for classification, perplexity / BLEU / ROUGE for generation, win-rate for RLHF.
  • Apply red-team evaluation to a generative AI deployment, identifying jailbreaks, prompt injections, and policy violations.
  • Analyze an evaluation report that shows high accuracy on test data but poor real-world performance and identify likely root causes (distribution shift, leakage).
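The classification metrics named in these objectives follow directly from the confusion-matrix counts. A minimal sketch with hypothetical labels:

```python
# Precision, recall, and F1 for a binary classifier, computed
# from hypothetical true/predicted labels (standard formulas).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)  # 0.75 0.75 0.75
```

High accuracy on held-out data can coexist with poor production performance precisely because these metrics are only as representative as the test distribution, which is the root-cause analysis the last objective asks for.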

Deployment and Monitoring

  • Identify deployment patterns: API-only, on-device, edge, hybrid, and the trade-offs of each.
  • Apply post-deployment monitoring: drift detection, output sampling, abuse detection, cost tracking.
3. AI Risks
4 topics

Bias and Fairness

  • Define algorithmic bias and identify the typical sources: training-data bias, label bias, sample bias, deployment context bias.
  • Apply fairness assessment to a hiring scenario where the AI screens resumes and identify protected-attribute concerns.

Hallucination and Reliability

  • Define hallucination and identify common mitigation patterns: RAG with citations, structured output validation, human-in-the-loop review.
  • Apply hallucination-detection patterns: groundedness checking against retrieval sources, citation-link validation, scope guards.
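Groundedness checking, as named above, can be illustrated with a toy overlap check; a production system would use an entailment model rather than token overlap, and the function and threshold here are illustrative:

```python
# Toy groundedness check: flag generated sentences whose content
# words do not appear in any retrieved source passage.
def grounded(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    # Content words: longer tokens, lowercased, trailing punctuation stripped.
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return True
    source_text = " ".join(sources).lower()
    covered = sum(1 for w in words if w in source_text)
    return covered / len(words) >= threshold  # fraction supported by sources

sources = ["The EU AI Act defines four risk tiers for AI systems."]
print(grounded("The EU AI Act defines four risk tiers.", sources))       # True
print(grounded("The Act mandates quarterly board attestations.", sources))  # False
```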

Privacy and Data Leakage

  • Identify training-data memorization and inference-time prompt leakage as two distinct privacy concerns for generative AI.
  • Apply privacy controls: data minimization in training, prompt filtering, output sanitization, and contractual terms with model providers (no-training-on-customer-data).

Adversarial and Security Threats

  • Identify the major AI adversarial threats: prompt injection, jailbreaking, model extraction, data poisoning, adversarial inputs, and tool abuse in agentic systems.
  • Apply MITRE ATLAS as a knowledge base of adversarial-ML tactics and techniques and identify how it complements ATT&CK.
  • Analyze a 'prompt injection via retrieved document' scenario and trace the controls (retrieval allowlist, prompt boundaries, output handling) that should prevent it.
4. AI Governance
4 topics

Frameworks and Standards

  • Identify the NIST AI Risk Management Framework (AI RMF) and its four core functions: Govern, Map, Measure, Manage.
  • Identify ISO/IEC 42001 as the AI management system standard and describe its relationship to ISO/IEC 27001.

Regulation and the EU AI Act

  • Identify the EU AI Act risk tiers (unacceptable, high, limited, minimal) and the kinds of systems that fall into each.
  • Apply EU AI Act risk-tier classification to a sample system (resume screener, customer service chatbot, social-scoring system) and identify the obligations.
  • Analyze the differences among the EU AI Act, the US Executive Order on AI, and emerging state-level AI laws, and identify common compliance themes.

Responsible AI Principles

  • Identify the canonical responsible-AI principles: fairness, transparency, accountability, privacy, security, safety, human oversight.
  • Apply responsible-AI principles to a sample AI deployment proposal and identify which principles are at risk.

Human Oversight

  • Distinguish human-in-the-loop, human-on-the-loop, and human-out-of-the-loop and identify a representative use case for each.
  • Apply human-oversight selection for systems of increasing autonomy: classification, recommendation, decision-support, autonomous action.
5. AI in the Enterprise
4 topics

Build vs Buy

  • Identify the build-vs-buy spectrum: foundation-model API, hosted service, fine-tuned model, custom-trained model.
  • Apply the build-vs-buy decision for a customer service automation use case and identify the cost, risk, and capability trade-offs.

Third-Party Model Risk

  • Identify the unique vendor-risk concerns for third-party AI: model provenance, training-data lineage, no-training-on-customer-data clauses, output liability.
  • Apply vendor-risk-assessment questions specific to AI providers and identify the highest-priority due-diligence items.

Use Case Patterns

  • Identify common enterprise AI use-case patterns: classification, summarization, generation, search, agentic automation, decision support.
  • Apply pattern selection for a sample business problem (e.g., support-ticket triage) and identify the appropriate pattern, evaluation criteria, and risks.

Cost and Sustainability

  • Identify the major cost drivers for AI deployments: inference cost, training cost, data infrastructure, observability, compliance.
  • Apply FinOps-style cost-control patterns to an LLM-based product: caching, response-length controls, model tiering, rate limiting.
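Two of the cost-control patterns above, caching and rate limiting, can be sketched together. The names and limits here are illustrative, not a real product's API:

```python
# Sketch of FinOps-style LLM cost controls: a response cache keyed
# on the normalized prompt, plus a per-user sliding-window rate limit.
import hashlib
import time
from collections import defaultdict, deque

cache: dict[str, str] = {}
calls: dict[str, deque] = defaultdict(deque)
RATE_LIMIT = 10   # max model calls per user per window (illustrative)
WINDOW = 60.0     # seconds

def cached_answer(user: str, prompt: str, call_model) -> str:
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in cache:
        return cache[key]                 # cache hit: no model cost
    window = calls[user]
    now = time.monotonic()
    while window and now - window[0] > WINDOW:
        window.popleft()                  # drop calls outside the window
    if len(window) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    window.append(now)
    cache[key] = call_model(prompt)       # cache miss: pay once, reuse after
    return cache[key]
```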
6. Career and Continuous Learning
3 topics

AI Roles and Skills

  • Identify common AI-related roles: ML engineer, data scientist, MLOps engineer, AI product manager, AI auditor, AI risk officer, AI safety researcher.
  • Identify the ISACA advanced AI certificate track (AAIA Audit, AAIR Risk, AAISM Security Management) as the natural progression from AI Fundamentals.

Communicating About AI

  • Apply audience-aware communication: explain a generative AI system's capability and risk to a non-technical executive, a regulator, and a security engineer.

Staying Current

  • Recognize the rapid pace of AI evolution and identify reliable continuous-learning sources: arXiv, model-card releases, NIST publications, the ISACA AI Hub, and measured vendor blogs.
  • Analyze hype-cycle artifacts (vendor claims, leaderboards, benchmarks) and propose a critical-evaluation routine that resists overstated claims.
7. AI in Practice
6 topics

Agentic AI

  • Define an AI agent and identify the components: model, tools, memory, planner, observability.
  • Identify common agentic-AI risks: tool abuse, infinite loops, prompt injection via retrieved content, unintended privileged action.
  • Apply agent design boundaries: read-only by default, explicit allowlist for write actions, human-in-the-loop for high-impact actions.
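The "read-only by default" boundary above amounts to gating every tool call: reads pass, writes must be on an explicit allowlist, and high-impact actions require human approval. A minimal sketch with hypothetical tool names:

```python
# Agent tool gate: read-only by default, allowlisted writes,
# human-in-the-loop for high-impact actions. Tool names are hypothetical.
READ_TOOLS = {"search_docs", "get_ticket"}
WRITE_ALLOWLIST = {"add_ticket_comment"}
HIGH_IMPACT = {"close_ticket", "issue_refund"}

def gate(tool: str, approved_by_human: bool = False) -> bool:
    if tool in READ_TOOLS:
        return True                   # reads are allowed by default
    if tool in HIGH_IMPACT:
        return approved_by_human      # human approval required
    return tool in WRITE_ALLOWLIST    # other writes: explicit allowlist only

print(gate("search_docs"))     # True
print(gate("issue_refund"))    # False without human approval
print(gate("delete_account"))  # False: not on any list
```

Note the default for an unknown tool is deny, which is the posture that contains prompt-injection-driven tool abuse.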

Retrieval-Augmented Generation (RAG)

  • Define RAG and identify its components: corpus, chunker, embedder, vector store, retriever, generator, citation surfacer.
  • Apply RAG-quality evaluation: retrieval precision/recall, groundedness, citation accuracy, freshness.
  • Analyze a RAG deployment that confidently cites incorrect documents and identify root causes (chunking errors, semantic mismatch, retrieval drift).
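Retrieval precision and recall, the first two RAG-quality signals named above, reduce to set arithmetic over one query. The document IDs here are hypothetical:

```python
# Retrieval precision/recall for a single RAG query: what fraction
# of retrieved chunks were relevant, and what fraction of the
# relevant chunks were retrieved.
retrieved = ["doc-12", "doc-07", "doc-33", "doc-05"]
relevant = {"doc-12", "doc-05", "doc-19"}

hits = [d for d in retrieved if d in relevant]
precision = len(hits) / len(retrieved)   # 2/4 = 0.5
recall = len(hits) / len(relevant)       # 2/3 ≈ 0.67

print(precision, recall)
```

A system that cites confidently but incorrectly typically shows high generation fluency with low retrieval precision, which is why these are measured separately.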

Fine-Tuning vs Prompting

  • Distinguish prompt engineering, in-context learning, RAG, and fine-tuning as four distinct approaches to specialize a foundation model.
  • Apply selection guidance: RAG for fresh knowledge, fine-tuning for domain style, prompting for behavior steering, custom training for novel capability.

Observability for AI

  • Identify the AI-specific observability signals: prompt/response logs, retrieval logs, tool-call logs, latency and cost per call, hallucination rate, user-feedback rate.
  • Apply LLM-observability tooling (LangSmith, Helicone, Arize, custom) to a deployed agent and identify the must-capture spans.

Cost Optimization

  • Identify the major LLM-cost levers: model tier (Haiku vs Sonnet vs Opus), prompt caching, response-length limits, batching, model routing.
  • Apply prompt caching to a high-volume RAG application and analyze the typical cost reduction from cache hits on a stable system prompt.
  • Analyze a cost overrun where an agent's runaway tool calls drove a 10x bill and propose detective and preventive controls.
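A preventive control for the runaway-tool-call scenario above is a per-run budget that halts the agent loop before costs compound. The class name and limits are illustrative:

```python
# Per-run agent budget: caps both call count and accumulated spend,
# raising before a runaway loop can compound the bill.
class ToolBudget:
    def __init__(self, max_calls: int = 25, max_cost_usd: float = 1.0):
        self.max_calls = max_calls
        self.max_cost = max_cost_usd
        self.calls = 0
        self.cost = 0.0

    def charge(self, cost_usd: float) -> None:
        self.calls += 1
        self.cost += cost_usd
        if self.calls > self.max_calls or self.cost > self.max_cost:
            raise RuntimeError("agent budget exceeded; halting run")
```

The matching detective control is simply alerting on the same counters after the fact, so one metric pipeline serves both purposes.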

Open Source vs Proprietary

  • Identify the trade-offs between open-source models (Llama, Qwen, DeepSeek) and proprietary models (Claude, GPT, Gemini): data residency, cost predictability, performance per dollar, customization depth.
  • Apply open-source vs proprietary selection for a workload that requires complete data control and identify the operational obligations of self-hosting (infra, security patching, evaluation, monitoring).

Scope

Included Topics

  • AI fundamentals: ML, deep learning, generative AI, foundation models, LLMs.
  • AI lifecycle: data acquisition, model training, evaluation, deployment, monitoring.
  • Common AI risks: bias, hallucination, privacy leakage, prompt injection, data poisoning.
  • AI governance frameworks: NIST AI RMF, ISO/IEC 42001, EU AI Act overview.
  • AI in enterprise: representative use cases, build-vs-buy, third-party model risk.
  • Responsible AI principles: fairness, transparency, accountability, privacy, safety.
  • Human oversight: human-in-the-loop, human-on-the-loop, autonomy levels.
  • AI security at conceptual depth: model theft, adversarial inputs, data poisoning.
  • Career and certification pathway from AI Fundamentals to AAIA / AAIR / AAISM advanced track.

Not Covered

  • Hands-on ML training, fine-tuning, or deployment.
  • Mathematical depth on neural networks beyond intuitive description.

Generative AI Fundamentals is coming soon

Adaptive learning that maps your knowledge and closes your gaps.

Create Free Account to Be Notified