AAIA
Coming Soon
Availability will be announced soon

This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.


AAIA

The AAIA certification teaches experienced CISA‑qualified auditors how to assess AI governance, manage AI‑related risk, and apply advanced auditing techniques to AI systems, ensuring compliance and strategic value.

Exam Length: 150 minutes
Questions: 90
Passing Score: 450/800
Exam Cost: $459 (member) / $599 (non-member)

Who Should Take This

CISA‑certified IT auditors who have led enterprise‑wide control assessments and now focus on AI initiatives are ideal candidates. They seek to deepen expertise in AI governance, risk mitigation, and audit toolsets to guide organizations in trustworthy AI deployment and to align AI projects with regulatory frameworks.

What's Covered

All domains and objectives in the ISACA Advanced in AI Audit (AAIA) exam: Domain 1, AI Governance and Risk (33%); Domain 2, AI Operations (46%); and Domain 3, AI Auditing Tools and Techniques (21%).

What's Included in AccelaStudy® AI

Adaptive Knowledge Graph
Practice Questions
Lesson Modules
Console Simulator Labs
Exam Tips & Strategy
20 Activity Formats

Course Outline

33 learning goals
Domain 1: AI Governance and Risk (5 topics)

AI models and considerations

  • Analyze AI model types including supervised learning, unsupervised learning, reinforcement learning, and generative AI to determine their risk profiles and audit implications.
  • Evaluate organizational AI readiness by assessing data maturity, technical infrastructure, talent availability, and cultural preparedness for AI adoption.
  • Apply AI use case evaluation criteria to advise stakeholders on selecting appropriate AI approaches based on data availability, complexity, and business requirements.

AI governance and program management

  • Evaluate AI governance structures including AI oversight committees, model risk management functions, and accountability frameworks for AI-driven decisions.
  • Apply AI policy development frameworks to assess organizational AI usage policies, acceptable use guidelines, and model deployment approval processes.
  • Design AI governance audit programs that comprehensively assess governance maturity, policy effectiveness, and stakeholder engagement across the AI lifecycle.

AI risk management

  • Apply AI risk frameworks including NIST AI RMF to identify and categorize AI-specific risks such as bias, drift, adversarial attacks, and hallucination.
  • Evaluate AI model risk assessment processes to determine adequacy of risk identification, impact analysis, and risk treatment for deployed AI systems.
  • Analyze AI supply chain risks including third-party model dependencies, training data provenance, and pre-trained model vulnerabilities for audit planning.

Privacy and data governance for AI

  • Evaluate data governance programs for AI including training data quality, data lineage, consent management, and data bias assessment mechanisms.
  • Apply privacy impact assessment techniques to evaluate AI systems that process personal data including automated decision-making and profiling activities.
  • Recommend data governance improvements for AI systems based on audit findings related to data quality, completeness, representativeness, and privacy compliance.

AI ethics, regulations, and standards

  • Analyze AI regulatory requirements including the EU AI Act, sector-specific AI regulations, and emerging AI governance standards to assess organizational compliance.
  • Evaluate responsible AI practices including fairness, transparency, accountability, and human oversight mechanisms against organizational AI ethics commitments.
  • Apply ethical AI assessment frameworks to audit AI systems for potential harm, discrimination, and unintended consequences across diverse populations.
Domain 2: AI Operations (4 topics)

AI model lifecycle management

  • Evaluate AI model development processes including data preparation, feature engineering, model training, validation, and testing for control adequacy.
  • Apply model validation techniques to assess model performance, generalizability, robustness, and fitness for intended use before deployment approval.
  • Evaluate model version control, experiment tracking, and model registry practices to assess reproducibility and auditability of AI development processes.

AI deployment and monitoring

  • Evaluate AI deployment governance including approval workflows, staged rollout procedures, rollback mechanisms, and production readiness assessments.
  • Apply operational monitoring techniques to assess model drift detection, performance degradation alerts, and automated retraining triggers in production AI systems.
  • Analyze AI system incidents and failures to evaluate root cause analysis processes, impact assessment, and corrective action effectiveness.
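The model drift detection mentioned above can be sketched with the Population Stability Index (PSI), one commonly used drift metric. This is an illustrative example only, not exam material; the bin count and epsilon floor are assumptions of the sketch:

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (e.g. model
    scores at training time) and a production sample. Larger values
    mean the production distribution has drifted from the baseline."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        # Floor each bucket at a tiny epsilon so the log stays defined.
        return [max(counts.get(i, 0) / total, 1e-6) for i in range(bins)]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.2 as significant drift warranting investigation.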

AI bias and explainability

  • Apply bias detection methodologies to evaluate AI models for discriminatory outcomes across protected characteristics including race, gender, age, and disability.
  • Evaluate AI explainability mechanisms including SHAP values, LIME, attention visualization, and model cards to assess transparency and interpretability of AI decisions.
  • Recommend bias mitigation strategies including pre-processing, in-processing, and post-processing techniques based on audit findings and organizational risk tolerance.
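One of the bias detection methodologies above can be illustrated with the disparate impact ratio, a standard group-fairness metric. The function names are ours and the four-fifths (0.8) rule is a common heuristic, not part of the exam syllabus:

```python
def selection_rates(outcomes, groups):
    """Favorable-outcome rate per group (outcome 1 = favorable,
    e.g. a loan approval produced by an AI model)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return rates

def disparate_impact_ratio(outcomes, groups, privileged):
    """Lowest unprivileged selection rate divided by the privileged
    group's rate. The 'four-fifths rule' flags ratios below 0.8
    as potential adverse impact."""
    rates = selection_rates(outcomes, groups)
    lowest_other = min(r for g, r in rates.items() if g != privileged)
    return lowest_other / rates[privileged]
```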

AI organizational readiness

  • Evaluate organizational change management for AI adoption including training programs, role transitions, and cultural readiness assessments.
  • Apply AI maturity models to assess organizational capabilities in data management, model development, deployment, and governance relative to industry benchmarks.
  • Design AI operational readiness audit programs that assess infrastructure, talent, processes, and governance mechanisms for successful AI deployment.
Domain 3: AI Auditing Tools and Techniques (2 topics)

AI audit planning and methodology

  • Design AI-specific audit plans that address model risk, data quality, algorithmic fairness, and regulatory compliance within the AI system lifecycle.
  • Apply AI audit scoping techniques to identify audit boundaries, determine testing approaches, and allocate resources for AI system assessments.
  • Evaluate AI testing and sampling methodologies to select appropriate techniques for validating model outputs, training data quality, and control effectiveness.

AI audit evidence and reporting

  • Apply evidence collection techniques specific to AI systems including model documentation review, output testing, data lineage verification, and algorithm inspection.
  • Apply AI-enabled audit tools including data analytics, anomaly detection, and automated testing to enhance audit efficiency and coverage.
  • Design AI audit reports that communicate technical findings, risk implications, and actionable recommendations to both technical and non-technical stakeholders.
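The anomaly detection named among the AI-enabled audit tools above can be sketched with a simple z-score screen. This is an illustrative sketch, not an official technique from the exam; the threshold of 3 is an assumption:

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold,
    a simple screen an auditor might run over model inference logs
    or transaction amounts."""
    mean = statistics.fmean(values)
    spread = statistics.pstdev(values)
    if spread == 0:
        return []  # no variation, nothing to flag
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / spread > threshold]
```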

Scope

Included Topics

  • All domains and objectives in the ISACA Advanced in AI Audit (AAIA) exam: Domain 1 AI Governance and Risk (33%), Domain 2 AI Operations (46%), and Domain 3 AI Auditing Tools and Techniques (21%).
  • Advanced-level AI audit knowledge including AI governance frameworks, AI risk assessment, AI ethics and responsible AI, data governance for AI, and AI regulatory compliance.
  • AI governance: AI strategy alignment, AI policy development, ethical AI frameworks, AI model governance, AI risk appetite, AI-related regulatory requirements (EU AI Act, NIST AI RMF), and stakeholder management.
  • AI operations: AI model lifecycle management, training data management, model validation and testing, bias detection, explainability assessment, AI deployment governance, and operational monitoring of AI systems.
  • AI auditing: audit planning for AI systems, testing AI controls, evidence collection for AI audits, AI-specific sampling methodologies, data analytics in AI audit, reporting on AI risks, and AI-enabled audit tools.
  • Privacy and data governance for AI: training data privacy, consent management for AI processing, data quality assurance, data lineage tracking, and privacy-preserving AI techniques.

Not Covered

  • General IT audit procedures not specific to AI systems (covered by CISA).
  • AI security management and operational security controls (covered by AAISM).
  • AI-specific risk management frameworks and risk program management (covered by AAIR).
  • Deep machine learning engineering and model architecture design beyond audit scope.
  • Vendor-specific AI platform administration and MLOps toolchain configuration.

Official Exam Page

Learn more at ISACA



Trademark Notice

ISACA®, CISA®, CISM®, CRISC®, CGEIT®, and CDPSE® are registered trademarks of ISACA. ISACA does not endorse this product.

AccelaStudy® and Renkara® are registered trademarks of Renkara Media Group, Inc. All third-party marks are the property of their respective owners and are used for nominative identification only.