AIF-C01

AI Practitioner

The certification teaches core AI/ML concepts, generative AI fundamentals, foundation model applications, responsible AI practices, and security, compliance, and governance on AWS, enabling practitioners to build and evaluate AI solutions.

120
Minutes
65
Questions
700/1000
Passing Score
$150
Exam Cost
6
Languages

Who Should Take This

This certification is designed for IT professionals, developers, and data analysts who have general technical experience but no prior machine-learning background, and who want to gain baseline fluency in AI/ML and generative AI on AWS so they can support project teams, ensure responsible AI deployment, and meet security and governance requirements.

What's Covered

1 Explain basic AI and ML concepts including supervised, unsupervised, and reinforcement learning paradigms, and the end-to-end ML development lifecycle.
2 Describe generative AI concepts including foundation models, large language models, transformers, tokens, embeddings, and multimodal generation.
3 Identify use cases and design patterns for foundation models including prompt engineering, RAG, fine-tuning, and agent-based architectures on AWS.
4 Describe responsible AI principles including fairness, transparency, explainability, robustness, privacy, toxicity detection, and bias mitigation.
5 Identify security and governance practices for AI workloads including IAM, data protection, model endpoint security, and compliance frameworks.

Exam Structure

Question Types

  • Multiple Choice
  • Multiple Response

Scoring Method

Scaled scoring from 100 to 1000, minimum passing score of 700

Delivery Method

Pearson VUE testing center or online proctored

Recertification

Recertify every 3 years by passing the current exam or earning a higher-level AWS certification.

What's Included in AccelaStudy® AI

Adaptive Knowledge Graph
Practice Questions
Lesson Modules
Console Simulator Labs
Exam Tips & Strategy
20 Activity Formats

Course Outline

72 learning goals
1 Domain 1: Fundamentals of AI and ML
4 topics

Explain basic AI concepts and terminologies

  • Define artificial intelligence, machine learning, and deep learning and explain how they relate as nested disciplines within the AI field.
  • Distinguish between supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning by their data requirements and feedback mechanisms.
  • Identify common ML model types including regression, classification, clustering, anomaly detection, and recommendation and explain when each is appropriate.
  • Apply knowledge of AI problem types to map business problem statements to suitable learning paradigms and model categories.

Identify practical use cases for AI and ML

  • Identify practical AI use cases across industries including natural language processing, computer vision, forecasting, personalization, and fraud detection scenarios.
  • Determine when to use AI/ML versus traditional rule-based approaches by evaluating data availability, problem complexity, and expected business value.
  • Evaluate use case feasibility by weighing data quality, labeling effort, implementation risk, operational cost, and organizational readiness constraints.

Describe the ML development lifecycle

  • Describe the stages of the ML development lifecycle including business problem framing, data collection, data preparation, feature engineering, model training, evaluation, deployment, and monitoring.
  • Explain data preparation activities including data cleaning, handling missing values, feature selection, data splitting into training/validation/test sets, and the role of labeled data.
  • Identify model evaluation concepts including accuracy, precision, recall, F1 score, AUC-ROC, confusion matrices, overfitting, underfitting, and cross-validation techniques.
  • Analyze lifecycle bottlenecks and determine corrective actions for data quality issues, model drift, retraining triggers, and continuous monitoring requirements.
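The evaluation concepts above can be made concrete with a minimal sketch. This toy Python snippet (values are invented for illustration) computes accuracy, precision, recall, and F1 from the four confusion-matrix counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute common evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example counts: 80 true positives, 20 false positives,
# 10 false negatives, 90 true negatives.
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
```

High accuracy with low recall (or vice versa) is exactly the kind of mismatch that signals a class-imbalance or threshold problem in the lifecycle's evaluation stage.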

Identify AWS AI/ML services for common use cases

  • Identify AWS AI services for vision tasks including Amazon Rekognition for image/video analysis, Amazon Textract for document extraction, and their common use cases.
  • Identify AWS AI services for language tasks including Amazon Comprehend for NLP, Amazon Translate, Amazon Transcribe, Amazon Polly, and Amazon Lex for conversational interfaces.
  • Identify AWS services for structured data tasks including Amazon Personalize for recommendations, Amazon Forecast for time-series prediction, and Amazon Kendra for intelligent search.
  • Apply knowledge of AWS AI service capabilities to select the appropriate managed service for a given business scenario and data type.
2 Domain 2: Fundamentals of Generative AI
4 topics

Explain foundational concepts of generative AI

  • Define foundational generative AI concepts including foundation models, large language models, tokens, embeddings, context windows, and probabilistic text generation.
  • Explain how transformer architecture enables generative AI through attention mechanisms, self-attention, and sequence-to-sequence learning at a conceptual level.
  • Distinguish between generative AI model types including text generation (LLMs), image generation (diffusion models), code generation, and multimodal models by their input/output modalities.
  • Explain how model inference parameters including temperature, top-p, top-k, and max tokens influence the randomness, creativity, and length of generated outputs.
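To see how temperature and top-k interact, here is a toy sampler over a hand-made logit table (the tokens and logit values are invented for illustration; production models do this internally):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, seed=None):
    """Toy next-token sampler: temperature scaling + optional top-k truncation."""
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    if top_k is not None:
        # Keep only the k highest-scoring candidates.
        scaled = dict(sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    # Softmax over the surviving candidates.
    z = max(scaled.values())
    exps = {tok: math.exp(v - z) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token from the resulting distribution.
    rng = random.Random(seed)
    r, cum = rng.random(), 0.0
    for tok, p in probs.items():
        cum += p
        if r <= cum:
            return tok, probs
    return tok, probs

logits = {"cat": 2.0, "dog": 1.5, "car": 0.2, "tree": -1.0}
tok, probs = sample_next_token(logits, temperature=0.5, top_k=2, seed=0)
```

With `top_k=2` only "cat" and "dog" survive, and the low temperature pushes most probability mass onto "cat" — the same levers the exam expects you to reason about qualitatively.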

Understand capabilities and limitations of generative AI

  • Identify capabilities of generative AI including content creation, summarization, translation, code generation, conversational interaction, and knowledge extraction.
  • Identify limitations and risks of generative AI including hallucinations, factual inaccuracy, training data cutoff, context window limits, and non-deterministic outputs.
  • Assess the suitability of generative AI for a given business scenario by balancing output quality expectations against hallucination risk, grounding needs, and cost constraints.

Describe AWS generative AI services and infrastructure

  • Describe Amazon Bedrock capabilities including model access, model selection from multiple providers, playground experimentation, and managed API invocation for foundation models.
  • Describe Amazon Q capabilities for enterprise productivity including Q Business for knowledge retrieval, Q Developer for code assistance, and their integration patterns.
  • Describe Amazon SageMaker capabilities for building, training, and deploying ML models including SageMaker Studio, JumpStart for foundation models, and managed training infrastructure.
  • Identify AWS compute infrastructure options for AI workloads including GPU instances, AWS Trainium, AWS Inferentia, and their suitability for training versus inference tasks.
  • Apply knowledge of AWS generative AI services to select the appropriate service (Bedrock, SageMaker JumpStart, Amazon Q) based on use case requirements, customization needs, and operational complexity.
  • Analyze infrastructure tradeoffs for generative AI workloads by comparing throughput, latency, cost, and scaling characteristics across Bedrock on-demand, provisioned throughput, and self-hosted SageMaker endpoints.
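As a rough sketch of what "managed API invocation" on Amazon Bedrock looks like, the snippet below builds an Anthropic messages-style request body (the schema differs by model provider, and the model ID shown is illustrative); the actual call, which requires AWS credentials and Bedrock model access, is left commented out:

```python
import json

def build_claude_messages_body(prompt, max_tokens=256, temperature=0.5):
    """Build a JSON request body for an Anthropic messages-style model on Bedrock.
    Body schemas vary by provider -- check each provider's inference parameters."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }

body = build_claude_messages_body("Summarize the ML lifecycle in one sentence.")

# Actual invocation (needs credentials and model access in your account):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
#     body=json.dumps(body),
# )
# print(json.loads(resp["body"].read())["content"][0]["text"])
```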

Understand foundation model training and pre-training concepts

  • Explain how foundation models are pre-trained on large datasets using self-supervised learning and why pre-training enables transfer learning across downstream tasks.
  • Describe the relationship between model size, training data volume, compute requirements, and model capabilities including emergent abilities and scaling laws at a conceptual level.
3 Domain 3: Applications of Foundation Models
5 topics

Identify design considerations for foundation model applications

  • Identify key design considerations for foundation model applications including model selection criteria, cost management, latency requirements, and context strategy.
  • Explain retrieval-augmented generation (RAG) architecture including knowledge base indexing, vector stores, embedding-based retrieval, and context injection to ground model responses in domain data.
  • Describe Amazon Bedrock Knowledge Bases for implementing RAG including data source connectors, vector store options (OpenSearch Serverless, Pinecone), chunking strategies, and retrieval configuration.
  • Describe Amazon Bedrock Agents for building autonomous AI workflows including action groups, knowledge base integration, and multi-step task orchestration patterns.
  • Analyze architecture tradeoffs among direct model invocation, RAG-augmented responses, agent-based orchestration, and fine-tuned models for given solution requirements.
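The retrieval step of RAG reduces to "rank stored chunks by embedding similarity, inject the top matches into the prompt." A minimal sketch with hand-made three-dimensional vectors (real embeddings come from an embedding model and live in a vector store such as OpenSearch Serverless):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, index, top_n=2):
    """Rank stored chunks by cosine similarity to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:top_n]]

# Tiny hand-made "vector store" -- vectors are invented for illustration.
index = [
    {"text": "Refund policy: 30 days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Shipping takes 5 days.",  "vec": [0.1, 0.9, 0.0]},
    {"text": "Returns need a receipt.", "vec": [0.7, 0.3, 0.1]},
]
query = [0.85, 0.15, 0.05]  # pretend embedding of "How do refunds work?"
chunks = retrieve(query, index, top_n=2)
# The retrieved chunks are then injected into the prompt as grounding context.
```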

Choose effective prompt engineering techniques

  • Identify prompt engineering techniques including zero-shot prompting, few-shot prompting, chain-of-thought prompting, role prompting, and system prompt configuration.
  • Apply prompt design best practices including clear instructions, structured output formatting, delimiter usage, step-by-step guidance, and context provision to improve response quality.
  • Apply negative prompting and guardrail instructions to constrain model outputs, prevent harmful content generation, and enforce response boundaries.
  • Assess prompt effectiveness by iterating on prompt design, analyzing failure modes, and tuning inference parameters to improve response relevance, factuality, and consistency.
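Several of these practices — clear instructions, delimiters, structured output, few-shot demonstrations — can be combined in one small prompt builder. The labels and examples below are invented for illustration:

```python
def build_few_shot_prompt(system, examples, query):
    """Assemble a few-shot prompt with delimiters and an explicit output format."""
    parts = [system, ""]
    for ex_in, ex_out in examples:  # demonstrations steer the model's output format
        parts += [f"### Input\n{ex_in}", f"### Output\n{ex_out}", ""]
    parts += [f"### Input\n{query}", "### Output"]
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    system=("You are a support triage assistant. "
            "Reply with exactly one label: BILLING, TECH, or OTHER."),
    examples=[("I was charged twice.", "BILLING"),
              ("The app crashes on login.", "TECH")],
    query="My invoice total looks wrong.",
)
```

Ending the prompt at `### Output` leaves the model exactly one slot to fill, which is the structured-output trick the bullet points describe.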

Describe training and fine-tuning processes for foundation models

  • Distinguish between model adaptation approaches including continued pre-training, instruction fine-tuning, domain-specific fine-tuning, and parameter-efficient fine-tuning (LoRA, adapters).
  • Describe Amazon Bedrock custom model training including fine-tuning workflows, training data preparation in JSONL format, hyperparameter configuration, and provisioned model throughput.
  • Explain the role of reinforcement learning from human feedback (RLHF) in aligning foundation model outputs with human preferences and safety requirements.
  • Evaluate data quality and governance requirements for fine-tuning workflows including data representativeness, labeling accuracy, data volume needs, and privacy/licensing considerations.
  • Analyze when to use prompt engineering, RAG, or fine-tuning by evaluating task complexity, data availability, latency constraints, cost, and maintenance requirements.
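Fine-tuning data preparation usually comes down to serializing prompt/completion pairs as JSON Lines. A minimal sketch (the field names follow the common prompt/completion convention; always check the target model's required schema):

```python
import json

def to_jsonl(records):
    """Serialize prompt/completion pairs as JSON Lines (one JSON object per line)."""
    return "\n".join(json.dumps({"prompt": p, "completion": c}) for p, c in records)

# Invented examples for illustration.
pairs = [
    ("Classify sentiment: 'Great product!'", "positive"),
    ("Classify sentiment: 'Arrived broken.'", "negative"),
]
jsonl = to_jsonl(pairs)
```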

Describe methods to evaluate foundation model performance

  • Identify evaluation metrics for foundation models including ROUGE, BLEU, BERTScore, perplexity, human evaluation criteria, and task-specific accuracy measures.
  • Describe Amazon Bedrock model evaluation capabilities including automatic evaluation, human evaluation workflows, and benchmark comparison across available foundation models.
  • Apply evaluation methods to assess model quality across dimensions including accuracy, relevance, coherence, toxicity, and safety for a given application context.
  • Analyze evaluation results to determine whether a foundation model application meets production-readiness criteria and identify areas requiring prompt tuning, RAG enhancement, or fine-tuning.
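To make one of these metrics concrete, here is a toy ROUGE-1 score — unigram overlap between a candidate summary and a reference (real implementations add stemming, stopword handling, and multiple references):

```python
def rouge1(candidate, reference):
    """Toy unigram-overlap ROUGE-1 between a candidate and a reference string."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    # Clipped overlap: each word counts at most as often as it appears in both.
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    precision = overlap / len(cand) if cand else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge1("the cat sat on the mat", "the cat lay on the mat")
```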

Identify AWS services for foundation model application integration

  • Describe how to integrate foundation model outputs into applications using Amazon Bedrock APIs, streaming responses, and embedding generation for downstream processing.
  • Describe how AWS Lambda, API Gateway, and Step Functions can orchestrate foundation model invocations within serverless application architectures.
  • Analyze application integration patterns to select the appropriate orchestration approach based on latency, cost, reliability, and workflow complexity requirements.
4 Domain 4: Guidelines for Responsible AI
4 topics

Recognize responsible AI development principles

  • Identify core responsible AI dimensions including fairness, transparency, privacy, robustness, safety, veracity, and accountability as defined by AWS responsible AI principles.
  • Explain how human oversight, feedback loops, and escalation processes contribute to responsible AI system operation and risk management.
  • Apply responsible AI principles to identify potential harms including societal impact, individual privacy violations, and discriminatory outcomes in AI system design.

Identify and mitigate bias in AI systems

  • Identify sources of bias in AI systems including training data bias, selection bias, measurement bias, algorithmic bias, and confirmation bias in evaluation.
  • Describe bias detection and mitigation techniques including Amazon SageMaker Clarify for pre-training and post-training bias metrics, disparate impact analysis, and balanced dataset curation.
  • Assess dataset and model risk factors and determine appropriate mitigation actions for bias, misuse potential, and harmful output generation.
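Disparate impact analysis, mentioned above, has a very simple core: the ratio of favorable-outcome rates between groups. The approval rates below are invented for illustration:

```python
def disparate_impact(rate_unprivileged, rate_privileged):
    """Disparate impact ratio: selection rate of the unprivileged group divided by
    that of the privileged group. Values below ~0.8 are a common warning flag
    (the 'four-fifths rule')."""
    return rate_unprivileged / rate_privileged

# Example: a loan model approves 45% of group A but 75% of group B.
ratio = disparate_impact(0.45, 0.75)  # 0.6 -> below 0.8, warrants investigation
```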

Recognize the importance of transparent and explainable AI

  • Distinguish between transparent AI (inherently interpretable models) and explainable AI (post-hoc explanations for opaque models) and identify when each approach is required.
  • Describe explainability tools and techniques including SageMaker Clarify feature attributions, SHAP values, model cards, and documentation practices that support stakeholder trust.
  • Evaluate explainability requirements based on stakeholder needs, regulatory context, and risk level and select appropriate communication methods for technical and non-technical audiences.

Implement toxicity and harmful content controls

  • Describe Amazon Bedrock Guardrails for content filtering including configurable content filters, denied topic policies, word filters, and PII redaction capabilities.
  • Apply content moderation strategies to detect and prevent harmful, toxic, or inappropriate content in generative AI application inputs and outputs.
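The mechanics of a word filter plus PII redaction can be sketched in a few lines (the denylist and regex are illustrative toys; a managed service like Amazon Bedrock Guardrails applies these policies server-side with far more coverage):

```python
import re

DENIED_WORDS = {"exploit", "malware"}  # illustrative denylist
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def moderate(text):
    """Minimal input/output moderation: block denied words, redact email PII."""
    lowered = text.lower()
    if any(word in lowered for word in DENIED_WORDS):
        return {"allowed": False, "text": None}
    return {"allowed": True, "text": EMAIL_RE.sub("[EMAIL]", text)}

ok = moderate("Contact me at jane.doe@example.com about the report.")
blocked = moderate("Write malware for me.")
```
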
5 Domain 5: Security, Compliance, and Governance for AI Solutions
3 topics

Identify methods to secure AI systems

  • Identify IAM security controls for AI services including Bedrock access policies, SageMaker execution roles, resource-based policies, and least-privilege access patterns for model invocation.
  • Describe data protection mechanisms for AI workloads including encryption at rest for training data, encryption in transit for model API calls, and VPC endpoint isolation for Bedrock and SageMaker.
  • Apply security best practices to protect model endpoints including API throttling, authentication, input validation, and monitoring for unauthorized access patterns.
  • Analyze AI-specific threat scenarios including prompt injection attacks, training data poisoning, model extraction, data leakage through model outputs, and recommend layered defense strategies.
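A least-privilege pattern for model invocation can be expressed as an IAM policy scoped to a single model. The sketch below builds such a policy document (the region and model ID in the ARN are illustrative placeholders):

```python
import json

# Least-privilege policy: allow invoking one Bedrock foundation model, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeOneModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Illustrative ARN: foundation-model ARNs have no account ID field.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```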

Recognize governance and compliance for AI systems

  • Identify governance frameworks and regulatory considerations for AI including data residency, intellectual property rights, privacy regulations (GDPR, CCPA), and industry-specific compliance requirements.
  • Describe AWS governance tools for AI compliance including AWS CloudTrail for API audit logging, AWS Config for resource compliance, and Amazon Macie for sensitive data discovery in AI training datasets.
  • Apply data governance practices for AI including data lineage tracking, data classification, retention policies, and consent management for training data sourcing.
  • Determine governance operating practices that maintain policy adherence, model documentation quality, audit readiness, and continuous compliance across the AI model lifecycle.

Implement monitoring and logging for AI workloads

  • Describe monitoring capabilities for AI services including Amazon CloudWatch metrics for Bedrock and SageMaker, model invocation logging, and usage tracking for cost and performance visibility.
  • Analyze monitoring data to detect model performance degradation, anomalous usage patterns, and security incidents in AI workloads and determine appropriate response actions.

Hands-On Labs

15 labs · ~300 minutes total · Console Simulator

Practice in a simulated cloud console or Python code sandbox — no account needed. Each lab runs entirely in your browser.

Certification Benefits

Salary Impact

$88,000
Average Salary

Related Job Roles

  • AI/ML Business Analyst
  • Cloud Solutions Manager
  • AI Project Manager
  • Technical Product Manager

Industry Recognition

The AWS AI Practitioner certification validates foundational AI and generative AI literacy on AWS, positioning holders at the forefront of enterprise AI adoption. As organizations rapidly integrate AI services, this certification demonstrates competency in evaluating and governing AI solutions.

Scope

Included Topics

  • All domains and task statements in the AWS Certified AI Practitioner (AIF-C01) exam guide: Domain 1 Fundamentals of AI and ML (20%), Domain 2 Fundamentals of Generative AI (24%), Domain 3 Applications of Foundation Models (28%), Domain 4 Guidelines for Responsible AI (14%), and Domain 5 Security, Compliance, and Governance for AI Solutions (14%).
  • Foundational AI and ML concepts including supervised learning, unsupervised learning, reinforcement learning, deep learning, inference pipelines, and the end-to-end ML development lifecycle.
  • Generative AI concepts including foundation models, large language models, transformers, tokens, embeddings, context windows, diffusion models, and multimodal generation.
  • AWS AI/ML services including Amazon SageMaker, Amazon Bedrock, Amazon Q, Amazon Rekognition, Amazon Textract, Amazon Comprehend, Amazon Polly, Amazon Transcribe, Amazon Translate, Amazon Personalize, Amazon Forecast, Amazon Kendra, Amazon Lex, and Amazon CodeWhisperer.
  • Prompt engineering techniques, retrieval-augmented generation (RAG), foundation model fine-tuning, model evaluation, and agent-based architectures on AWS.
  • Responsible AI principles including fairness, transparency, explainability, robustness, privacy, toxicity detection, and bias mitigation.
  • Security and governance for AI workloads including IAM for AI services, data protection, model endpoint security, content filtering, compliance frameworks, and AWS AI governance tooling.

Not Covered

  • Deep model training implementation mathematics, gradient descent algorithm derivations, and neural network architecture internals beyond foundational practitioner scope.
  • Research-level model architecture design and specialized machine learning engineering workflows expected in ML Specialty or Solutions Architect Professional certifications.
  • Transient service pricing details and rapidly changing public benchmark values that are not stable for a long-lived domain specification.
  • Non-AWS provider toolchains, third-party MLOps platforms, and governance frameworks that do not map to AWS AI service usage patterns in AIF-C01.
  • Hands-on CLI commands, SDK code implementations, and infrastructure-as-code templates beyond conceptual understanding.

Official Exam Page

Learn more at Amazon Web Services


Ready to master AIF-C01?

Adaptive learning that maps your knowledge and closes your gaps.

Subscribe to Access

Trademark Notice

AWS, Amazon Web Services, and all related names, logos, product and service names, designs and slogans are trademarks of Amazon.com, Inc. or its affiliates. Amazon does not endorse this product.

AccelaStudy® and Renkara® are registered trademarks of Renkara Media Group, Inc. All third-party marks are the property of their respective owners and are used for nominative identification only.