This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.
AI Technical Practitioner
Students learn AI and ML basics, Cisco AI infrastructure, model lifecycle, security, ethics, and networking, gaining practical skills to deploy and manage enterprise AI workloads on Cisco platforms.
Who Should Take This
This course is designed for mid-level IT professionals, such as network engineers, data-center operators, and MLOps specialists with 1–2 years of experience, who want to integrate AI solutions into Cisco environments, validate their expertise, expand their knowledge of Cisco AI services, and advance their careers in AI-driven networking.
What's Covered
All domains in the Cisco AI Technical Practitioner (810-110) exam: AI and ML Fundamentals, Cisco AI Infrastructure, AI Model Lifecycle, AI Security and Ethics, AI Networking, and AI Platforms and Solutions
What's Included in AccelaStudy® AI
Course Outline
60 learning goals
Domain 1: AI and ML Fundamentals
3 topics
Machine learning paradigms and algorithms
- Identify supervised, unsupervised, semi-supervised, and reinforcement learning paradigms by their training data requirements, feedback mechanisms, and representative algorithm families.
- Describe common ML algorithms including linear regression, decision trees, random forests, support vector machines, k-means clustering, and principal component analysis by their use cases and data type suitability.
- Explain model evaluation metrics including accuracy, precision, recall, F1 score, AUC-ROC, and confusion matrices to assess classification model performance for enterprise AI applications.
- Analyze overfitting and underfitting patterns using training and validation loss curves to recommend appropriate regularization, data augmentation, or model complexity adjustments.
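The evaluation metrics above all derive from confusion-matrix counts. A minimal sketch, using illustrative counts rather than any real model's results:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example: 80 true positives, 20 false positives, 10 false negatives, 90 true negatives
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
print(m)  # precision 0.8, recall ~0.889: the model misses positives less than it over-predicts them
```

Comparing precision against recall this way is what makes the F1 score useful when classes are imbalanced and accuracy alone misleads.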
Deep learning and neural network architectures
- Describe neural network fundamental components including neurons, layers, activation functions, loss functions, and backpropagation as the basis for deep learning model training.
- Identify convolutional neural network architectures and their application to computer vision tasks including image classification, object detection, and semantic segmentation in enterprise AI deployments.
- Describe transformer architecture including self-attention mechanisms, positional encoding, encoder-decoder structures, and their role as the foundation for large language models and generative AI applications.
- Explain how model parameter count, context window size, and quantization levels affect GPU memory requirements and inference latency for deploying large language models on Cisco AI infrastructure.
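The memory relationship in the last bullet reduces to a common rule of thumb: weights occupy roughly parameter count times bytes per parameter, with extra headroom for activations and KV cache. The 1.2× overhead factor below is an assumption for illustration, not a Cisco sizing guideline:

```python
def estimate_serving_gib(params_billions: float, bytes_per_param: float,
                         overhead: float = 1.2) -> float:
    """Rough GPU memory (GiB) to serve a model: weights plus a fudge
    factor for activations and KV cache. Illustrative rule of thumb only."""
    weights_gib = params_billions * 1e9 * bytes_per_param / 2**30
    return weights_gib * overhead

# A 70B-parameter model: FP16 (2 bytes/param) vs. INT4 quantized (0.5 bytes/param)
fp16_gib = estimate_serving_gib(70, 2.0)
int4_gib = estimate_serving_gib(70, 0.5)
print(round(fp16_gib, 1), round(int4_gib, 1))
```

The 4× gap between the two estimates is why quantization level is often the deciding factor in whether a model fits on a single GPU or must be sharded.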
Generative AI and foundation models
- Identify generative AI model categories including large language models, diffusion models, variational autoencoders, and generative adversarial networks by their generation modality and training approach.
- Explain retrieval-augmented generation architecture including document chunking, embedding generation, vector database storage, semantic search, and context injection to ground LLM responses in enterprise knowledge bases.
- Analyze the trade-offs between fine-tuning foundation models, prompt engineering, and RAG approaches for adapting pre-trained models to enterprise-specific tasks considering cost, accuracy, and data requirements.
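The semantic-search step of the RAG pipeline described above can be sketched as a cosine-similarity ranking over chunk embeddings. The toy three-dimensional vectors and chunk texts below stand in for a real embedding model and vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    """Rank document chunks by embedding similarity and return the top k."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

chunks = [
    {"text": "VXLAN overlay configuration", "vec": [0.9, 0.1, 0.0]},
    {"text": "GPU cluster cooling guide",   "vec": [0.1, 0.8, 0.3]},
    {"text": "EVPN fabric troubleshooting", "vec": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.1, 0.0], chunks, k=2)
print([c["text"] for c in top])
# The top-k chunk texts would then be injected into the LLM prompt as grounding context.
```

In production the chunks come from document splitting, the vectors from an embedding model, and the search from a vector database's approximate-nearest-neighbor index rather than a full sort.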
Domain 2: Cisco AI Infrastructure
2 topics
GPU compute infrastructure
- Identify GPU server configurations for AI workloads including multi-GPU server form factors, GPU memory specifications, NVLink interconnect topologies, and NVSwitch architectures used in Cisco AI compute platforms.
- Explain GPU compute cluster sizing considerations including GPU count per training job, memory bandwidth requirements, interconnect bandwidth needs, and storage throughput for different AI workload profiles.
- Describe Cisco UCS X-Series and C-Series GPU server platforms including modular GPU accelerator options, PCIe Gen5 bandwidth capabilities, and cooling requirements for high-density AI compute deployments.
- Analyze GPU utilization metrics, memory allocation patterns, and thermal throttling indicators to identify compute bottlenecks and optimize resource allocation in multi-tenant AI infrastructure environments.
AI storage infrastructure
- Identify storage architecture requirements for AI workloads including high-throughput parallel file systems, object storage for training datasets, and low-latency NVMe storage for model checkpointing.
- Explain data pipeline architecture for AI training including data lake ingestion, ETL processing, feature store management, and training data distribution across GPU nodes to minimize I/O bottlenecks.
- Analyze storage throughput and IOPS requirements for different AI workload phases including data preprocessing, distributed training with checkpointing, and high-concurrency inference serving.
- Describe NVIDIA GPUDirect Storage technology, which enables direct data transfer between NVMe storage and GPU memory, bypassing CPU staging buffers to accelerate data loading for AI training pipelines on Cisco compute platforms.
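The checkpoint-driven storage sizing in the third bullet reduces to simple arithmetic: total checkpoint bytes divided by the pause window you can tolerate. The 3× optimizer-state multiplier (roughly what Adam adds on top of FP16 weights) and the 60-second window below are illustrative assumptions:

```python
def checkpoint_write_gbps(params_billions: float, bytes_per_param: int,
                          optimizer_multiplier: float, window_s: float) -> float:
    """Aggregate write throughput (GB/s) needed to land one full checkpoint
    within a given training-pause window. Illustrative numbers only."""
    total_gb = params_billions * bytes_per_param * optimizer_multiplier
    return total_gb / window_s

# 70B params, FP16 weights, ~3x for optimizer state, 60-second checkpoint window
rate = checkpoint_write_gbps(70, 2, 3.0, 60)
print(round(rate, 1))  # → 7.0 (GB/s of sustained write throughput)
```

Numbers like this are why checkpointing favors low-latency NVMe tiers: every second the write takes is a second the GPU cluster sits idle.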
Domain 3: AI Model Lifecycle
3 topics
Training and fine-tuning
- Describe distributed training strategies including data parallelism, model parallelism, pipeline parallelism, and tensor parallelism by their GPU memory and communication requirements for large model training.
- Explain fine-tuning techniques including full fine-tuning, LoRA, QLoRA, and adapter methods for customizing foundation models on domain-specific data while managing GPU memory and compute costs.
- Apply hyperparameter optimization techniques including learning rate scheduling, batch size tuning, and early stopping criteria to improve training efficiency and model convergence on GPU cluster infrastructure.
- Analyze training job performance metrics including GPU utilization, communication overhead, gradient synchronization latency, and throughput samples per second to identify distributed training bottlenecks.
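The learning rate scheduling mentioned above is commonly implemented as linear warmup followed by cosine decay. The step counts and peak rate below are placeholder hyperparameters, not recommended values:

```python
import math

def lr_schedule(step: int, max_steps: int, peak_lr: float, warmup_steps: int) -> float:
    """Linear warmup to peak_lr, then cosine decay to zero over the remaining steps."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

# LR ramps up during warmup, peaks, then decays toward zero
print(lr_schedule(0, 1000, 3e-4, 100))     # early warmup: small
print(lr_schedule(99, 1000, 3e-4, 100))    # end of warmup: peak
print(lr_schedule(1000, 1000, 3e-4, 100))  # end of training: ~0
```

Warmup avoids destabilizing large-batch training in its first steps, while the cosine tail lets the model settle into a minimum, which is why this pairing shows up in most large-model training recipes.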
Model deployment and inference
- Describe model serving architectures including batch inference, real-time inference, streaming inference, and edge inference by their latency characteristics and infrastructure requirements.
- Explain model optimization techniques for inference including quantization to INT8/INT4, pruning, knowledge distillation, and ONNX Runtime optimization to reduce latency and GPU memory consumption.
- Apply containerized model deployment using Docker, Kubernetes, and GPU scheduling to package, distribute, and scale inference services across AI infrastructure with resource isolation and autoscaling.
- Analyze inference performance characteristics including tokens per second, time to first token, concurrent request handling, and GPU memory fragmentation to right-size inference infrastructure for production SLAs.
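The inference metrics in the last bullet relate simply: end-to-end latency is time to first token plus per-token decode time, and throughput is output tokens over total time. The timing values below are illustrative, not benchmarks of any platform:

```python
def inference_metrics(prompt_tokens: int, output_tokens: int,
                      ttft_ms: float, ms_per_output_token: float) -> dict:
    """Derive total latency and decode throughput from time-to-first-token
    and per-token decode time for one request."""
    decode_ms = (output_tokens - 1) * ms_per_output_token  # first token covered by TTFT
    total_ms = ttft_ms + decode_ms
    tokens_per_s = output_tokens / (total_ms / 1000)
    return {"total_ms": total_ms, "tokens_per_s": tokens_per_s}

# 512-token prompt, 256 output tokens, 180 ms TTFT, 25 ms per decode step
m = inference_metrics(512, 256, 180.0, 25.0)
print(round(m["total_ms"]), round(m["tokens_per_s"], 1))
```

Separating TTFT (dominated by prompt prefill) from decode throughput matters because the two stress the GPU differently: prefill is compute-bound, decode is memory-bandwidth-bound.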
MLOps and model governance
- Describe MLOps pipeline components including experiment tracking, model versioning, automated testing, CI/CD for models, and model registry management for reproducible AI development workflows.
- Explain model monitoring practices including data drift detection, prediction quality degradation, feature distribution shifts, and automated retraining triggers for maintaining model accuracy in production.
- Analyze model governance requirements including audit trail completeness, model lineage tracking, approval workflows, and regulatory compliance documentation for enterprise AI model lifecycle management.
Domain 4: AI Security and Ethics
2 topics
AI security threats and defenses
- Identify AI-specific security threats including adversarial examples, data poisoning, model extraction, membership inference, and prompt injection attacks by their attack vectors and potential business impact.
- Explain defense mechanisms against AI attacks including input validation, adversarial training, differential privacy, model watermarking, and output filtering for protecting enterprise AI deployments.
- Apply infrastructure security controls for AI environments including network segmentation of GPU clusters, API authentication for inference endpoints, secrets management for model access, and data encryption at rest and in transit.
- Analyze AI system attack surfaces by mapping data ingestion points, model serving endpoints, training pipeline access controls, and supply chain dependencies to prioritize security hardening efforts.
- Explain supply chain security practices for AI models including provenance verification for pre-trained weights, hash validation of downloaded model artifacts, and secure model registry access controls to prevent tampered model deployment.
AI ethics and responsible AI
- Identify responsible AI principles including fairness, transparency, accountability, privacy, and safety as defined by Cisco and industry AI ethics frameworks for enterprise AI deployments.
- Explain bias detection and mitigation techniques including training data auditing, demographic parity testing, equalized odds evaluation, and model debiasing methods for ensuring fair AI outcomes.
- Explain model explainability techniques including SHAP values, LIME, attention visualization, and feature importance analysis for providing interpretable AI decisions to stakeholders and auditors.
- Analyze data privacy requirements for AI workloads including GDPR and CCPA compliance implications for training data collection, model memorization risks, and right-to-erasure challenges in enterprise AI systems.
Domain 5: AI Networking
3 topics
RDMA and high-performance networking
- Describe RDMA technology fundamentals including zero-copy data transfer, kernel bypass, and memory registration as the foundation for high-performance GPU-to-GPU communication in AI training clusters.
- Identify RoCEv2 network requirements including priority flow control, explicit congestion notification, DSCP marking, and lossless Ethernet configuration for reliable RDMA transport over Cisco Nexus fabrics.
- Explain NVIDIA GPUDirect RDMA technology that enables direct memory access between GPU memory and network adapters, bypassing host CPU and system memory to minimize data transfer latency in distributed training.
- Analyze RoCEv2 versus InfiniBand trade-offs for AI cluster interconnects considering bandwidth, latency, congestion management, multi-tenancy support, and operational complexity on Cisco network infrastructure.
Spine-leaf fabric design for AI clusters
- Describe spine-leaf network topology design for AI GPU clusters including non-blocking fabric requirements, equal-cost multipath load balancing, and oversubscription ratio considerations for all-to-all traffic patterns.
- Explain Cisco Nexus 9000 series switch capabilities for AI networking including 400G and 800G port density, deep buffer options, adaptive routing, and VXLAN-EVPN fabric support for GPU cluster connectivity.
- Apply AI cluster network sizing calculations to determine spine switch count, leaf switch radix, and total bisection bandwidth needed for a given GPU count and collective communication pattern.
- Analyze network congestion patterns in AI training workloads including all-reduce collective operations, gradient synchronization bursts, and incast scenarios to recommend appropriate buffer sizing and congestion management.
AI cluster network operations
- Describe network health monitoring for AI clusters including fabric utilization tracking, link error detection, ECN marking rates, and PFC pause frame analysis for maintaining training job reliability.
- Explain network failure impact on distributed training including job checkpoint recovery, gradient staleness from slow nodes, and ring topology degradation when links or switches fail in the GPU cluster fabric.
- Analyze network telemetry data from AI cluster fabrics to correlate training job performance degradation with specific network events including link flaps, congestion episodes, and path asymmetry.
Domain 6: AI Platforms and Solutions
3 topics
Cisco AI platforms
- Describe Cisco Hypershield architecture including distributed security enforcement, AI-native threat detection, autonomous segmentation, and self-qualifying updates for protecting AI infrastructure and workloads.
- Describe Cisco Nexus AI fabric solutions including pre-validated reference architectures for GPU cluster networking with specific switch configurations, cabling plans, and QoS policies optimized for AI workloads.
- Explain Cisco Intersight integration for AI infrastructure management including server lifecycle automation, firmware compliance, workload optimization, and unified visibility across hybrid AI compute environments.
- Describe Cisco Validated Designs for AI including reference architectures that combine Nexus switching, UCS compute, and third-party GPU and storage components into tested and documented deployment blueprints for enterprise AI infrastructure.
AI ecosystem integration
- Describe the NVIDIA AI Enterprise software stack including CUDA, cuDNN, TensorRT, and Triton Inference Server as they integrate with Cisco compute and networking platforms for end-to-end AI workflows.
- Explain Kubernetes-based AI platform integration including GPU device plugins, node selectors, resource quotas, and network policies for running AI training and inference workloads on Cisco infrastructure.
- Analyze AI infrastructure reference architecture options including on-premises GPU clusters, hybrid cloud AI platforms, and managed AI services to recommend deployment models based on data sovereignty, cost, and performance requirements.
AI use cases and business value
- Identify enterprise AI use cases including network anomaly detection, IT operations automation, customer service chatbots, predictive maintenance, and document processing by their infrastructure requirements and expected business outcomes.
- Explain how Cisco AI-powered networking features including ThousandEyes AI-driven diagnostics, Cisco AI Assistant for Webex, and Meraki AI-based analytics leverage AI models to automate network and collaboration operations.
- Analyze total cost of ownership for enterprise AI infrastructure including GPU compute costs, power and cooling overhead, network fabric investment, storage capacity, and operational staffing to build business cases for AI projects.
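The TCO analysis in the last bullet can be modeled as amortized capex plus facility power (IT power scaled by PUE) plus staffing. Every dollar figure below is a placeholder assumption, not a Cisco price or quote:

```python
def annual_tco_usd(gpu_servers: int, server_cost: float, amortize_years: int,
                   kw_per_server: float, pue: float, usd_per_kwh: float,
                   network_storage_capex: float, staff_cost: float) -> float:
    """Rough annual TCO for an on-prem AI cluster. PUE scales IT power draw
    to total facility power (cooling, distribution losses). All inputs
    are illustrative assumptions."""
    capex_per_year = (gpu_servers * server_cost + network_storage_capex) / amortize_years
    facility_kw = gpu_servers * kw_per_server * pue
    power_per_year = facility_kw * 24 * 365 * usd_per_kwh
    return capex_per_year + power_per_year + staff_cost

# 8 GPU servers at $300k amortized over 4 years, 10 kW each, PUE 1.4,
# $0.10/kWh, $1M fabric+storage capex, $400k/yr operational staffing
tco = annual_tco_usd(8, 300_000, 4, 10.0, 1.4, 0.10, 1_000_000, 400_000)
print(round(tco))
```

Even with placeholder inputs, the structure shows why power and cooling overhead (the PUE term) and staffing often rival the amortized hardware cost in a business case.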
Scope
Included Topics
- All domains in the Cisco AI Technical Practitioner (810-110) exam: AI and ML Fundamentals (20%), Cisco AI Infrastructure (20%), AI Model Lifecycle (20%), AI Security and Ethics (15%), AI Networking (15%), and AI Platforms and Solutions (10%).
- Associate-level AI and ML knowledge including supervised and unsupervised learning, neural network architectures, transformer models, training workflows, fine-tuning techniques, inference optimization, and model deployment patterns.
- Key Cisco AI infrastructure topics: GPU cluster networking, RDMA over Converged Ethernet (RoCEv2), InfiniBand interconnects, GPUDirect RDMA, spine-leaf fabric design for AI/ML workloads, Nexus 9000 series for AI clusters, and Cisco Hypershield.
- AI model lifecycle management including data preparation, distributed training, hyperparameter optimization, model evaluation, containerized deployment, inference serving, monitoring, and model governance.
- AI security and ethical considerations including adversarial attacks, model poisoning, data privacy, bias detection, explainability requirements, and responsible AI frameworks relevant to enterprise AI deployments.
Not Covered
- Advanced mathematical proofs and research-level deep learning theory beyond practical understanding needed for infrastructure and deployment decisions.
- Non-Cisco network equipment configuration and third-party AI platform administration details outside Cisco ecosystem integration points.
- Custom ASIC and chip-level GPU architecture internals beyond understanding performance characteristics relevant to infrastructure sizing.
- Academic AI research methodologies, paper writing conventions, and experimental design processes not applicable to enterprise AI practitioner work.
- Current GPU pricing, Cisco product SKUs, and rapidly changing cloud instance costs not durable for a long-lived domain specification.
Official Exam Page
Learn more at Cisco Systems