This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.
CCNP Data Center Core
The CCNP Data Center Core (DCCOR 350-601) course teaches networking, compute, storage networking, automation, and security for Cisco data center solutions, enabling engineers to design, implement, and troubleshoot enterprise-grade infrastructures.
Who Should Take This
Data center engineers with three to five years of experience who design, deploy, or manage Cisco UCS, Nexus, and storage platforms should enroll. The certification validates expertise in networking, compute, storage, automation, and security, positioning them for senior technical roles and advanced project responsibilities.
What's Covered
All domains in the Cisco CCNP Data Center Core (DCCOR 350-601) exam: Networking, Compute, Storage Networking, Automation, and Security
What's Included in AccelaStudy® AI
Course Outline
60 learning goals
Domain 1: Networking
6 topics
NX-OS switching fundamentals
- Configure Cisco Nexus switch VLANs, trunking, and Layer 2 port channels using NX-OS CLI and feature sets to establish the data center access layer switching foundation.
- Implement NX-OS platform features including VDCs, CoPP, and NX-OS ISSU to provide device virtualization, control plane protection, and non-disruptive software upgrades on Nexus platforms.
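As a quick orientation, the access-layer foundation described above might look like the following NX-OS sketch (VLAN IDs, interface ranges, and the port-channel number are illustrative; exact syntax varies slightly across Nexus platforms):

```
feature lacp

vlan 10
  name app-servers

! LACP port channel carrying the trunk
interface port-channel10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20

interface Ethernet1/1-2
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  channel-group 10 mode active
```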
vPC and multi-chassis link aggregation
- Configure vPC domain, peer link, and peer keepalive between Nexus switch pairs to provide active-active redundancy for downstream servers and switches without STP blocked ports.
- Analyze vPC failure scenarios including peer link failure, keepalive failure, and orphan ports to predict forwarding behavior and assess data plane impact during component outages.
- Implement vPC with Layer 3 routing protocols, configuring peer-gateway and IP ARP synchronization to support routed traffic forwarding through the vPC domain without suboptimal paths.
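The vPC goals above can be sketched as a minimal NX-OS configuration on one peer (addresses, domain ID, and port-channel numbers are hypothetical):

```
feature vpc

vpc domain 100
  peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management
  peer-gateway            ! forward frames addressed to the peer's gateway MAC
  ip arp synchronize      ! sync ARP state for faster convergence

! Peer link between the two vPC peers
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Downstream dual-homed port channel
interface port-channel20
  switchport mode trunk
  vpc 20
```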
Data center interconnect
- Configure OTV to extend Layer 2 domains between geographically separated data centers over a Layer 3 transport while preserving STP isolation and failure domain boundaries.
- Compare OTV, VXLAN DCI, and traditional dark fiber approaches to recommend the optimal data center interconnect technology for given latency, bandwidth, and stretch requirements.
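A multicast-mode OTV edge device of the kind described above might be sketched as follows (site identifier, group addresses, and the extended VLAN range are illustrative):

```
feature otv

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1       ! multicast-mode control plane
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```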
VXLAN and EVPN fabric
- Implement VXLAN data plane encapsulation on Nexus switches to create overlay networks that decouple tenant segments from physical topology constraints in leaf-spine architectures.
- Configure BGP EVPN as the VXLAN control plane on Nexus leaf and spine switches to distribute MAC/IP host information and enable distributed anycast gateway for optimal traffic forwarding.
- Analyze VXLAN/EVPN route type advertisements (Type-2 MAC/IP, Type-5 IP Prefix) to evaluate host reachability propagation and troubleshoot overlay forwarding failures.
- Design a multi-site VXLAN/EVPN fabric with border gateways and multi-site extensions to interconnect data center pods while maintaining independent failure domains and control plane isolation.
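The VXLAN/EVPN goals above come together on a leaf switch roughly as follows (VNI, ASN, VLAN, and neighbor address are hypothetical, and feature sets differ by platform and release):

```
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100

! VTEP interface; BGP EVPN distributes host reachability
interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10100
    ingress-replication protocol bgp
  no shutdown

router bgp 65001
  neighbor 10.255.255.1
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
```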
Data center routing
- Configure OSPF and IS-IS as underlay routing protocols in leaf-spine data center fabrics, tuning timers and BFD for sub-second failure detection and convergence.
- Implement eBGP underlay routing in a Clos fabric topology using unique ASN per leaf to achieve equal-cost multipath forwarding across all available spine paths.
- Evaluate underlay routing protocol selection between OSPF, IS-IS, and eBGP for data center fabrics based on scalability, convergence speed, and operational complexity tradeoffs.
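The OSPF-with-BFD underlay described in the first goal might look like this minimal sketch on a fabric point-to-point link (process tag, interface, addressing, and BFD timers are illustrative):

```
feature ospf
feature bfd

router ospf UNDERLAY
  bfd                    ! enable BFD on all OSPF interfaces

interface Ethernet1/49
  no switchport
  ip address 10.1.1.0/31
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  bfd interval 250 min_rx 250 multiplier 3
```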
Data center QoS
- Configure NX-OS QoS policies with system-level queuing, network-qos, and class maps to provide lossless Ethernet for storage traffic and priority treatment for latency-sensitive applications.
- Analyze data center QoS requirements for FCoE lossless classes, iSCSI traffic, and VM migration to assess whether network congestion policies adequately protect storage and compute workloads.
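A lossless class for FCoE of the kind described above might be sketched in Nexus 5000-style syntax as follows (Nexus 9000 platforms use different predefined network-qos templates, so treat this purely as an illustration of the marking/no-drop pattern):

```
class-map type qos match-all FCOE-CLASS
  match cos 3
policy-map type qos FCOE-MARKING
  class FCOE-CLASS
    set qos-group 1

class-map type network-qos FCOE-NQ
  match qos-group 1
policy-map type network-qos FCOE-NQ-POLICY
  class type network-qos FCOE-NQ
    pause no-drop          ! PFC lossless treatment
    mtu 2158               ! accommodates the FCoE frame size

system qos
  service-policy type qos input FCOE-MARKING
  service-policy type network-qos FCOE-NQ-POLICY
```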
Domain 2: Compute
5 topics
UCS architecture and management
- Implement Cisco UCS domains with fabric interconnects, IOM connections, and chassis discovery to establish the unified compute infrastructure for blade and rack server deployment.
- Configure UCS Manager organizations, roles, and locales to implement multi-tenant administrative domains with role-based access control across compute pools and policies.
- Apply the Cisco Intersight cloud management platform to monitor, manage, and automate firmware updates across UCS domains, C-Series standalone servers, and HyperFlex clusters.
- Evaluate UCS power and thermal policies including power capping, fan speed control, and N+1 power redundancy to assess infrastructure resilience and optimize energy efficiency across the compute domain.
Service profiles and templates
- Configure UCS service profiles with identity pools (UUID, MAC, WWNN, WWPN) and policies (boot, BIOS, network, storage) to define portable server identities independent of physical hardware.
- Implement UCS service profile templates with updating and initial template binding to standardize server configurations and enable rapid provisioning across large blade populations.
- Design a service profile template hierarchy that balances standardization with flexibility, planning pool sizing, policy inheritance, and template versioning for enterprise-scale UCS deployments.
Server boot and connectivity
- Configure UCS boot policies for SAN boot, local disk boot, iSCSI boot, and PXE boot to define server startup sequences that align with storage architecture and OS deployment strategies.
- Implement UCS LAN and SAN connectivity policies including vNICs, vHBAs, QoS system classes, and fabric failover to ensure redundant multi-path connectivity between servers and network/storage fabrics.
HyperFlex and hyperconverged infrastructure
- Deploy Cisco HyperFlex clusters with HX Data Platform to converge compute, storage, and networking into a single managed infrastructure for virtualized workloads.
- Evaluate HyperFlex cluster sizing including node count, disk configuration, and replication factor to assess capacity, performance, and resiliency for given workload profiles.
- Compare traditional three-tier (separate compute/network/storage) architectures with HyperFlex hyperconverged deployments to recommend the optimal infrastructure model for specific workload requirements.
Firmware and lifecycle management
- Implement UCS firmware management using host firmware packages, management firmware packages, and maintenance policies to orchestrate non-disruptive upgrades across chassis and servers.
- Analyze UCS firmware compatibility matrices and upgrade path dependencies to plan safe infrastructure firmware rollouts that avoid version incompatibilities between FI, IOM, and blade components.
Domain 3: Storage Networking
4 topics
Fibre Channel fundamentals
- Configure Cisco MDS switches with VSANs, FC interfaces, and domain IDs to build isolated Fibre Channel fabrics that provide multi-tenant SAN segmentation without physical separation.
- Implement FC zoning on MDS switches using device-alias-based zoning and smart zoning to control host-to-storage access and reduce RSCN disruption scope across SAN fabrics.
- Analyze FC login sequences (FLOGI, PLOGI, PRLI) and name server registrations to troubleshoot host-to-storage connectivity failures and FCID allocation issues on MDS fabrics.
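The VSAN and device-alias zoning goals above can be sketched on an MDS switch roughly as follows (VSAN number, pWWNs, and alias names are hypothetical; `device-alias commit` applies only in enhanced device-alias mode):

```
vsan database
  vsan 10 name PROD-A
  vsan 10 interface fc1/1

device-alias database
  device-alias name host1-hba0 pwwn 21:00:00:24:ff:aa:bb:01
  device-alias name array-ctl0 pwwn 50:06:01:60:aa:bb:cc:01
device-alias commit

! Single-initiator / single-target zone
zone name host1-array vsan 10
  member device-alias host1-hba0
  member device-alias array-ctl0

zoneset name ZS-PROD-A vsan 10
  member host1-array

zoneset activate name ZS-PROD-A vsan 10
```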
FCoE and unified fabric
- Configure FCoE on Nexus switches with VFC interfaces, DCBX, PFC, and ETS to converge Fibre Channel and Ethernet traffic over a single lossless Ethernet infrastructure.
- Evaluate FCoE versus native FC deployment tradeoffs including performance, complexity, and troubleshooting to recommend the optimal SAN connectivity model for new data center builds.
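The VFC binding described above might look like this minimal sketch (VLAN/VSAN numbers and the interface pairing are illustrative; DCBX, PFC, and ETS are negotiated via the platform's QoS policies):

```
feature fcoe

! Dedicated VLAN mapped to the FCoE VSAN
vlan 1010
  fcoe vsan 10

interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,1010

! Virtual FC interface bound to the converged Ethernet port
interface vfc110
  bind interface Ethernet1/10
  switchport trunk allowed vsan 10
  no shutdown

vsan database
  vsan 10 interface vfc110
```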
iSCSI and IP-based storage
- Configure iSCSI initiators and targets on UCS servers with dedicated storage VLANs, jumbo frames, and multipathing to provide IP-based block storage access without Fibre Channel infrastructure.
- Compare FC, FCoE, iSCSI, and NVMe-oF storage protocols to recommend the optimal protocol selection based on performance requirements, existing infrastructure, and total cost of ownership.
- Implement NVMe over Fabrics (NVMe-oF) with RoCEv2 on Nexus switches to provide ultra-low-latency storage access for high-performance workloads requiring microsecond-scale I/O latency.
SAN fabric design and management
- Implement MDS inter-VSAN routing and FC port channeling to provide controlled inter-fabric connectivity and link aggregation for high-bandwidth storage traffic between SAN islands.
- Design a dual-fabric SAN architecture with A/B fabric isolation, predictable oversubscription ratios, and ISL trunk planning to meet enterprise storage availability and performance requirements.
Domain 4: Automation
3 topics
Cisco ACI fabric architecture
- Deploy Cisco ACI fabric with APIC controllers, spine/leaf topology discovery, and infrastructure VLAN to establish the policy-driven data center networking foundation.
- Configure ACI tenants, VRFs, bridge domains, and endpoint groups with contracts and filters to implement application-centric network policies for multi-tenant workload segmentation.
- Implement ACI external connectivity using L3Out with BGP/OSPF peering and route control policies to integrate the ACI fabric with existing enterprise and data center networks.
- Analyze ACI policy model object relationships and endpoint learning behavior to troubleshoot contract enforcement, endpoint mobility, and inter-EPG connectivity issues.
- Design an ACI multi-pod or multi-site architecture to extend policy-driven fabric across data center locations while maintaining independent control planes and consistent tenant policies.
Data center network management
- Apply Cisco Nexus Dashboard Fabric Controller (NDFC/DCNM) to manage VXLAN/EVPN fabrics with automated underlay provisioning, overlay network creation, and fabric health monitoring.
- Compare ACI APIC-managed and NDFC-managed (standalone NX-OS) data center fabric approaches to recommend the optimal management model based on organizational requirements and operational maturity.
Programmability and APIs
- Implement Python scripts using NX-API REST and CLI interfaces to automate NX-OS device configuration, operational data collection, and compliance validation across Nexus switch fleets.
- Configure Ansible playbooks with NX-OS and ACI modules to deploy standardized data center configurations, tenant policies, and VXLAN networks through declarative infrastructure-as-code workflows.
- Apply ACI REST API and Python SDK (cobra/acitoolkit) to programmatically create tenants, configure EPGs, and manage contracts for automated application onboarding workflows.
- Design a data center automation strategy integrating controller APIs, configuration management tools, and CI/CD pipelines to achieve repeatable, auditable infrastructure provisioning at scale.
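The NX-API goal above can be made concrete with a short Python sketch that builds the JSON-RPC payload NX-API's `cli` method expects. Transport is deliberately left out; posting the serialized body over HTTPS to the switch's `/ins` endpoint with a `Content-Type` of `application/json-rpc` is the usual next step, and the command list here is purely illustrative.

```python
import json

def nxapi_cli_payload(commands):
    """Build a JSON-RPC 2.0 payload for the NX-API 'cli' method.

    Each CLI command becomes one request object; the switch executes
    them in order and returns one result per request id.
    """
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i,
        }
        for i, cmd in enumerate(commands, start=1)
    ]

# Serialize for an HTTPS POST to the switch's /ins endpoint
payload = nxapi_cli_payload(["show version", "show vlan brief"])
body = json.dumps(payload)
```

Batching several commands into one payload keeps round trips down, and matching result ids back to request ids makes per-command error handling straightforward.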
Domain 5: Security
4 topics
Data center network segmentation
- Configure ACI contracts with subjects, filters, and directives to enforce whitelist security policies between EPGs, controlling east-west traffic within the data center fabric.
- Implement micro-segmentation using ACI uSeg EPGs and intra-EPG isolation to enforce granular access controls between endpoints sharing the same network segment.
- Design a data center zero-trust segmentation strategy using ACI contracts, service graph firewalls, and micro-segmentation to limit lateral threat movement between application tiers.
Data center firewalling
- Implement ACI service graphs with Cisco Firepower or ASA to insert firewall services into traffic flows between EPGs for stateful inspection of inter-tier application traffic.
- Evaluate data center firewall placement strategies including north-south perimeter, east-west inter-tier, and service insertion to assess security coverage and performance impact tradeoffs.
Management plane security
- Configure AAA, RBAC, and management access policies on NX-OS and ACI to secure administrative access to data center infrastructure using TACACS+ and local fallback authentication.
- Implement NX-OS and MDS management plane hardening including SSH key-based authentication, SNMPv3, TLS for NX-API, and CoPP to protect data center devices from unauthorized management access.
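The TACACS+-with-local-fallback pattern above might be sketched on NX-OS as follows (server address, group name, and key are hypothetical):

```
feature tacacs+

tacacs-server host 10.10.10.5 key MySharedSecret
aaa group server tacacs+ TACACS-GRP
  server 10.10.10.5
  use-vrf management
  source-interface mgmt0

! TACACS+ first, local accounts as fallback
aaa authentication login default group TACACS-GRP local
```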
Data center security monitoring
- Implement ACI health score monitoring and fault correlation to detect security policy violations, endpoint anomalies, and contract misconfiguration across the data center fabric.
- Design a data center security operations framework integrating ACI health monitoring, syslog aggregation, and Nexus Dashboard Insights for proactive threat detection and compliance validation.
Scope
Included Topics
- All domains in the Cisco CCNP Data Center Core (DCCOR 350-601) exam: Networking (25%), Compute (25%), Storage Networking (20%), Automation (15%), and Security (15%).
- Data center networking technologies including NX-OS platform features, vPC, FabricPath, OTV, VXLAN with BGP EVPN, data center interconnect, and Cisco Nexus switch families.
- Data center compute technologies including Cisco UCS architecture, B-Series and C-Series servers, HyperFlex hyperconverged infrastructure, service profiles, server boot policies, firmware management, and UCS Manager/Intersight.
- Storage networking technologies including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), iSCSI, Cisco MDS switch configuration, zoning, FLOGI/PLOGI, VSAN, and storage protocols.
- Data center automation including Cisco ACI fabric architecture, APIC controller, tenant/VRF/BD/EPG model, Nexus Dashboard (DCNM/NDFC), Python scripting, Ansible for NX-OS, and REST API integration.
- Data center security including network segmentation with contracts and filters, firewalling in ACI, micro-segmentation, role-based access control, and management plane security.
Not Covered
- Enterprise campus networking technologies (SD-Access, SD-WAN, wireless LAN) that are covered by the ENCOR exam rather than DCCOR.
- Service provider MPLS, segment routing, and carrier ethernet technologies covered by the SPCOR exam.
- Deep storage array administration, SAN fabric design beyond Cisco MDS, and third-party storage vendor specifics not tested on DCCOR.
- Application development, container orchestration internals, and DevOps pipeline depth beyond what is required for data center infrastructure automation.
Official Exam Page
Learn more at Cisco Systems
350-601 is coming soon
Adaptive learning that maps your knowledge and closes your gaps.
Create Free Account to Be Notified