Professional Cloud Architect
Learn to design, plan, and implement robust Google Cloud solutions, covering architecture, infrastructure provisioning, security, compliance, and performance optimization for enterprise and hybrid environments at scale.
Who Should Take This
This certification is intended for senior cloud engineers, solution architects, and technical leads with at least three years of hands-on experience designing, migrating, and managing Google Cloud workloads across hybrid or multi-cloud environments. These professionals seek to validate their expertise, earn the recognized Google Cloud Professional Cloud Architect credential, and lead complex enterprise cloud initiatives.
What's Covered
1. Designing compute, storage, and networking infrastructure; planning migrations; creating solution blueprints that meet business and technical requirements.
2. Configuring network topologies, storage systems, and compute systems; provisioning infrastructure using IaC tools.
3. Designing for identity and access management, regulatory compliance, and security controls including encryption, VPC Service Controls, and organizational policies.
4. Analyzing and defining technical and business processes; developing procedures to ensure reliability of solutions; optimizing performance and cost.
5. Advising development and operations teams; interacting with Google Cloud programmatically and through the console to implement solutions.
6. Monitoring and logging; deploying and managing changes; ensuring operational reliability through SLIs, SLOs, and incident response processes.
Exam Structure
Question Types
- Multiple Choice
- Multiple Select
Scoring Method
Pass/fail. Google does not publish a scaled score or passing percentage.
Delivery Method
Kryterion testing center or online proctored
Prerequisites
None required. Google recommends 3+ years of industry experience, including experience designing and managing solutions on Google Cloud. The Associate Cloud Engineer certification is a useful stepping stone.
Recertification
2 years
What's Included in AccelaStudy® AI
Course Outline
80 learning goals
Domain 1: Designing and Planning a Cloud Solution Architecture
4 topics
Designing a solution infrastructure that meets business requirements
- Design a cost-optimized cloud architecture using committed use discounts, sustained use discounts, preemptible and Spot VMs, and resource right-sizing to meet budget constraints while satisfying performance requirements.
- Design a system and application architecture using microservices decomposition, event-driven patterns with Pub/Sub, and managed services selection to achieve modularity, scalability, and operational efficiency.
- Analyze integration patterns using Cloud Endpoints, Apigee API Management, Pub/Sub messaging, and Cloud Tasks to evaluate coupling tradeoffs, versioning strategies, and throttling controls for heterogeneous system connectivity.
- Analyze data migration options among Transfer Service, Transfer Appliance, Database Migration Service, and Datastream to select the optimal strategy based on data volume, downtime tolerance, and consistency requirements.
- Analyze regulatory and compliance requirements including HIPAA, PCI DSS, and GDPR to determine applicable controls for data residency, audit logging, and access governance aligned with organizational compliance obligations.
- Design networking architectures using VPC design, Cloud Load Balancing, Cloud CDN, and Cloud DNS to optimize traffic routing, minimize latency, and enforce network segmentation aligned with application topology.
- Recommend a solution architecture that balances service-level objectives, error budgets, cost constraints, and business continuity requirements with explicit tradeoff rationale for stakeholder alignment.
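The cost levers named above (committed use discounts, right-sizing) come down to simple arithmetic. A minimal sketch, with hourly rates and discount percentages that are illustrative placeholders rather than current GCP pricing:

```python
# Illustrative cost comparison for a steadily running VM workload.
# All rates and discount percentages are hypothetical placeholders,
# not current GCP pricing.

def monthly_cost(hourly_rate: float, hours: float = 730.0) -> float:
    """On-demand cost of one instance over an average month (~730 h)."""
    return hourly_rate * hours

def committed_use_cost(on_demand_hourly: float, discount: float,
                       hours: float = 730.0) -> float:
    """Committed use discounts trade flexibility for a flat reduction
    on the on-demand rate, in exchange for a 1- or 3-year commitment."""
    return on_demand_hourly * (1.0 - discount) * hours

on_demand  = monthly_cost(0.10)                   # $0.10/h placeholder
one_year   = committed_use_cost(0.10, 0.37)       # ~37% discount (illustrative)
three_year = committed_use_cost(0.10, 0.55)       # ~55% discount (illustrative)
print(f"on-demand ${on_demand:.2f}  1-yr ${one_year:.2f}  3-yr ${three_year:.2f}")
```

The same shape of calculation underpins right-sizing analysis: model the steady-state baseline with commitments and leave spiky demand on on-demand or Spot capacity.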
Designing a solution infrastructure that meets technical requirements
- Design high-availability architectures using regional managed instance groups, multi-zonal GKE clusters, Cloud SQL HA configurations, and Spanner multi-region instances to eliminate single points of failure.
- Design scalable architectures using horizontal autoscaling, Cloud Run concurrency controls, Bigtable node scaling, and BigQuery slot reservations to handle variable workload demands without manual intervention.
- Analyze disaster recovery architectures across cold, warm, and hot standby patterns using cross-region replication, Cloud SQL replicas, and GCS dual-region buckets to evaluate RTO, RPO, and cost tradeoffs.
- Recommend an infrastructure design that balances recovery objectives, scalability headroom, capacity overhead, and cost for workloads with differing criticality tiers and availability requirements.
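The availability math behind "eliminate single points of failure" is worth internalizing: redundant replicas multiply failure probabilities, while serial dependencies multiply availabilities. A short sketch (the 99.5% and 99.9% figures are illustrative, not published SLAs):

```python
def parallel_availability(a: float, n: int) -> float:
    """Availability of n redundant replicas, each with availability a:
    the system is down only when all n are down simultaneously."""
    return 1.0 - (1.0 - a) ** n

def serial_availability(*components: float) -> float:
    """A request path that must traverse every component in sequence."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Two zones at 99.5% each give ~99.9975% for the redundant tier, but a
# 99.9% component in front of them caps the end-to-end figure below that.
two_zone = parallel_availability(0.995, 2)
end_to_end = serial_availability(0.999, two_zone)
print(f"two-zone: {two_zone:.6f}  end-to-end: {end_to_end:.6f}")
```

This is why multi-zonal backends alone do not meet an availability target: every serial hop in the path must be accounted for.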
Designing network, storage, and compute resources
- Design load balancing architectures using global HTTP(S) load balancers, regional network load balancers, internal load balancers, and Traffic Director to distribute traffic across backends with health checking and session affinity.
- Analyze data flow patterns and select storage technologies among Cloud Storage, Cloud SQL, Spanner, Firestore, Bigtable, and BigQuery based on access patterns, consistency models, throughput, and cost characteristics.
- Analyze compute provisioning options among Compute Engine instance families, sole-tenant nodes, GPUs, and TPUs to select the optimal configuration based on workload performance profiles, isolation requirements, and licensing constraints.
- Design container orchestration architectures using GKE with node pools, cluster autoscaler, Workload Identity, and network policies to run microservices workloads with isolation and resource efficiency.
- Recommend an integrated compute, storage, and networking architecture that optimizes resource allocation, data locality, and network throughput for a multi-tier application with heterogeneous workload characteristics.
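The storage-selection reasoning above can be captured as a coarse decision sketch. This is a study aid, not an official Google decision tree; real selections also weigh throughput, cost, and operational maturity:

```python
# Coarse mapping from workload characteristics to a likely GCP data store.
# Deliberately simplified for exam-style reasoning practice.

def suggest_store(data_model: str, global_consistency: bool = False,
                  analytical: bool = False) -> str:
    if data_model == "object":
        return "Cloud Storage"            # unstructured blobs
    if analytical:
        return "BigQuery"                 # scan-heavy SQL analytics
    if data_model == "relational":
        # Spanner when you need horizontal scale with global consistency
        return "Cloud Spanner" if global_consistency else "Cloud SQL"
    if data_model == "document":
        return "Firestore"
    if data_model == "wide-column":
        return "Bigtable"                 # high-throughput key/column access
    raise ValueError(f"unknown data model: {data_model}")

print(suggest_store("relational", global_consistency=True))  # Cloud Spanner
```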
Creating a migration plan
- Evaluate existing on-premises and cloud workloads using the migration framework assess phase to classify applications by migration pattern including rehost, replatform, refactor, repurchase, retire, and retain.
- Recommend a wave-based migration plan using dependency mapping, risk scoring, and business priority ranking to sequence workload groups for phased migration with rollback procedures and success criteria.
- Design migration architectures using Migrate for Compute Engine for VM rehosting, Migrate for GKE for container workloads, and Transfer Service for bulk data movement with validation and cutover controls.
- Design hybrid migration architectures using Anthos for workload portability across on-premises, GCP, and multi-cloud environments with consistent policy enforcement, service mesh integration, and config management.
- Recommend a migration governance strategy that integrates wave sequencing, hybrid workload placement, post-migration validation, and organizational readiness to minimize risk and maximize business value delivery.
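Wave sequencing from dependency mapping is essentially a topological sort in batches. A minimal sketch using the standard library's `graphlib`, with a hypothetical application dependency map (the app names are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each app lists the apps it depends on,
# which should be migrated in an earlier wave.
deps = {
    "web-frontend": {"orders-api", "auth-service"},
    "orders-api":   {"orders-db"},
    "auth-service": {"auth-db"},
    "reporting":    {"orders-db"},
    "orders-db":    set(),
    "auth-db":      set(),
}

def migration_waves(deps):
    """Group apps into waves: every app in a wave depends only on apps
    migrated in earlier waves. Cycles raise CycleError at prepare()."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # all apps whose deps are done
        waves.append(ready)
        ts.done(*ready)
    return waves

for i, wave in enumerate(migration_waves(deps), 1):
    print(f"wave {i}: {wave}")
```

In practice each wave is then re-ordered by risk score and business priority, but the dependency constraint is the hard floor the plan cannot violate.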
Domain 2: Managing and Provisioning a Solution Infrastructure
4 topics
Configuring network topologies
- Configure Shared VPC architectures with host and service projects, subnet-level IAM permissions, and firewall rules to enable centralized network administration across organizational project boundaries.
- Configure VPC peering connections with route exchange, transitive routing limitations, and network address planning to interconnect project networks without overlapping CIDR ranges.
- Configure hybrid connectivity using Cloud VPN with HA VPN tunnels and dynamic routing, and Cloud Interconnect with Dedicated or Partner links for predictable bandwidth and latency to on-premises data centers.
- Analyze multi-region and multi-NIC network topologies to evaluate connectivity requirements, security isolation boundaries, routing complexity, and hybrid integration patterns for enterprise network design.
- Recommend a network topology strategy that integrates Shared VPC, peering, VPN, and Interconnect based on organizational structure, latency requirements, bandwidth needs, and security posture.
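The non-overlapping CIDR requirement for VPC peering is easy to check programmatically. A sketch using the standard library's `ipaddress` module, with made-up example ranges:

```python
import ipaddress
from itertools import combinations

def overlapping_pairs(cidrs):
    """Return every pair of CIDR ranges that overlap; VPC peering
    requires that peered networks share no overlapping subnets."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2)
            if a.overlaps(b)]

subnets = ["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/20", "192.168.0.0/24"]
print(overlapping_pairs(subnets))   # 10.0.128.0/20 sits inside 10.0.0.0/16
```

Running an address-plan audit like this before establishing peering or hybrid connectivity avoids the painful alternative of re-IP-ing live subnets.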
Configuring compute systems
- Configure Compute Engine instances with custom machine types, managed instance groups, instance templates, and health checks to provision scalable and self-healing VM-based workloads.
- Configure GKE clusters with node auto-provisioning, Workload Identity, Binary Authorization, and network policies to run containerized applications with security and resource governance controls.
- Configure Cloud Run services and App Engine applications with revision-based traffic splitting, custom domains, VPC connectors, and concurrency settings for serverless workload deployment.
- Analyze compute platform tradeoffs across Compute Engine, GKE, Cloud Run, and App Engine to evaluate startup latency, scaling granularity, operational overhead, and portability for each workload type.
- Recommend a compute platform strategy that assigns workloads to the optimal compute service based on team maturity, deployment velocity, cost efficiency, and long-term operational sustainability.
Configuring storage systems
- Configure Cloud Storage buckets with storage classes, lifecycle rules, retention policies, versioning, and object-level access controls to manage unstructured data with cost-effective tiering.
- Configure Cloud SQL instances with read replicas, automated backups, point-in-time recovery, and private IP networking to provision managed relational databases for transactional workloads.
- Configure Cloud Spanner instances with regional and multi-regional configurations, interleaved tables, and secondary indexes to provision globally consistent relational databases for mission-critical workloads.
- Analyze storage system tradeoffs across Cloud Storage, Cloud SQL, Spanner, Firestore, Bigtable, and BigQuery to evaluate consistency guarantees, throughput limits, scaling characteristics, and cost profiles for each data store.
- Recommend a multi-database architecture strategy that assigns each data store to the optimal GCP storage service based on access patterns, consistency requirements, cost constraints, and operational maturity.
Configuring flexible infrastructure
- Configure infrastructure-as-code deployments using Terraform with GCP provider resources, modules, state backends, and workspaces for repeatable and version-controlled infrastructure provisioning.
- Configure Cloud Deployment Manager templates with Jinja2 and Python templating, type providers, and runtime configurators for declarative GCP resource management.
- Recommend a provisioning strategy that integrates Terraform, Config Connector, or Deployment Manager based on team capabilities, GitOps maturity, multi-cloud requirements, and long-term maintainability.
Domain 3: Designing for Security and Compliance
2 topics
Designing for security
- Design IAM architectures using custom roles, service accounts, workload identity federation, and organization policies to enforce least-privilege access across projects and services.
- Design a resource hierarchy using organizations, folders, and projects with inherited IAM bindings and organization policy constraints to enforce governance at scale.
- Design data security architectures using encryption at rest with customer-managed encryption keys in Cloud KMS, encryption in transit with TLS, and column-level security in BigQuery for defense-in-depth data protection.
- Analyze key management strategies using Cloud KMS key rings, rotation policies, envelope encryption, and Cloud HSM to evaluate tradeoffs between operational complexity, compliance requirements, and cryptographic control levels.
- Analyze credential and secret lifecycle management using Secret Manager automatic rotation, IAM-based access policies, and version management to evaluate security posture for application credentials and API keys.
- Analyze sensitive data exposure risks using DLP API inspection templates, de-identification transforms, and risk analysis to detect and prioritize PII mitigation actions across data stores.
- Recommend a layered security strategy integrating identity controls, network perimeters, data encryption, and application-level protections with defense-in-depth principles and residual risk treatment.
- Recommend a data protection governance strategy that aligns encryption key management, secret rotation, and DLP scanning policies with organizational risk tolerance and regulatory requirements across all data tiers.
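The envelope-encryption pattern referenced above is simple to sketch: a random data encryption key (DEK) encrypts the payload, and a key encryption key (KEK, held in a KMS) encrypts only the small DEK. This toy uses XOR as a stand-in for a real cipher and an in-memory KEK as a stand-in for Cloud KMS; it must never be used for actual encryption:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; illustrative only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEK = secrets.token_bytes(32)   # stands in for a key held in Cloud KMS

def encrypt(plaintext: bytes):
    dek = secrets.token_bytes(32)            # fresh DEK per object
    ciphertext = xor(plaintext, dek)         # DEK encrypts the data
    wrapped_dek = xor(dek, KEK)              # KEK encrypts only the DEK
    return ciphertext, wrapped_dek           # store both together

def decrypt(ciphertext: bytes, wrapped_dek: bytes) -> bytes:
    dek = xor(wrapped_dek, KEK)              # the KMS "unwrap" step
    return xor(ciphertext, dek)

ct, wrapped = encrypt(b"cardholder record")
assert decrypt(ct, wrapped) == b"cardholder record"
```

The design point: rotating or revoking the KEK governs access to every DEK without re-encrypting the bulk data, which is why key rotation policy applies at the KEK layer.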
Designing for compliance
- Analyze regulatory compliance frameworks including HIPAA, PCI DSS, GDPR, and SOC 2 to determine applicable controls, shared responsibility boundaries, and GCP compliance certifications for solution architecture.
- Design audit logging architectures using Cloud Audit Logs with admin activity, data access, and system event logs exported to Cloud Storage or BigQuery for long-term compliance retention and forensic analysis.
- Design data residency architectures using resource location constraints, organization policies, and regional storage configurations to restrict data processing and storage to specific geographic jurisdictions.
- Design VPC Service Controls perimeters with access levels, ingress and egress policies, and service perimeter bridges to prevent data exfiltration from sensitive GCP resources.
- Recommend a compliance governance strategy that integrates organization policies, VPC Service Controls, audit logging, and continuous monitoring to maintain regulatory compliance posture across evolving workloads.
Domain 4: Analyzing and Optimizing Technical and Business Processes
4 topics
Analyzing and defining technical processes
- Analyze software development lifecycle practices to evaluate source control branching strategies, code review workflows, and release cadence alignment with team velocity and quality objectives.
- Design CI/CD pipeline architectures using Cloud Build triggers, Artifact Registry for container and package management, and Cloud Deploy for delivery pipelines with approval gates and canary deployments.
- Analyze testing and validation coverage by evaluating unit test, integration test, load test, and security scan strategies within CI/CD pipelines to identify gaps in defect detection before production deployment.
- Design API management architectures using Apigee with API proxies, developer portals, rate limiting, and analytics to expose, secure, and monitor service interfaces for internal and external consumers.
- Design batch data processing architectures using Dataproc with autoscaling clusters, Dataflow batch pipelines, and BigQuery scheduled queries to transform and load data at scale with cost-efficient scheduling.
- Design streaming data processing architectures using Pub/Sub for ingestion, Dataflow streaming pipelines with windowing and watermarks, and BigQuery streaming inserts for real-time analytics workloads.
- Analyze data processing requirements to evaluate batch versus streaming architecture tradeoffs including latency, throughput, exactly-once semantics, windowing complexity, and cost for data pipeline workloads.
- Recommend a deployment strategy selecting among blue-green, canary, and rolling approaches with Cloud Deploy progression rules and rollback automation aligned to release risk tolerance and service-level objectives.
- Recommend a holistic technical process strategy integrating CI/CD maturity, testing automation, API governance, and data pipeline orchestration to improve engineering velocity and production stability.
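A canary progression rule of the kind described above can be sketched in a few lines. The traffic steps and error threshold here are invented examples, not Cloud Deploy defaults:

```python
# Sketch of a canary progression rule: advance the new revision's traffic
# share step by step, rolling back if its observed error rate exceeds
# the budget at any step. Steps and threshold are illustrative.

STEPS = [5, 25, 50, 100]     # percent of traffic on the new revision
ERROR_BUDGET = 0.01          # max tolerated error rate at each step

def progress_canary(observed_error_rates):
    """observed_error_rates[i] is the error rate measured at STEPS[i].
    Returns (final_traffic_percent, outcome)."""
    traffic = 0
    for step, err in zip(STEPS, observed_error_rates):
        if err > ERROR_BUDGET:
            return 0, "rolled_back"   # shift all traffic back to stable
        traffic = step
    return traffic, "promoted" if traffic == 100 else "paused"

print(progress_canary([0.002, 0.004, 0.003, 0.005]))  # (100, 'promoted')
print(progress_canary([0.002, 0.03]))                 # (0, 'rolled_back')
```

Tying the threshold to the service's error budget, rather than an arbitrary number, is what aligns the deployment strategy with the SLOs mentioned above.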
Analyzing and defining business processes
- Analyze stakeholder requirements including executive sponsors, development teams, operations staff, and compliance officers to identify conflicting priorities and establish architecture decision criteria.
- Design change management processes for cloud adoption including communication plans, training programs, pilot rollouts, and feedback loops to minimize organizational resistance and accelerate adoption.
- Analyze team structure and skills gaps to evaluate organizational models including cloud center of excellence, embedded SRE, and platform engineering patterns that support cloud-native operational maturity.
- Analyze customer success measurement approaches using SLIs, SLOs, and business KPIs to evaluate how cloud architecture decisions deliver value to business stakeholders and identify improvement opportunities.
- Recommend a business process transformation strategy that aligns cloud adoption with organizational capabilities, change readiness, and measurable business outcomes across multiple stakeholder groups.
Developing procedures to ensure reliability of solutions in production
- Design monitoring architectures using Cloud Monitoring with custom metrics, dashboards, alerting policies, and uptime checks to provide comprehensive visibility into application and infrastructure health.
- Design centralized logging architectures using Cloud Logging with log sinks, log-based metrics, exclusion filters, and log routers to aggregate and analyze operational data across projects and services.
- Analyze application performance using Error Reporting for exception grouping, Cloud Trace for distributed latency analysis, and Cloud Profiler for hotspot identification to diagnose bottlenecks in production systems.
- Analyze SLI, SLO, and SLA frameworks by evaluating appropriate indicators for availability, latency, and throughput to determine measurable objectives and error budget policies for reliability governance.
- Analyze incident response effectiveness by evaluating on-call rotations, escalation paths, communication templates, and severity classification to identify improvements in mean time to detection and recovery.
- Analyze incident post-mortem data to identify systemic reliability issues, create action items with ownership and deadlines, and establish blameless post-mortem culture for continuous reliability improvement.
- Recommend a production reliability strategy integrating SLO-based alerting, error budget governance, chaos engineering validation, and incident learning loops to drive sustained reliability improvements.
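The error-budget arithmetic behind SLO governance is compact enough to sketch directly:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in a rolling window for an availability SLO.
    E.g. 99.9% over 30 days leaves about 43.2 minutes of budget."""
    return (1.0 - slo) * window_days * 24 * 60

def burn_rate(observed_error_rate: float, slo: float) -> float:
    """How fast the budget is being consumed: 1.0 means the budget
    lasts exactly the full window; 5.0 exhausts it in a fifth of it."""
    return observed_error_rate / (1.0 - slo)

print(f"{error_budget_minutes(0.999):.1f} min/month at 99.9%")
print(f"burn rate: {burn_rate(0.005, 0.999):.1f}x")
```

Burn-rate alerting (paging on a high multiple sustained over a short window, ticketing on a low multiple over a long window) is the standard way to turn these two numbers into actionable alerts.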
Optimizing resources and managing operations
- Analyze cost optimization opportunities using Recommender API insights, committed use discount analysis, sustained use discount patterns, and resource labeling to identify waste and improve cost allocation visibility.
- Analyze capacity planning requirements using historical utilization data, forecasting models, and quota management to prevent resource exhaustion and ensure headroom for traffic spikes.
- Analyze tradeoffs between managed services and self-managed infrastructure to evaluate control, cost, operational burden, and team expertise requirements for each workload component.
- Design autoscaling architectures using managed instance group autoscalers, GKE horizontal and vertical pod autoscaling, and Cloud Run automatic scaling to match resource capacity to demand patterns efficiently.
- Analyze resource utilization patterns using Cloud Monitoring metrics, idle resource identification, and instance type benchmarks to determine right-sizing opportunities that reduce waste without compromising performance.
- Recommend a cost governance strategy that integrates FinOps practices, billing account structure, budget alerts, and engineering ownership to drive measurable financial accountability across project teams.
- Recommend a holistic operations optimization strategy integrating cost governance, capacity planning, autoscaling, right-sizing, and managed service adoption to achieve operational efficiency at enterprise scale.
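Right-sizing logic of the kind described above reduces to picking the smallest type whose capacity covers observed p95 usage plus headroom. The machine specs below follow GCP's e2 family shapes, but the prices are hypothetical placeholders:

```python
# Right-sizing sketch: choose the cheapest machine type that covers
# observed p95 utilization plus a headroom margin. Prices are
# placeholders; the vCPU/memory shapes follow the e2 family.

MACHINE_TYPES = [             # (name, vCPUs, memory GiB, $/h) by price
    ("e2-small",      2,  2, 0.017),
    ("e2-medium",     2,  4, 0.034),
    ("e2-standard-4", 4, 16, 0.134),
    ("e2-standard-8", 8, 32, 0.268),
]

def right_size(p95_cpus: float, p95_mem_gib: float,
               headroom: float = 0.3):
    """Smallest type with capacity >= p95 usage * (1 + headroom)."""
    need_cpu = p95_cpus * (1 + headroom)
    need_mem = p95_mem_gib * (1 + headroom)
    for name, cpus, mem, _price in MACHINE_TYPES:  # sorted by price
        if cpus >= need_cpu and mem >= need_mem:
            return name
    return None   # nothing fits: scale out rather than up

print(right_size(1.2, 6.0))   # needs 1.56 vCPU / 7.8 GiB with headroom
```

Recommender API insights automate this analysis across a fleet, but understanding the underlying headroom tradeoff is what the exam probes.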
Hands-On Labs
Practice in a simulated cloud console or Python code sandbox — no account needed. Each lab runs entirely in your browser.
Certification Benefits
Salary Impact
Related Job Roles
Industry Recognition
Google Cloud certifications are highly valued in data-driven and AI-focused organizations. The Professional Cloud Architect is consistently ranked among the highest-paying IT certifications globally by industry salary surveys, reflecting strong demand for GCP architecture expertise.
Scope
Included Topics
- All domains and task statements in the Google Cloud Professional Cloud Architect certification exam guide: Domain 1 Designing and Planning a Cloud Solution Architecture (24%), Domain 2 Managing and Provisioning a Solution Infrastructure (15%), Domain 3 Designing for Security and Compliance (18%), and Domain 4 Analyzing and Optimizing Technical and Business Processes (43%).
- Advanced architecture decisions for cloud solution design, infrastructure provisioning, security governance, compliance enforcement, and technical and business process optimization on Google Cloud Platform.
- Scenario-driven architectural tradeoff analysis integrating reliability, security, performance, cost optimization, and operational excellence across GCP managed services and hybrid environments.
- Key GCP services for professional-level architecture: Compute Engine, GKE, Cloud Run, App Engine, Cloud Functions, Cloud Storage, Cloud SQL, Cloud Spanner, Firestore, Bigtable, BigQuery, Pub/Sub, Dataflow, Dataproc, Cloud Composer, VPC, Shared VPC, Cloud VPN, Cloud Interconnect, Cloud Load Balancing, Cloud CDN, Cloud DNS, Cloud Armor, IAM, Resource Manager, Cloud KMS, Secret Manager, DLP API, VPC Service Controls, Cloud Monitoring, Cloud Logging, Error Reporting, Cloud Trace, Cloud Debugger, Cloud Deploy, Cloud Build, Artifact Registry, Terraform, Cloud Deployment Manager, Config Connector, Migrate for Compute Engine, Migrate for GKE, Transfer Service, Anthos, and Traffic Director.
Not Covered
- Low-level implementation coding detail, CLI command syntax, and hands-on scripting depth not required for architecture decision-making in the Professional Cloud Architect exam.
- Deeply specialized machine learning model engineering, data science pipeline tuning, and niche ML domain implementations that fall under the Professional Machine Learning Engineer certification.
- Current region-specific price points, temporary promotional pricing, and other rapidly changing commercial details not stable for enduring architecture specifications.
- Vendor-neutral cloud strategy content that does not map to GCP architecture choices and Professional Cloud Architect task statements.
Official Exam Page
Learn more at Google Cloud
Ready to master PCA?
Adaptive learning that maps your knowledge and closes your gaps.
Subscribe to Access