Coming Soon
Availability date to be announced

This course is in active development. Preview the scope below and create a free account to be notified the moment it goes live.


CCDAK Kafka Developer

The course teaches developers the core concepts of Apache Kafka, including producer/consumer APIs, Kafka Streams, Connect, schema management, and observability, enabling them to build reliable, scalable streaming applications.

Who Should Take This

This course is for software engineers, backend developers, and data engineers with at least one year of experience writing Java, Scala, or Python services who want to adopt event-driven architectures. It prepares them to earn the Confluent Certified Developer for Apache Kafka credential, which demonstrates proficiency in producer/consumer code, stream processing, connector configuration, and operational monitoring.

What's Included in AccelaStudy® AI

Adaptive Knowledge Graph
Practice Questions
Lesson Modules
Console Simulator Labs
Exam Tips & Strategy
20 Activity Formats

Course Outline

60 learning goals
1 Kafka Fundamentals
1 topic

Architecture and Core Concepts

  • Describe the Apache Kafka architecture including brokers, clusters, topics, partitions, segments, and the role of ZooKeeper and KRaft controllers
  • Explain topic partitioning strategies and describe how partition count, replication factor, and partition assignment affect throughput and fault tolerance
  • Describe the Kafka replication protocol including in-sync replicas (ISR), leader election, unclean leader election, and the minimum ISR configuration
  • Explain Kafka log storage including log segments, log compaction, retention policies (time-based and size-based), and the impact of cleanup.policy settings
  • Describe the Kafka consumer group protocol including group coordinator, consumer group rebalancing, partition assignment strategies, and static group membership
  • Analyze the impact of different acks settings (0, 1, all) and min.insync.replicas on producer durability guarantees and throughput trade-offs
  • Compare at-most-once, at-least-once, and exactly-once delivery semantics in Kafka and describe the configuration requirements for each guarantee level
  • Describe Kafka broker configuration parameters including num.partitions, default.replication.factor, log.retention.hours, and message.max.bytes
  • Describe the KRaft consensus protocol and explain how it eliminates the ZooKeeper dependency for cluster metadata management
  • Analyze partition count planning and recommend the optimal number of partitions based on throughput targets, consumer parallelism, and cluster resource constraints
  • Describe Kafka message structure including key, value, headers, timestamp, and offset and explain how each field is used in message routing and processing
  • Explain Kafka topic configuration parameters including max.message.bytes, retention.ms, segment.bytes, and cleanup.policy and their impact on storage and performance
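
As a taste of the material, the interplay between acks and min.insync.replicas described above can be sketched as a small decision model. This is a conceptual simulation, not Kafka client code; the function name and return strings are invented for illustration:

```python
def produce_outcome(acks: str, isr_count: int, replication_factor: int,
                    min_insync_replicas: int) -> str:
    """Model whether a produce request is acknowledged, given the
    current in-sync replica (ISR) count. Conceptual only."""
    if acks == "0":
        # Fire-and-forget: the producer never waits, so it "succeeds"
        # even if the write is later lost.
        return "acked (no durability guarantee)"
    if acks == "1":
        # Only the partition leader must persist the record.
        return "acked (leader only; lost if leader fails before replication)"
    if acks == "all":
        # The broker rejects the write when the ISR has shrunk below
        # min.insync.replicas, trading availability for durability.
        if isr_count < min_insync_replicas:
            return "rejected (NotEnoughReplicas)"
        return f"acked (durable on {isr_count} of {replication_factor} replicas)"
    raise ValueError(f"unknown acks setting: {acks}")

# With replication factor 3 and min.insync.replicas=2, losing two
# replicas blocks writes instead of silently reducing durability:
print(produce_outcome("all", isr_count=1, replication_factor=3,
                      min_insync_replicas=2))  # rejected (NotEnoughReplicas)
```

The asymmetry is the point of the exam objective: acks=0 and acks=1 never reject, they just weaken the guarantee, while acks=all converts an ISR shortfall into a visible producer error.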
2 Application Development
3 topics

Producer Development

  • Configure Kafka producers including bootstrap.servers, key.serializer, value.serializer, acks, retries, batch.size, and linger.ms for optimal performance
  • Implement custom partitioners and describe how the default partitioner assigns records to partitions using murmur2 key hashing and, for records without keys, sticky partitioning (which replaced round-robin in Kafka 2.4)
  • Configure idempotent producers and transactional producers to achieve exactly-once semantics with enable.idempotence and transactional.id settings
  • Implement producer interceptors and callbacks to monitor delivery status, handle errors, and collect metrics for production monitoring
  • Analyze producer batching and compression trade-offs and recommend optimal batch.size, linger.ms, and compression.type settings for different throughput and latency requirements
  • Implement error handling and retry strategies for Kafka producers including handling retriable and non-retriable exceptions and configuring delivery.timeout.ms
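
A producer tuned along the lines above might start from a configuration like the following. The property names are the real Kafka producer configuration keys; the values and broker hosts are illustrative starting points, not universal recommendations:

```python
# Kafka producer configuration sketch. Values are starting points to
# tune against measured throughput and latency, not prescriptions.
producer_config = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # placeholder hosts
    # Serializer classes as named by the Java client:
    "key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "acks": "all",                  # wait for all in-sync replicas
    "enable.idempotence": True,     # broker-side dedupe; requires acks=all
    "retries": 2147483647,          # retry retriable errors until...
    "delivery.timeout.ms": 120000,  # ...this overall deadline expires
    "linger.ms": 20,                # wait up to 20 ms to fill a batch
    "batch.size": 65536,            # 64 KiB batches amortize request overhead
    "compression.type": "lz4",      # low CPU cost, solid ratio
}
```

Raising linger.ms and batch.size trades a little latency for better batching and compression; the course covers how to pick these per workload.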

Consumer Development

  • Configure Kafka consumers including group.id, auto.offset.reset, enable.auto.commit, max.poll.records, and session.timeout.ms for reliable message consumption
  • Implement manual offset management using commitSync and commitAsync and describe strategies for handling consumer failures and reprocessing scenarios
  • Implement consumer rebalance listeners to handle partition assignment and revocation for graceful consumer group membership changes
  • Analyze consumer lag patterns and recommend configuration adjustments for max.poll.interval.ms, max.poll.records, and fetch.min.bytes to optimize throughput
  • Implement consumer seek operations to reset consumer position for reprocessing scenarios including seekToBeginning, seekToEnd, and offset-based seeks
  • Describe cooperative versus eager consumer rebalancing protocols and evaluate the impact of incremental cooperative rebalancing on application availability
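
The manual-commit, at-least-once pattern above can be illustrated with a stub in place of a real consumer. StubConsumer and run are invented for this sketch; a real application would call the Kafka client's poll and commitSync:

```python
class StubConsumer:
    """Stand-in for a Kafka consumer: replays a fixed partition log and
    remembers the last committed offset, like __consumer_offsets does."""
    def __init__(self, records):
        self._records = records
        self.committed = 0  # offset a restarted consumer would resume from

    def poll(self, position):
        return self._records[position:position + 2]  # max.poll.records = 2

    def commit_sync(self, offset):
        self.committed = offset  # synchronous commit: safe, adds latency

def run(consumer, process):
    """At-least-once loop: process first, commit after. A crash between
    the two steps causes reprocessing on restart, never message loss."""
    position = consumer.committed
    while batch := consumer.poll(position):
        for record in batch:
            process(record)
            position += 1
        consumer.commit_sync(position)  # commit only after processing
    return position

seen = []
consumer = StubConsumer(["a", "b", "c"])
run(consumer, seen.append)
print(seen, consumer.committed)  # ['a', 'b', 'c'] 3
```

Flipping the order (commit before process) turns the same loop into at-most-once; the course contrasts both against enable.auto.commit.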

Serialization and Schema Registry

  • Implement custom serializers and deserializers for complex data types and describe the Avro, Protobuf, and JSON Schema serialization formats
  • Configure the Confluent Schema Registry including subject naming strategies, compatibility modes (backward, forward, full, none), and schema evolution rules
  • Analyze schema evolution scenarios and recommend the appropriate compatibility mode to allow consumer and producer upgrades without breaking changes
  • Implement Avro schemas with complex types including records, enums, arrays, maps, and unions for structured message serialization
  • Describe the Schema Registry REST API and implement programmatic schema registration, retrieval, and compatibility checking
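
Backward compatibility (new reader, old data) hinges on defaults for added fields. The checker below is a deliberately simplified sketch over plain-dict Avro record schemas; Schema Registry's real rules also cover type promotion, unions, and aliases:

```python
def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    """Simplified Avro check: a new reader schema can decode old data
    only if every field it adds carries a default value."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for field in new_schema["fields"]:
        if field["name"] not in old_fields and "default" not in field:
            return False  # reader would have no value for old records
    return True

v1 = {"type": "record", "name": "User",
      "fields": [{"name": "id", "type": "long"}]}
# Adding an optional field with a default preserves BACKWARD compatibility:
v2 = {"type": "record", "name": "User",
      "fields": [{"name": "id", "type": "long"},
                 {"name": "email", "type": ["null", "string"],
                  "default": None}]}
# Adding a required field breaks it:
v3 = {"type": "record", "name": "User",
      "fields": [{"name": "id", "type": "long"},
                 {"name": "age", "type": "int"}]}

print(is_backward_compatible(v1, v2), is_backward_compatible(v1, v3))  # True False
```

The forward and full modes invert or combine this reasoning; the course walks through choosing a mode per subject based on whether producers or consumers upgrade first.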
3 Kafka Streams
1 topic

Stream Processing and Topology

  • Describe the Kafka Streams architecture including stream processing topology, KStream and KTable abstractions, and the relationship between partitions and stream tasks
  • Implement stateless transformations using filter, map, flatMap, branch, merge, and peek operations on KStream objects
  • Implement stateful transformations including count, reduce, and aggregate operations using KGroupedStream and KGroupedTable
  • Configure windowing operations including tumbling, hopping, sliding, and session windows for time-based aggregation of streaming data
  • Implement KStream-KTable and KStream-KStream joins and describe the co-partitioning requirements and join semantics for each join type
  • Describe state stores in Kafka Streams including RocksDB-backed stores, in-memory stores, changelog topics, and standby replicas for fault tolerance
  • Analyze a streaming data processing requirement and recommend the appropriate Kafka Streams topology including transformation chain, state management, and windowing strategy
  • Configure Kafka Streams application properties including application.id, state.dir, num.stream.threads, and processing.guarantee for production deployment
  • Implement interactive queries using the ReadOnlyKeyValueStore and ReadOnlyWindowStore interfaces to serve real-time state from Kafka Streams applications
  • Describe exactly-once semantics in Kafka Streams including processing.guarantee, transaction support, and the impact on application performance
  • Apply the Processor API for low-level stream processing including custom processors, state store access, and punctuation scheduling for advanced use cases
  • Analyze Kafka Streams application scaling and describe how partition-based parallelism, thread allocation, and state store restoration affect throughput and recovery time
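
The stateless-then-stateful chain described above (filter, flatMap, groupBy, count) can be modeled over a plain record stream. These are conceptual stand-ins for KStream operations, with no Kafka Streams runtime involved:

```python
from collections import Counter

def build_topology(records):
    """Mimics: stream.filter(nonEmpty).flatMapValues(split)
                     .groupBy(word).count()
    The Counter plays the role of a KTable backed by a state store."""
    # Stateless stage: drop empty values, split each value into words.
    words = (word.lower()
             for _, value in records if value
             for word in value.split())
    # Stateful stage: groupBy + count materializes per-key state, which
    # Kafka Streams would keep in RocksDB plus a changelog topic.
    return Counter(words)

stream = [("k1", "Kafka streams"), ("k2", ""), ("k3", "kafka connect")]
counts = build_topology(stream)
print(counts["kafka"])  # 2
```

In the real API each partition's records would flow through a stream task in parallel, which is why a repartition happens when groupBy changes the key.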
4 Kafka Connect
1 topic

Connectors and Data Pipelines

  • Describe the Kafka Connect architecture including workers (standalone and distributed), connectors, tasks, and converters
  • Configure source connectors to ingest data from external systems into Kafka topics including JDBC, file, and Debezium CDC connectors
  • Configure sink connectors to export data from Kafka topics to external systems including JDBC, Elasticsearch, S3, and HDFS connectors
  • Configure Single Message Transforms (SMTs) to modify records in-flight including field renaming, filtering, timestamp routing, and value masking
  • Describe connector configuration properties including tasks.max, key.converter, value.converter, and error handling with errors.tolerance and dead letter queues
  • Analyze a data pipeline requirement and recommend the appropriate Kafka Connect deployment mode, connector type, and transformation chain
  • Describe the difference between standalone and distributed Connect deployment modes and evaluate when each is appropriate based on fault tolerance and scalability needs
  • Configure connector monitoring using the Connect REST API for health checks, status inspection, and task restart operations
  • Implement connector offset management and describe how to reset source connector offsets (stored in the Connect offsets topic) and sink connector offsets (stored as consumer group offsets) during recovery scenarios
  • Implement custom Single Message Transforms by extending the Transformation interface for domain-specific record enrichment and filtering
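
A Single Message Transform like the masking and renaming operations above boils down to a record-in, record-out function. The record shape and helper names below are invented for this sketch; a real SMT extends org.apache.kafka.connect.transforms.Transformation in Java:

```python
def mask_fields(record: dict, fields: set, replacement: str = "****") -> dict:
    """Conceptual MaskField-style SMT: blank sensitive values in-flight
    before the record reaches the sink system."""
    value = {k: (replacement if k in fields else v)
             for k, v in record["value"].items()}
    return {**record, "value": value}

def rename_field(record: dict, old: str, new: str) -> dict:
    """Conceptual ReplaceField-style rename."""
    value = {(new if k == old else k): v
             for k, v in record["value"].items()}
    return {**record, "value": value}

record = {"topic": "users", "key": "42",
          "value": {"name": "Ada", "ssn": "123-45-6789"}}
# Chain the functions the way Connect applies its `transforms` list in order:
out = rename_field(mask_fields(record, {"ssn"}), "ssn", "ssn_masked")
print(out["value"])  # {'name': 'Ada', 'ssn_masked': '****'}
```

Because each transform is a pure record function, Connect can compose any number of them per connector without the connector itself changing.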
5 Application Observability
1 topic

Monitoring, Security, and Troubleshooting

  • Describe Kafka monitoring metrics including broker metrics (UnderReplicatedPartitions, ActiveControllerCount), producer metrics (record-send-rate), and consumer metrics (records-lag-max)
  • Configure application logging for Kafka producers, consumers, and Kafka Streams applications using Log4j and SLF4J frameworks
  • Describe Kafka security features including SSL/TLS encryption, SASL authentication mechanisms (PLAIN, SCRAM, GSSAPI), and ACL-based authorization
  • Analyze common Kafka production issues including broker failures, consumer rebalance storms, producer timeouts, and partition skew and recommend remediation steps
  • Describe Kafka performance tuning strategies including partition count optimization, batch size tuning, compression selection (gzip, snappy, lz4, zstd), and JVM configuration
  • Configure Kafka ACLs using the kafka-acls tool to manage topic-level and consumer-group-level authorization for multi-tenant environments
  • Describe Kafka quotas including produce quotas, consume quotas, and request rate quotas for multi-tenant cluster resource management
  • Implement distributed tracing for Kafka applications using record headers to propagate trace context across producer-consumer chains
  • Analyze Kafka cluster health using JMX metrics and recommend alerting thresholds for under-replicated partitions, offline partitions, and consumer group lag
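
Consumer group lag, the quantity behind the records-lag-max metric above, is simply log-end offset minus committed offset per partition. The alerting check below is a sketch; the data shapes and threshold are illustrative, and a real deployment would read these values from the admin API or JMX:

```python
def partition_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    """Per-partition lag = log end offset - committed offset.
    A partition with no committed offset counts as fully lagged."""
    return {tp: end - committed_offsets.get(tp, 0)
            for tp, end in log_end_offsets.items()}

def lag_alerts(log_end_offsets, committed_offsets, threshold):
    """Return (topic, partition) pairs whose lag exceeds the threshold."""
    lags = partition_lag(log_end_offsets, committed_offsets)
    return sorted(tp for tp, lag in lags.items() if lag > threshold)

log_end = {("orders", 0): 1500, ("orders", 1): 900, ("orders", 2): 400}
committed = {("orders", 0): 1480, ("orders", 1): 100, ("orders", 2): 400}
# Partition 1 is 800 records behind; alert when lag exceeds 500:
print(lag_alerts(log_end, committed, threshold=500))  # [('orders', 1)]
```

One lagging partition with healthy siblings often signals partition skew (hot keys) rather than a slow consumer, a distinction the troubleshooting module digs into.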

Scope

Included Topics

  • All topics in the Confluent Certified Developer for Apache Kafka (CCDAK) exam: Kafka Fundamentals (architecture, brokers, topics, partitions, replication), Application Development (producers, consumers, serialization, Schema Registry), Kafka Streams (stateless/stateful transforms, windowing, joins), Kafka Connect (source/sink connectors, SMTs, data pipelines), and Application Observability (monitoring, logging, security, troubleshooting).
  • Exactly-once semantics, idempotent producers, transactional messaging, consumer group management, Avro/Protobuf serialization, schema evolution, KStream/KTable processing, state stores, and Kafka Connect distributed mode deployment.

Not Covered

  • Confluent Cloud specific administration and management console features
  • Kafka cluster operations, broker installation, and infrastructure management
  • ksqlDB beyond awareness of its relationship to Kafka Streams
  • Kafka MirrorMaker and multi-datacenter replication architectures
  • Confluent Control Center configuration and management

CCDAK Kafka Developer is coming soon

Adaptive learning that maps your knowledge and closes your gaps.

Create Free Account to Be Notified

Trademark Notice

Apache®, Apache Kafka®, and the Kafka logo are registered trademarks of the Apache Software Foundation. Confluent® is a registered trademark of Confluent, Inc.

AccelaStudy® and Renkara® are registered trademarks of Renkara Media Group, Inc. All third-party marks are the property of their respective owners and are used for nominative identification only.