
Free Confluent CCDAK Practice Exam with Questions & Answers

Question 1

You have a topic t1 with six partitions. You use Kafka Connect to send data from topic t1 in your Kafka cluster to Amazon S3. Kafka Connect is configured for two tasks.

How many partitions will each task process?

Options:
A.

2

B.

3

C.

6

D.

12
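
A minimal sketch of the sink configuration in this scenario, assuming the Confluent S3 sink connector class and a hypothetical bucket name. With tasks.max=2, Connect spreads the six partitions of t1 across the two running tasks (three per task under the default assignment):

    name=s3-sink-t1
    connector.class=io.confluent.connect.s3.S3SinkConnector
    topics=t1
    tasks.max=2
    s3.bucket.name=my-example-bucket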

Question 2

You need to set alerts on key broker metrics to trigger notifications when the cluster is unhealthy.

Which three broker metrics should you monitor, at a minimum?

(Select three.)

Options:
A.

kafka.controller:type=KafkaController,name=TopicsToDeleteCount

B.

kafka.controller:type=KafkaController,name=OfflinePartitionsCount

C.

kafka.controller:type=KafkaController,name=ActiveControllerCount

D.

kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec

E.

kafka.controller:type=KafkaController,name=LastCommittedRecordOffset
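
These are JMX MBean object names exposed by the broker. As a hedged illustration (tool flags vary by Kafka version; the host, port, and metric below are examples), a single metric can be read with the bundled JmxTool:

    kafka-run-class.sh kafka.tools.JmxTool \
      --jmx-url service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi \
      --object-name kafka.controller:type=KafkaController,name=OfflinePartitionsCount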

Question 3

You use Kafka Connect with the JDBC source connector to extract data from a large database and push it into Kafka.

The database contains tens of tables, and the current connector is unable to process the data fast enough.

You add more Kafka Connect workers, but throughput doesn't improve.

What should you do next?

Options:
A.

Increase the number of Kafka partitions for the topics.

B.

Increase the value of the connector's property tasks.max.

C.

Add more Kafka brokers to the cluster.

D.

Modify the database schemas to enable horizontal sharding.
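
For context, the JDBC source connector parallelizes by assigning different tables to different tasks, so throughput is capped by tasks.max rather than by the number of workers. A hedged sketch of the connector properties (the connection URL, mode, and column name are placeholders):

    name=jdbc-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:postgresql://db-host:5432/mydb
    mode=incrementing
    incrementing.column.name=id
    tasks.max=10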

Question 4

Which statement is true about how exactly-once semantics (EOS) work in Kafka Streams?

Options:
A.

Kafka Streams disables log compaction on internal changelog topics to preserve all state changes for potential recovery.

B.

EOS in Kafka Streams relies on transactional producers to atomically commit state updates to changelog topics and output records to Kafka.

C.

Kafka Streams provides EOS by periodically checkpointing state stores and replaying changelogs to recover only unprocessed messages during failure.

D.

EOS in Kafka Streams is implemented by creating a separate Kafka topic for deduplication of all messages processed by the application.
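
As background, Kafka Streams turns on exactly-once processing through a single property; internally it uses transactional producers so that changelog writes, output records, and consumer offset commits succeed or fail together. A minimal sketch (application id and bootstrap servers are placeholders):

    application.id=my-streams-app
    bootstrap.servers=broker1:9092
    processing.guarantee=exactly_once_v2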

Question 5

Your configuration parameters for a Source connector and Connect worker are:

    offset.flush.interval.ms=60000

    offset.flush.timeout.ms=500

    offset.storage.topic=connect-offsets

    offset.storage.replication.factor=-1

Which four statements match the expected behavior?

(Select four.)

Options:
A.

The connector will wait 60000ms before trying to commit offsets for tasks.

B.

The connector will wait 500ms for offset data to be committed.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The offsets topic will use the broker default replication factor.

Question 6

Which tool can you use to modify the replication factor of an existing topic?

Options:
A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh
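
Replication factor changes are made by reassigning partitions with an explicit replica list per partition. A hedged sketch (topic name, broker ids, and file name are placeholders; older releases take --zookeeper instead of --bootstrap-server):

    # increase-rf.json
    {"version":1,"partitions":[
      {"topic":"t1","partition":0,"replicas":[1,2,3]},
      {"topic":"t1","partition":1,"replicas":[2,3,1]}
    ]}

    kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
      --reassignment-json-file increase-rf.json --execute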

Question 7

What is the default maximum size of a message the Apache Kafka broker can accept?

Options:
A.

1MB

B.

2MB

C.

5MB

D.

10MB
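
The limit is governed by the broker property message.max.bytes (overridable per topic with max.message.bytes and bounded on the producer side by max.request.size). As a hedged reference, the broker default is roughly 1 MB:

    # broker default in recent versions (about 1 MB)
    message.max.bytes=1048588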

Question 8

Your application is consuming from a topic with one consumer group.

The number of running consumers is equal to the number of partitions.

Application logs show that some consumers are leaving the consumer group during peak time, triggering a rebalance. You also notice that your application is processing many duplicates.

You need to stop consumers from leaving the consumer group.

What should you do?

Options:
A.

Reduce max.poll.records property.

B.

Increase session.timeout.ms property.

C.

Add more consumer instances.

D.

Split consumers into different consumer groups.
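
Consumers are removed from the group when the coordinator misses heartbeats for longer than session.timeout.ms, or when the gap between poll() calls exceeds max.poll.interval.ms; either event triggers a rebalance and re-delivery of uncommitted records. A hedged consumer-properties sketch of the knobs involved (values are examples only):

    group.id=my-consumer-group
    session.timeout.ms=45000
    heartbeat.interval.ms=15000
    max.poll.interval.ms=300000
    max.poll.records=200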

Question 9

Where are source connector offsets stored?

Options:
A.

offset.storage.topic

B.

storage.offset.topic

C.

topic.offset.config

D.

offset.storage.partitions
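
These are Connect worker properties for distributed mode. A hedged sketch of the offset-storage settings, with example values only:

    offset.storage.topic=connect-offsets
    offset.storage.replication.factor=3
    offset.storage.partitions=25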

Question 10

You want to connect to a secured Kafka cluster that uses SSL encryption, authenticating with a username and password.

Which properties must your client include?

Options:
A.

security.protocol=SASL_SSL

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myUser" password="myPassword";

B.

security.protocol=SSL

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myUser" password="myPassword";

C.

security.protocol=SASL_PLAINTEXT

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myUser" password="myPassword";

D.

security.protocol=PLAINTEXT

sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username="myUser" password="myPassword";
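
For completeness, a hedged sketch of a full client configuration for username/password authentication over TLS, assuming the PLAIN SASL mechanism and a truststore at a hypothetical path:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myUser" password="myPassword";
    ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
    ssl.truststore.password=changeit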
