Error handling
The Neo4j Connector for Kafka sink supports the Kafka Connect error handling mechanism for dealing with bad incoming data. To make use of this feature, apply the configuration settings described in Table 1, "Dead Letter Queue configuration parameters".
Table 1. Dead Letter Queue configuration parameters

Name | Description
---|---
`errors.tolerance` | Configures error tolerance during the sink process. One of `none` or `all`. Default: `none`, meaning any error causes the connector task to fail immediately; `all` skips problematic messages and continues processing.
`errors.log.enable` | If set to `true`, each error, along with details of the failed operation and the problematic message, is written to the Kafka Connect application log. By default, this is set to `false`, so only errors that are not tolerated are reported. One of `true` or `false`.
`errors.log.include.messages` | Specifies whether to include the Kafka message that resulted in a failure in the log. If enabled, the topic, partition, offset, and timestamp are logged. By default, this is set to `false`, preventing message keys, values, and headers from being written to log files. One of `true` or `false`.
`errors.deadletterqueue.topic.name` | Specifies the topic name to be used as the dead letter queue (DLQ) for messages that encounter errors during the sink process. When a topic name is set, failed messages are sent to the DLQ. Default: `` (blank), indicating that no messages are sent to the DLQ.
`errors.deadletterqueue.context.headers.enable` | If set to `true`, headers containing error context are added to the messages sent to the DLQ topic. To prevent conflicts with headers from the original record, all error context header keys start with `__connect.errors.`. One of `true` or `false`. Default: `false`.
`errors.deadletterqueue.topic.replication.factor` | Specifies the replication factor used to create the dead letter queue (DLQ) topic if it does not already exist. Default: `3`.
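As an illustration, the error handling settings above can be combined in a sink connector configuration. The following sketch tolerates all errors, logs them with message coordinates, and routes failed messages to a DLQ topic; the connector name, topic names, credentials, and connection URI are hypothetical placeholders, and a replication factor of `1` is suitable only for a single-broker development cluster:

```json
{
  "name": "my-neo4j-sink",
  "config": {
    "connector.class": "org.neo4j.connectors.kafka.sink.Neo4jConnector",
    "topics": "orders",
    "neo4j.uri": "neo4j://localhost:7687",
    "neo4j.authentication.basic.username": "neo4j",
    "neo4j.authentication.basic.password": "password",

    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "errors.deadletterqueue.topic.name": "orders-dlq",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.deadletterqueue.topic.replication.factor": "1"
  }
}
```

With `errors.deadletterqueue.context.headers.enable` set to `true`, each record on `orders-dlq` carries `__connect.errors.`-prefixed headers describing where and why the original record failed, which can be inspected with any Kafka consumer that prints headers.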
This version of the connector also handles errors that may occur during message processing and while writing messages to the target Neo4j database. For further details about how the Kafka Connect framework handles error management, see the blog post Error Handling and Dead Letter Queues.