This is an automated email from the ASF dual-hosted git repository.

cegerton pushed a commit to branch fix-connect-anchor-links
in repository https://gitbox.apache.org/repos/asf/kafka-site.git

commit fe275ca1c8f9907286010c751f59b6fb91309b10
Author: Chris Egerton <[email protected]>
AuthorDate: Fri Feb 10 10:41:18 2023 -0500

    MINOR: Fix anchor links in Connect docs
    
    Missing `#` at the beginning of links leads to 404s when users click on them.
---
 33/connect.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/33/connect.html b/33/connect.html
index 4d14b606..fa3f511c 100644
--- a/33/connect.html
+++ b/33/connect.html
@@ -377,7 +377,7 @@ errors.tolerance=all</pre>
 
     <p>If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property <code>consumer.isolation.level</code> to <code>read_committed</code> or, if running a version of Kafka Connect that supports it, using a <a href="#connectconfigs_connector.client.config.override.policy">connector client config override polic [...]
 
-    <h5><a id="connect_exactlyoncesource" href="connect_exactlyoncesource">Source connectors</a></h5>
+    <h5><a id="connect_exactlyoncesource" href="#connect_exactlyoncesource">Source connectors</a></h5>
 
     <p>If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster. Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.</p>
 
@@ -641,7 +641,7 @@ public abstract class SinkTask implements Task {
     <p>The <code>flush()</code> method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The <code>offsets</code> parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide ex [...]
     delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the <code>flush()</code> operation atomically commits the data and offsets to a final location in HDFS.</p>
 
-    <h5><a id="connect_errantrecordreporter" href="connect_errantrecordreporter">Errant Record Reporter</a></h5>
+    <h5><a id="connect_errantrecordreporter" href="#connect_errantrecordreporter">Errant Record Reporter</a></h5>
 
     <p>When <a href="#connect_errorreporting">error reporting</a> is enabled for a connector, the connector can use an <code>ErrantRecordReporter</code> to report problems with individual records sent to a sink connector. The following example shows how a connector's <code>SinkTask</code> subclass might obtain and use the <code>ErrantRecordReporter</code>, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn [...]
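The null-reporter guard described in the paragraph above can be sketched in plain Java. This is a self-contained illustration only: `ErrantRecordReporter` below is a hypothetical stand-in interface, not the real `org.apache.kafka.connect.sink.ErrantRecordReporter`, and `SafeSinkTask` is an invented name; the point is the pattern of falling back gracefully when the reporter is unavailable.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical stand-in for the reporter interface; the real one is
    // provided by the Kafka Connect runtime when a DLQ is configured.
    interface ErrantRecordReporter {
        void report(String record, Throwable error);
    }

    // Sketch of a sink task that tolerates a missing (null) reporter.
    static class SafeSinkTask {
        private final ErrantRecordReporter reporter; // may be null
        final List<String> delivered = new ArrayList<>();

        SafeSinkTask(ErrantRecordReporter reporter) {
            this.reporter = reporter;
        }

        void put(String record) {
            try {
                process(record);
            } catch (Exception e) {
                if (reporter != null) {
                    // DLQ configured and the runtime supports reporting:
                    // hand the bad record off and keep the task running.
                    reporter.report(record, e);
                } else {
                    // No reporter (DLQ disabled or older runtime):
                    // fail the task rather than silently drop the record.
                    throw new RuntimeException("No reporter available", e);
                }
            }
        }

        private void process(String record) {
            if (record.startsWith("bad")) {
                throw new IllegalArgumentException("cannot process " + record);
            }
            delivered.add(record);
        }
    }
}
```

With a reporter present, bad records are reported and processing continues; with a null reporter, the task surfaces the failure instead.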
 
