mayc0202 commented on issue #78:
URL: https://github.com/apache/doris-kafka-connector/issues/78#issuecomment-3043916383

   Following your configuration, I adjusted the connector settings as follows:

```json
{
    "connector.class": "org.apache.doris.kafka.connector.DorisSinkConnector",
    "tasks.max": "1",
    "doris.urls": "192.168.1.161",
    "sink.properties.partial_columns": "true",
    "sink.properties.strip_outer_array": "false",
    "doris.query.port": "9030",
    "connect.timeoutms": "5000",
    "consumer.override.fetch.max.wait.ms": "1000",
    "doris.database": "test_db",
    "doris.password": "123456",
    "consumer.override.max.poll.records": "10000",
    "converter.mode": "debezium_ingestion",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "sink.properties.columns": "id,name,email,phone,gender,password,age,create_time,update_time",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "buffer.size.bytes": "5000000",
    "doris.topic2table.map": "SERVER275_.xqh-ddtt.app_user2:app_user",
    "doris.http.port": "8030",
    "doris.user": "test_user",
    "database.time_zone": "Asia/Shanghai",
    "bufferflush.intervalms": "30000",
    "topics": "SERVER275_.xqh-ddtt.app_user2",
    "batch.size": "1000",
    "buffer.flush.time": "10",
    "consumer.override.fetch.min.bytes": "52428800",
    "enable.delete": "true",
    "key.converter.schemas.enable": "false",
    "sink.properties.primary_key": "id",
    "consumer.override.compression.type": "snappy",
    "buffer.count.records": "10000",
    "value.converter.schemas.enable": "true",
    "consumer.override.max.partition.fetch.bytes": "104857600"
}
```

   However, delete operations are still not being synchronized. Here is the Kafka Connect log:

   [2025-07-07 16:03:09,155] INFO [_S275_|task-0] Kafka startTimeMs: 1751875389155 (org.apache.kafka.common.utils.AppInfoParser:121)
   [2025-07-07 16:03:09,155] INFO [_D275_app_user_|task-0] ConsumerConfig 
values: 
        allow.auto.create.topics = true
        auto.commit.interval.ms = 5000
        auto.offset.reset = earliest
        bootstrap.servers = [node1:9092, node2:9092, node3:9092]
        check.crcs = true
        client.dns.lookup = use_all_dns_ips
        client.id = connector-consumer-_D275_app_user_-0
        client.rack = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = false
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 1000
        fetch.min.bytes = 52428800
        group.id = connect-_D275_app_user_
        group.instance.id = null
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        internal.throw.on.fetch.stable.offset.unsupported = false
        isolation.level = read_uncommitted
        key.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
        max.partition.fetch.bytes = 104857600
        max.poll.interval.ms = 300000
        max.poll.records = 10000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism = GSSAPI
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        security.protocol = PLAINTEXT
        security.providers = null
        send.buffer.bytes = 131072
        session.timeout.ms = 45000
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
    (org.apache.kafka.clients.consumer.ConsumerConfig:376)
   [2025-07-07 16:03:09,156] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Subscribed to 
topic(s): SERVER275_.xqh-ddtt 
(org.apache.kafka.clients.consumer.KafkaConsumer:968)
   [2025-07-07 16:03:09,160] WARN [_D275_app_user_|task-0] The configuration 
'compression.type' was supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig:384)
   [2025-07-07 16:03:09,161] WARN [_D275_app_user_|task-0] The configuration 
'metrics.context.connect.group.id' was supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig:384)
   [2025-07-07 16:03:09,161] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Resetting the last 
seen epoch of partition SERVER275_.xqh-ddtt-0 to 0 since the associated topicId 
changed from null to g6w16daHQ2i5Lf3aRTJkaw 
(org.apache.kafka.clients.Metadata:402)
   [2025-07-07 16:03:09,161] WARN [_D275_app_user_|task-0] The configuration 
'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known 
config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)
   [2025-07-07 16:03:09,161] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Cluster ID: 
ry0v5XgDTdOF8sMIEb7Xhw (org.apache.kafka.clients.Metadata:287)
   [2025-07-07 16:03:09,161] INFO [_D275_app_user_|task-0] Kafka version: 3.2.1 
(org.apache.kafka.common.utils.AppInfoParser:119)
   [2025-07-07 16:03:09,162] INFO [_D275_app_user_|task-0] Kafka commitId: 
b172a0a94f4ebb9f (org.apache.kafka.common.utils.AppInfoParser:120)
   [2025-07-07 16:03:09,162] INFO [_D275_app_user_|task-0] Kafka startTimeMs: 
1751875389161 (org.apache.kafka.common.utils.AppInfoParser:121)
   [2025-07-07 16:03:09,166] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Subscribed to topic(s): SERVER275_.xqh-ddtt.app_user2 
(org.apache.kafka.clients.consumer.KafkaConsumer:968)
   [2025-07-07 16:03:09,166] INFO [_D275_app_user_|task-0] kafka doris sink 
task start with 
{connector.class=org.apache.doris.kafka.connector.DorisSinkConnector, 
behavior.on.null.values=IGNORE, tasks.max=1, max.retries=10, task_id=0, 
doris.urls=192.168.1.161, sink.properties.partial_columns=true, 
sink.properties.strip_outer_array=false, debezium.schema.evolution=none, 
doris.query.port=9030, connect.timeoutms=5000, 
consumer.override.fetch.max.wait.ms=1000, enable.combine.flush=false, 
delivery.guarantee=AT_LEAST_ONCE, jmx=true, doris.database=test_db, 
doris.password=123456, consumer.override.max.poll.records=10000, 
converter.mode=debezium_ingestion, 
value.converter=org.apache.kafka.connect.json.JsonConverter, 
sink.properties.columns=id,name,email,phone,gender,password,age,create_time,update_time,
 key.converter=org.apache.kafka.connect.storage.StringConverter, 
buffer.size.bytes=5000000, 
doris.topic2table.map=SERVER275_.xqh-ddtt.app_user2:app_user, 
doris.http.port=8030, doris.user=test_user, database.time_zone=Asia/Shanghai, bufferflush.intervalms=30000, 
topics=SERVER275_.xqh-ddtt.app_user2, batch.size=1000, buffer.flush.time=10, 
consumer.override.fetch.min.bytes=52428800, enable.delete=true, 
key.converter.schemas.enable=false, retry.interval.ms=6000, 
sink.properties.primary_key=id, consumer.override.compression.type=snappy, 
task.class=org.apache.doris.kafka.connector.DorisSinkTask, 
buffer.count.records=10000, value.converter.schemas.enable=true, 
name=_D275_app_user_, consumer.override.max.partition.fetch.bytes=104857600, 
load.model=STREAM_LOAD} (org.apache.doris.kafka.connector.DorisSinkTask:55)
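
   The effective task config above includes converter.mode=debezium_ingestion and value.converter.schemas.enable=true, so each Kafka record should carry a Debezium change-event envelope wrapped in a {"schema": ..., "payload": ...} structure. Below is a minimal sketch of what a delete event looks like and how a sink in this mode distinguishes a delete from an upsert; all field values are hypothetical and only illustrate the envelope shape:

```python
import json

# Hypothetical Debezium delete event for app_user2 (the row values are made
# up). With value.converter.schemas.enable=true, JsonConverter wraps the
# change event in a {"schema": ..., "payload": ...} envelope.
raw = json.dumps({
    "schema": {"type": "struct", "name": "SERVER275_.xqh_ddtt.app_user2.Envelope"},
    "payload": {
        "before": {"id": 42, "name": "alice"},  # row image being deleted
        "after": None,                          # deletes carry no after-image
        "op": "d",                              # d = delete, c = create, u = update
        "ts_ms": 1751875389155,
    },
})

event = json.loads(raw)
payload = event["payload"]

# A sink consuming Debezium envelopes keys off "op" to decide whether the
# record is an upsert or a delete; for deletes the row identity comes from
# "before", since "after" is null.
is_delete = payload.get("op") == "d"
row = payload["before"] if is_delete else payload["after"]

print(is_delete, row["id"])  # → True 42
```

   Checking the actual record at the committed offset against this shape (in particular, whether "op" is "d" and "before" is populated) would confirm the delete event reaches the sink at all.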
   [2025-07-07 16:03:09,167] INFO [Worker clientId=connect-1, 
groupId=connect-cluster] Finished starting connectors and tasks 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:1406)
   [2025-07-07 16:03:09,168] INFO [_D275_app_user_|task-0] init 
DorisConnectMonitor, taskId=0 
(org.apache.doris.kafka.connector.metrics.DorisConnectMonitor:66)
   [2025-07-07 16:03:09,168] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Discovered group 
coordinator node1:9092 (id: 2147483647 rack: null) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:884)
   [2025-07-07 16:03:09,168] INFO [_D275_app_user_|task-0] 
WorkerSinkTask{id=_D275_app_user_-0} Sink task finished initialization and 
start (org.apache.kafka.connect.runtime.WorkerSinkTask:313)
   [2025-07-07 16:03:09,169] INFO [_D275_app_user_|task-0] 
WorkerSinkTask{id=_D275_app_user_-0} Executing sink task 
(org.apache.kafka.connect.runtime.WorkerSinkTask:198)
   [2025-07-07 16:03:09,170] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:553)
   [2025-07-07 16:03:09,173] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Request joining 
group due to: need to re-join with the given member-id: 
SERVER275_-dbhistory-dd8bdbc6-997f-4d6f-a8a2-1e2f77460a20 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1050)
   [2025-07-07 16:03:09,173] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Request joining 
group due to: rebalance failed due to 'The group member needs to have a valid 
member id before actually entering a consumer group.' 
(MemberIdRequiredException) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1050)
   [2025-07-07 16:03:09,174] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:553)
   [2025-07-07 16:03:09,175] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Resetting the last seen epoch of partition SERVER275_.xqh-ddtt.app_user2-0 to 0 
since the associated topicId changed from null to 1cDqOfd2QweaX3MmxeZT1g 
(org.apache.kafka.clients.Metadata:402)
   [2025-07-07 16:03:09,176] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Cluster ID: ry0v5XgDTdOF8sMIEb7Xhw (org.apache.kafka.clients.Metadata:287)
   [2025-07-07 16:03:09,176] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Successfully 
joined group with generation Generation{generationId=1, 
memberId='SERVER275_-dbhistory-dd8bdbc6-997f-4d6f-a8a2-1e2f77460a20', 
protocol='range'} 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:614)
   [2025-07-07 16:03:09,176] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Discovered group coordinator node1:9092 (id: 2147483647 rack: null) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:884)
   [2025-07-07 16:03:09,177] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Finished 
assignment for group at generation 1: 
{SERVER275_-dbhistory-dd8bdbc6-997f-4d6f-a8a2-1e2f77460a20=Assignment(partitions=[SERVER275_.xqh-ddtt-0])}
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:702)
   [2025-07-07 16:03:09,178] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
(Re-)joining group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:553)
   [2025-07-07 16:03:09,181] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Successfully 
synced group in generation Generation{generationId=1, 
memberId='SERVER275_-dbhistory-dd8bdbc6-997f-4d6f-a8a2-1e2f77460a20', 
protocol='range'} 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:789)
   [2025-07-07 16:03:09,181] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Request joining group due to: need to re-join with the given member-id: 
connector-consumer-_D275_app_user_-0-a2b3f140-6e99-4e2b-ba89-fadb6286488c 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1050)
   [2025-07-07 16:03:09,181] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Notifying assignor 
about the new Assignment(partitions=[SERVER275_.xqh-ddtt-0]) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:301)
   [2025-07-07 16:03:09,182] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Adding newly 
assigned partitions: SERVER275_.xqh-ddtt-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:313)
   [2025-07-07 16:03:09,181] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Request joining group due to: rebalance failed due to 'The group member needs 
to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1050)
   [2025-07-07 16:03:09,182] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
(Re-)joining group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:553)
   [2025-07-07 16:03:09,183] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Found no committed 
offset for partition SERVER275_.xqh-ddtt-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1515)
   [2025-07-07 16:03:09,185] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Resetting offset 
for partition SERVER275_.xqh-ddtt-0 to position FetchPosition{offset=0, 
offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[node2:9092 (id: 1 rack: null)], 
epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)
   [2025-07-07 16:03:09,186] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Successfully joined group with generation Generation{generationId=14, 
memberId='connector-consumer-_D275_app_user_-0-a2b3f140-6e99-4e2b-ba89-fadb6286488c',
 protocol='range'} 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:614)
   [2025-07-07 16:03:09,186] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Finished assignment for group at generation 14: 
{connector-consumer-_D275_app_user_-0-a2b3f140-6e99-4e2b-ba89-fadb6286488c=Assignment(partitions=[SERVER275_.xqh-ddtt.app_user2-0])}
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:702)
   [2025-07-07 16:03:09,188] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Successfully synced group in generation Generation{generationId=14, 
memberId='connector-consumer-_D275_app_user_-0-a2b3f140-6e99-4e2b-ba89-fadb6286488c',
 protocol='range'} 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:789)
   [2025-07-07 16:03:09,189] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Notifying assignor about the new 
Assignment(partitions=[SERVER275_.xqh-ddtt.app_user2-0]) 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:301)
   [2025-07-07 16:03:09,190] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Adding newly assigned partitions: SERVER275_.xqh-ddtt.app_user2-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:313)
   [2025-07-07 16:03:09,192] INFO [_D275_app_user_|task-0] [Consumer 
clientId=connector-consumer-_D275_app_user_-0, groupId=connect-_D275_app_user_] 
Setting offset for partition SERVER275_.xqh-ddtt.app_user2-0 to the committed 
offset FetchPosition{offset=25, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[node2:9092 (id: 1 rack: null)], 
epoch=0}} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:948)
   [2025-07-07 16:03:09,192] INFO [_D275_app_user_|task-0] kafka doris sink 
task open with [SERVER275_.xqh-ddtt.app_user2-0] 
(org.apache.doris.kafka.connector.DorisSinkTask:77)
   [2025-07-07 16:03:09,196] INFO [_D275_app_user_|task-0] JsonConverterConfig 
values: 
        converter.type = value
        decimal.format = NUMERIC
        schemas.cache.size = 1000
        schemas.enable = false
    (org.apache.kafka.connect.json.JsonConverterConfig:376)
   [2025-07-07 16:03:09,213] INFO [_D275_app_user_|task-0] The app_user table 
type is unique model, the two phase commit default value should be disabled. 
(org.apache.doris.kafka.connector.writer.StreamLoadWriter:81)
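
   Since the log above confirms app_user is a unique-key model table, deletes on the Doris side normally go through the batch-delete mechanism: each deleted row is stream-loaded with the hidden __DORIS_DELETE_SIGN__ column set to 1. The sketch below shows that general row shape only; it is an assumption about the Stream Load convention, not the connector's exact internals:

```python
import json

def to_stream_load_row(payload: dict) -> dict:
    """Map a Debezium change-event payload to a Stream Load row dict.

    __DORIS_DELETE_SIGN__ is Doris's hidden delete-marker column for
    batch delete on unique-key tables; 1 marks the row for deletion.
    """
    if payload.get("op") == "d":
        row = dict(payload["before"])
        row["__DORIS_DELETE_SIGN__"] = 1  # delete marker
    else:
        row = dict(payload["after"])
        row["__DORIS_DELETE_SIGN__"] = 0  # plain upsert
    return row

delete_row = to_stream_load_row({"op": "d", "before": {"id": 42}})
upsert_row = to_stream_load_row({"op": "c", "after": {"id": 43, "name": "bob"}})
print(json.dumps(delete_row))  # → {"id": 42, "__DORIS_DELETE_SIGN__": 1}
```

   Given that the config pins an explicit sink.properties.columns list that does not mention the delete-sign column, it may be worth checking whether that restriction interacts with delete handling; I am not certain how the connector merges the two.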
   [2025-07-07 16:03:10,034] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Revoke previously 
assigned partitions SERVER275_.xqh-ddtt-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:332)
   [2025-07-07 16:03:10,035] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Member 
SERVER275_-dbhistory-dd8bdbc6-997f-4d6f-a8a2-1e2f77460a20 sending LeaveGroup 
request to coordinator node1:9092 (id: 2147483647 rack: null) due to the 
consumer is being closed 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1111)
   [2025-07-07 16:03:10,035] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Resetting 
generation and member id due to: consumer pro-actively leaving the group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1003)
   [2025-07-07 16:03:10,036] INFO [_S275_|task-0] [Consumer 
clientId=SERVER275_-dbhistory, groupId=SERVER275_-dbhistory] Request joining 
group due to: consumer pro-actively leaving the group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1050)
   [2025-07-07 16:03:10,037] INFO [_S275_|task-0] Metrics scheduler closed 
(org.apache.kafka.common.metrics.Metrics:659)
   [2025-07-07 16:03:10,038] INFO [_S275_|task-0] Closing reporter 
org.apache.kafka.common.metrics.JmxReporter 
(org.apache.kafka.common.metrics.Metrics:663)
   [2025-07-07 16:03:10,038] INFO [_S275_|task-0] Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:669)
   [2025-07-07 16:03:10,041] INFO [_S275_|task-0] App info kafka.consumer for 
SERVER275_-dbhistory unregistered 
(org.apache.kafka.common.utils.AppInfoParser:83)
   [2025-07-07 16:03:10,041] INFO [_S275_|task-0] Finished database history 
recovery of 284 change(s) in 891 ms 
(io.debezium.relational.history.DatabaseHistoryMetrics:114)
   [2025-07-07 16:03:10,043] WARN [_S275_|task-0] The Kafka Connect schema name 
'SERVER275_.xqh-ddtt.app_user2.Value' is not a valid Avro schema name, so 
replacing with 'SERVER275_.xqh_ddtt.app_user2.Value' 
(io.debezium.util.SchemaNameAdjuster:172)
   [2025-07-07 16:03:10,043] WARN [_S275_|task-0] The Kafka Connect schema name 
'SERVER275_.xqh-ddtt.app_user2.Key' is not a valid Avro schema name, so 
replacing with 'SERVER275_.xqh_ddtt.app_user2.Key' 
(io.debezium.util.SchemaNameAdjuster:172)
   [2025-07-07 16:03:10,060] WARN [_S275_|task-0] The Kafka Connect schema name 
'SERVER275_.xqh-ddtt.app_user2.Envelope' is not a valid Avro schema name, so 
replacing with 'SERVER275_.xqh_ddtt.app_user2.Envelope' 
(io.debezium.util.SchemaNameAdjuster:172)
   [2025-07-07 16:03:10,061] INFO [_S275_|task-0] Reconnecting after finishing 
schema recovery (io.debezium.connector.mysql.MySqlConnectorTask:109)
   [2025-07-07 16:03:10,069] INFO [_S275_|task-0] Get all known binlogs from 
MySQL (io.debezium.connector.mysql.MySqlConnection:397)
   [2025-07-07 16:03:10,071] INFO [_S275_|task-0] MySQL has the binlog file 
'mysql-bin.000242' required by the connector 
(io.debezium.connector.mysql.MySqlConnectorTask:317)
   [2025-07-07 16:03:10,072] INFO [_S275_|task-0] Requested thread factory for 
connector MySqlConnector, id = SERVER275_ named = 
change-event-source-coordinator (io.debezium.util.Threads:270)
   [2025-07-07 16:03:10,072] INFO [_S275_|task-0] Creating thread 
debezium-mysqlconnector-SERVER275_-change-event-source-coordinator 
(io.debezium.util.Threads:287)
   [2025-07-07 16:03:10,072] INFO [_S275_|task-0] WorkerSourceTask{id=_S275_-0} 
Source task finished initialization and start 
(org.apache.kafka.connect.runtime.WorkerSourceTask:227)
   [2025-07-07 16:03:10,073] INFO [_S275_|task-0] WorkerSourceTask{id=_S275_-0} 
Executing source task (org.apache.kafka.connect.runtime.WorkerSourceTask:233)
   [2025-07-07 16:03:10,073] INFO [_S275_|task-0] Metrics registered 
(io.debezium.pipeline.ChangeEventSourceCoordinator:103)
   [2025-07-07 16:03:10,073] INFO [_S275_|task-0] Context created 
(io.debezium.pipeline.ChangeEventSourceCoordinator:106)
   [2025-07-07 16:03:10,073] INFO [_S275_|task-0] A previous offset indicating 
a completed snapshot has been found. Neither schema nor data will be 
snapshotted. (io.debezium.connector.mysql.MySqlSnapshotChangeEventSource:81)
   [2025-07-07 16:03:10,074] INFO [_S275_|task-0] Snapshot ended with 
SnapshotResult [status=SKIPPED, offset=MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406585, 
currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406585, restartRowsToSkip=1, restartEventsToSkip=2, 
currentEventLengthInBytes=0, inTransaction=false, transactionId=null, 
incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]]] (io.debezium.pipeline.ChangeEventSourceCoordinator:156)
   [2025-07-07 16:03:10,074] INFO [_S275_|task-0] Requested thread factory for 
connector MySqlConnector, id = SERVER275_ named = binlog-client 
(io.debezium.util.Threads:270)
   [2025-07-07 16:03:10,074] INFO [_S275_|task-0] Starting streaming 
(io.debezium.pipeline.ChangeEventSourceCoordinator:173)
   [2025-07-07 16:03:10,081] INFO [_S275_|task-0] Skip 2 events on streaming 
start (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:923)
   [2025-07-07 16:03:10,082] INFO [_S275_|task-0] Skip 1 rows on streaming 
start (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:927)
   [2025-07-07 16:03:10,082] INFO [_S275_|task-0] Creating thread 
debezium-mysqlconnector-SERVER275_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:10,084] INFO [_S275_|task-0] Creating thread 
debezium-mysqlconnector-SERVER275_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:10,092] INFO [_S275_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406585, 
currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406585, restartRowsToSkip=1, restartEventsToSkip=2, 
currentEventLengthInBytes=0, inTransaction=false, transactionId=null, 
incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:10,093] INFO [_S275_|task-0] Waiting for keepalive thread 
to start (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:944)
   [2025-07-07 16:03:10,093] INFO [_S275_|task-0] Creating thread 
debezium-mysqlconnector-SERVER275_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:10,193] INFO [_S275_|task-0] Keepalive thread is running 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:951)
   [2025-07-07 16:03:10,222] INFO [_D275_app_user_|task-0] Read 1 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:14,322] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:14,323] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:19,072] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875369069 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:03:19,073] INFO [_D101_all_type_table_|task-0] SR sink flush 
currentBufferBytes (} and SecsSinceLastFlushTime 0 
(com.starrocks.connector.kafka.StarRocksSinkTask:347)
   [2025-07-07 16:03:19,073] INFO [_D101_all_type_table_|task-0] Stream load 
manager flush (com.starrocks.data.load.stream.v2.StreamLoadManagerV2:350)
   [2025-07-07 16:03:19,073] INFO [_D101_all_type_table_|task-0] All regions 
committed for savepoint, number of regions: 0 
(com.starrocks.data.load.stream.v2.StreamLoadManagerV2:202)
   [2025-07-07 16:03:19,165] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:19,165] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:24,323] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:24,324] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:29,074] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875399073 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:03:29,166] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:29,167] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:29,174] INFO [_S248_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, file=mysql-bin.000242, 
pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,174] INFO [_DIS_S185_|task-0] Stopped reading binlog 
after 0 events, last recorded offset: {transaction_id=null, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,174] INFO [_DIS_S185_|task-0] Creating thread 
debezium-mysqlconnector-DIS_SERVER185_-binlog-client 
(io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,174] INFO [_S248_|task-0] Creating thread 
debezium-mysqlconnector-SERVER248_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,185] INFO [_S165_|task-0] Creating thread 
debezium-mysqlconnector-SERVER165_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,186] INFO [_S165_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, file=mysql-bin.000242, 
pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,186] INFO [_S248_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,187] INFO [_DIS_S185_|task-0] Connected to MySQL binlog 
at 192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,194] INFO [_S165_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,259] INFO [_DIS_S194_|task-0] Creating thread 
debezium-mysqlconnector-DIS_SERVER194_-binlog-client 
(io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,259] INFO [_DIS_S194_|task-0] Stopped reading binlog 
after 0 events, last recorded offset: {transaction_id=null, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,269] INFO [_DIS_S194_|task-0] Connected to MySQL binlog 
at 192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,398] INFO [_S212_|task-0] Creating thread 
debezium-mysqlconnector-SERVER212_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,398] INFO [_S212_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751855174, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,408] INFO [_S212_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:26:14Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.app_user2], 
databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,653] INFO [_S240_|task-0] Creating thread 
debezium-mysqlconnector-SERVER240_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,653] INFO [_S240_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, file=mysql-bin.000242, 
pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,663] INFO [_S240_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:29,828] INFO [_S265_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751855174, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:29,829] INFO [_S265_|task-0] Creating thread 
debezium-mysqlconnector-SERVER265_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:29,841] INFO [_S265_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:26:14Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.app_user2], 
databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:30,141] INFO [_S272_|task-0] Creating thread 
debezium-mysqlconnector-SERVER272_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:30,141] INFO [_S272_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751855174, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:30,141] INFO [_S263_|task-0] Creating thread 
debezium-mysqlconnector-SERVER263_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:30,141] INFO [_S263_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751855174, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:30,154] INFO [_S272_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:26:14Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.app_user2], 
databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:30,154] INFO [_S263_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:26:14Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.app_user2], 
databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:30,307] INFO [_S264_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751855174, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:30,307] INFO [_S264_|task-0] Creating thread 
debezium-mysqlconnector-SERVER264_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:03:30,318] INFO [_S264_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:26:14Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.app_user2], 
databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:31,971] INFO [_DIS_S142_|task-0] Creating thread 
debezium-mysqlconnector-DIS_SERVER142_-binlog-client 
(io.debezium.util.Threads:287)
   [2025-07-07 16:03:31,971] INFO [_DIS_S142_|task-0] Stopped reading binlog 
after 0 events, last recorded offset: {transaction_id=null, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:31,982] INFO [_DIS_S142_|task-0] Connected to MySQL binlog 
at 192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:32,470] INFO [_DIS_S157_|task-0] Stopped reading binlog 
after 0 events, last recorded offset: {transaction_id=null, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:03:32,471] INFO [_DIS_S157_|task-0] Creating thread 
debezium-mysqlconnector-DIS_SERVER157_-binlog-client 
(io.debezium.util.Threads:287)
   [2025-07-07 16:03:32,482] INFO [_DIS_S157_|task-0] Connected to MySQL binlog 
at 192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:03:34,325] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:34,325] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:39,075] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875399073 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:03:39,168] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:39,169] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:44,326] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:44,326] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:49,076] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875399073 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:03:49,076] INFO [_D101_all_type_table_|task-0] SR sink flush 
currentBufferBytes (} and SecsSinceLastFlushTime 0 
(com.starrocks.connector.kafka.StarRocksSinkTask:347)
   [2025-07-07 16:03:49,076] INFO [_D101_all_type_table_|task-0] Stream load 
manager flush (com.starrocks.data.load.stream.v2.StreamLoadManagerV2:350)
   [2025-07-07 16:03:49,077] INFO [_D101_all_type_table_|task-0] All regions 
committed for savepoint, number of regions: 0 
(com.starrocks.data.load.stream.v2.StreamLoadManagerV2:202)
   [2025-07-07 16:03:49,169] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:49,169] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:54,327] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:54,328] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:03:59,078] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875429077 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:03:59,171] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:03:59,171] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:00,183] INFO [_S151_|task-0] Creating thread 
debezium-mysqlconnector-SERVER151_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:04:00,183] INFO [_S151_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751857157, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:04:00,191] INFO [_S151_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:59:17.683378Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.wk_crm_leads_data], 
databaseName=xqh-ddtt], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:04:04,329] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:04,329] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:09,078] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875429077 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:09,173] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:09,173] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:14,332] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:14,332] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:19,079] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875429077 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:19,080] INFO [_D101_all_type_table_|task-0] SR sink flush 
currentBufferBytes (} and SecsSinceLastFlushTime 0 
(com.starrocks.connector.kafka.StarRocksSinkTask:347)
   [2025-07-07 16:04:19,080] INFO [_D101_all_type_table_|task-0] Stream load 
manager flush (com.starrocks.data.load.stream.v2.StreamLoadManagerV2:350)
   [2025-07-07 16:04:19,080] INFO [_D101_all_type_table_|task-0] All regions 
committed for savepoint, number of regions: 0 
(com.starrocks.data.load.stream.v2.StreamLoadManagerV2:202)
   [2025-07-07 16:04:19,175] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:19,176] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:24,332] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:24,333] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:25,370] INFO [_S70_|task-0] Creating thread 
debezium-mysqlconnector-SERVER70_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:04:25,370] INFO [_S70_|task-0] Stopped reading binlog after 0 
events, last recorded offset: {transaction_id=null, ts_sec=1751857079, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:04:25,380] INFO [_S70_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:57:59.828010Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.wk_crm_leads_data], 
databaseName=xqh-ddtt], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:04:25,428] INFO [_S101_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, ts_sec=1751857079, 
file=mysql-bin.000242, pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:04:25,429] INFO [_S101_|task-0] Creating thread 
debezium-mysqlconnector-SERVER101_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:04:25,439] INFO [_S101_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406855, 
currentRowNumber=0, serverId=223344, sourceTime=2025-07-07T02:57:59.895593Z, 
threadId=-1, currentQuery=null, tableIds=[xqh-ddtt.wk_crm_leads_data], 
databaseName=xqh-ddtt], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=39, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
   [2025-07-07 16:04:29,082] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875459080 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:29,177] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:29,177] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:34,335] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:34,335] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:39,084] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875459080 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:39,177] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:39,178] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:44,336] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:44,337] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:49,085] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875459080 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:49,085] INFO [_D101_all_type_table_|task-0] SR sink flush 
currentBufferBytes (} and SecsSinceLastFlushTime 0 
(com.starrocks.connector.kafka.StarRocksSinkTask:347)
   [2025-07-07 16:04:49,085] INFO [_D101_all_type_table_|task-0] Stream load 
manager flush (com.starrocks.data.load.stream.v2.StreamLoadManagerV2:350)
   [2025-07-07 16:04:49,086] INFO [_D101_all_type_table_|task-0] All regions 
committed for savepoint, number of regions: 0 
(com.starrocks.data.load.stream.v2.StreamLoadManagerV2:202)
   [2025-07-07 16:04:49,179] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:49,179] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:54,337] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:54,338] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:04:59,087] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875489086 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:04:59,181] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:04:59,181] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:05:04,339] INFO [_D272_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:05:04,340] INFO [_D272_app_user_|task-0] Returning committed 
offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, 
leaderEpoch=null, metadata=''}} 
(org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:05:09,088] INFO [_D101_all_type_table_|task-0] receive 
preCommit currentBufferBytes 0 lastFlushTime 1751875489086 
(com.starrocks.connector.kafka.StarRocksSinkTask:339)
   [2025-07-07 16:05:09,182] INFO [_D275_app_user_|task-0] Read 0 records from 
Kafka (org.apache.doris.kafka.connector.DorisSinkTask:101)
   [2025-07-07 16:05:09,183] INFO [_D275_app_user_|task-0] Returning committed 
offsets {} (org.apache.doris.kafka.connector.DorisSinkTask:152)
   [2025-07-07 16:05:10,095] INFO [_S275_|task-0] Stopped reading binlog after 
0 events, last recorded offset: {transaction_id=null, file=mysql-bin.000242, 
pos=995406894, server_id=223344, event=1} 
(io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1205)
   [2025-07-07 16:05:10,095] INFO [_S275_|task-0] Creating thread 
debezium-mysqlconnector-SERVER275_-binlog-client (io.debezium.util.Threads:287)
   [2025-07-07 16:05:10,106] INFO [_S275_|task-0] Connected to MySQL binlog at 
192.168.1.101:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, 
currentBinlogFilename=mysql-bin.000242, currentBinlogPosition=995406863, 
currentRowNumber=0, serverId=223344, sourceTime=null, threadId=-1, 
currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, 
transactionContext=TransactionContext [currentTransactionId=null, 
perTableEventCount={}, totalEventCount=0], restartGtidSet=null, 
currentGtidSet=null, restartBinlogFilename=mysql-bin.000242, 
restartBinlogPosition=995406894, restartRowsToSkip=0, restartEventsToSkip=1, 
currentEventLengthInBytes=31, inTransaction=false, transactionId=null, 
incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, 
chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, 
maximumKey=null]] (io.debezium.connector.mysql.MySqlStreamingChangeEventSource:1220)
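
One thing the log above shows directly: every `Returning committed offsets` line for partition `SERVER272_.xqh-ddtt.app_user2-0` reports `offset=69` and every `DorisSinkTask` poll reads 0 records, so the sink never sees new events (deletes included) during this window. As a quick sanity check, a throwaway script like the following (the regex and function name are my own, not part of the connector) can scan a Connect log and confirm whether committed offsets ever advance:

```python
import re

# Matches the offset commits logged by DorisSinkTask, e.g.
# "SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, ...}"
OFFSET_RE = re.compile(r"([\w.-]+-\d+)=OffsetAndMetadata\{offset=(\d+)")

def committed_offsets(log_text):
    """Return {topic-partition: [committed offsets in log order]}."""
    offsets = {}
    for tp, off in OFFSET_RE.findall(log_text):
        offsets.setdefault(tp, []).append(int(off))
    return offsets

sample = """
Returning committed offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, leaderEpoch=null, metadata=''}}
Returning committed offsets {SERVER272_.xqh-ddtt.app_user2-0=OffsetAndMetadata{offset=69, leaderEpoch=null, metadata=''}}
"""
print(committed_offsets(sample))
# → {'SERVER272_.xqh-ddtt.app_user2-0': [69, 69]}
```

A repeated, unchanging offset means the delete events never reached the sink from that partition, which points at the source/topic side rather than at the sink's `enable.delete` handling.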


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

