[jira] [Resolved] (KAFKA-15274) support moving files to be deleted to other directories

2023-08-03 Thread jianbin.chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianbin.chen resolved KAFKA-15274.
--
Resolution: Duplicate

> support moving files to be deleted to other directories
> ---
>
> Key: KAFKA-15274
> URL: https://issues.apache.org/jira/browse/KAFKA-15274
> Project: Kafka
>  Issue Type: Task
>Reporter: jianbin.chen
>Assignee: jianbin.chen
>Priority: Major
>
> Hello everyone, I am a Kafka user from China. Our company operates in
> overseas public clouds such as AWS, Ali Cloud, and Huawei Cloud. We handle a
> large volume of data exchange and business message delivery every day, and
> these messages consume a significant amount of disk space. Purchasing the
> corresponding storage capacity from these cloud providers is expensive,
> especially for SSDs with ultra-high IOPS. High IOPS is very valuable for
> disaster recovery: when a broker suddenly fails because its storage fills up
> or it is OOM-killed, high-IOPS storage greatly improves data recovery
> efficiency. This forces us to adopt smaller, high-IO storage volumes to
> control costs, particularly because cloud providers only allow capacity
> expansion, not reduction.
> We have come up with a solution and would like to contribute it to the
> community for discussion. We can purchase object storage such as S3 or MinIO
> from providers like AWS and mount it on our brokers. When a log needs to be
> deleted, we decide how it leaves the broker: the default is to delete it
> directly, while a "move" option moves it to S3 instead. Since most deleted
> data is cold data that won't be used in the short term, this approach
> extends the retention period of historical data while keeping costs under
> control.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15274) support moving files to be deleted to other directories

2023-07-31 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-15274:


 Summary: support moving files to be deleted to other directories
 Key: KAFKA-15274
 URL: https://issues.apache.org/jira/browse/KAFKA-15274
 Project: Kafka
  Issue Type: Improvement
Reporter: jianbin.chen


Hello everyone, I am a Kafka user from China. Our company operates in overseas 
public clouds such as AWS, Ali Cloud, and Huawei Cloud. We handle a large 
volume of data exchange and business message delivery every day, and these 
messages consume a significant amount of disk space. Purchasing the 
corresponding storage capacity from these cloud providers is expensive, 
especially for SSDs with ultra-high IOPS. High IOPS is very valuable for 
disaster recovery: when a broker suddenly fails because its storage fills up 
or it is OOM-killed, high-IOPS storage greatly improves data recovery 
efficiency. This forces us to adopt smaller, high-IO storage volumes to 
control costs, particularly because cloud providers only allow capacity 
expansion, not reduction.

We have come up with a solution and would like to contribute it to the 
community for discussion. We can purchase object storage such as S3 or MinIO 
from providers like AWS and mount it on our brokers. When a log needs to be 
deleted, we decide how it leaves the broker: the default is to delete it 
directly, while a "move" option moves it to S3 instead. Since most deleted 
data is cold data that won't be used in the short term, this approach extends 
the retention period of historical data while keeping costs under control.
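The proposed behaviour can be sketched as follows. This is a minimal illustration only, assuming a hypothetical delete mode and an archive directory mounted from S3/MinIO; the class and option names are invented for discussion and are not actual Kafka code or configuration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the proposal: when a log segment is scheduled
// for removal, either delete it directly (today's behaviour) or move it
// to an archive directory backed by cheap mounted storage such as S3 or
// MinIO. All names here are invented for illustration.
public class SegmentRemovalSketch {
    enum DeleteMode { DELETE, MOVE }

    static void removeSegment(Path segment, DeleteMode mode, Path archiveDir) throws IOException {
        if (mode == DeleteMode.MOVE) {
            Files.createDirectories(archiveDir);
            // Keep the cold segment on cheap storage instead of losing it.
            Files.move(segment, archiveDir.resolve(segment.getFileName()),
                    StandardCopyOption.REPLACE_EXISTING);
        } else {
            Files.deleteIfExists(segment); // default: delete outright
        }
    }

    public static void main(String[] args) throws IOException {
        Path logDir = Files.createTempDirectory("kafka-logs");
        Path archive = Files.createTempDirectory("s3-mount");
        Path segment = Files.createFile(logDir.resolve("00000000000000000000.log"));
        removeSegment(segment, DeleteMode.MOVE, archive);
        // The segment is gone from the log dir but preserved in the archive.
        System.out.println(Files.exists(archive.resolve("00000000000000000000.log")));
    }
}
```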





[jira] [Created] (KAFKA-15264) Compared with 1.1.0zk, the peak throughput of 3.5.1kraft is very jitter

2023-07-27 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-15264:


 Summary: Compared with 1.1.0zk, the peak throughput of 3.5.1kraft 
is very jitter
 Key: KAFKA-15264
 URL: https://issues.apache.org/jira/browse/KAFKA-15264
 Project: Kafka
  Issue Type: Bug
Reporter: jianbin.chen


I was preparing to upgrade from 1.1.0 to 3.5.1 in KRaft mode (as a new cluster 
deployment). In a recent comparison test I found an obvious throughput gap 
when running the following stress-test command:

 
{code:java}
./kafka-producer-perf-test.sh --topic test321 --num-records 3000 
--record-size 1024 --throughput -1 --producer-props bootstrap.servers=xxx: 
acks=1
419813 records sent, 83962.6 records/sec (81.99 MB/sec), 241.1 ms avg latency, 
588.0 ms max latency.
555300 records sent, 111015.6 records/sec (108.41 MB/sec), 275.1 ms avg 
latency, 460.0 ms max latency.
552795 records sent, 110536.9 records/sec (107.95 MB/sec), 265.9 ms avg 
latency, 1120.0 ms max latency.
552600 records sent, 110520.0 records/sec (107.93 MB/sec), 284.5 ms avg 
latency, 1097.0 ms max latency.
538500 records sent, 107656.9 records/sec (105.13 MB/sec), 277.5 ms avg 
latency, 610.0 ms max latency.
511545 records sent, 102309.0 records/sec (99.91 MB/sec), 304.1 ms avg latency, 
1892.0 ms max latency.
511890 records sent, 102337.1 records/sec (99.94 MB/sec), 288.4 ms avg latency, 
3000.0 ms max latency.
519165 records sent, 103812.2 records/sec (101.38 MB/sec), 262.1 ms avg 
latency, 1781.0 ms max latency.
513555 records sent, 102669.9 records/sec (100.26 MB/sec), 338.2 ms avg 
latency, 2590.0 ms max latency.
463329 records sent, 92665.8 records/sec (90.49 MB/sec), 276.8 ms avg latency, 
1463.0 ms max latency.
494248 records sent, 98849.6 records/sec (96.53 MB/sec), 327.2 ms avg latency, 
2362.0 ms max latency.
506272 records sent, 101254.4 records/sec (98.88 MB/sec), 322.1 ms avg latency, 
2986.0 ms max latency.
393758 records sent, 78735.9 records/sec (76.89 MB/sec), 387.0 ms avg latency, 
2958.0 ms max latency.
426435 records sent, 85252.9 records/sec (83.25 MB/sec), 363.3 ms avg latency, 
1959.0 ms max latency.
412560 records sent, 82298.0 records/sec (80.37 MB/sec), 374.1 ms avg latency, 
1995.0 ms max latency.
370137 records sent, 73997.8 records/sec (72.26 MB/sec), 396.8 ms avg latency, 
1496.0 ms max latency.
391781 records sent, 78340.5 records/sec (76.50 MB/sec), 410.7 ms avg latency, 
2446.0 ms max latency.
355901 records sent, 71166.0 records/sec (69.50 MB/sec), 397.5 ms avg latency, 
2715.0 ms max latency.
385410 records sent, 77082.0 records/sec (75.28 MB/sec), 417.5 ms avg latency, 
2702.0 ms max latency.
381160 records sent, 76232.0 records/sec (74.45 MB/sec), 407.7 ms avg latency, 
1846.0 ms max latency.
67 records sent, 0.1 records/sec (65.10 MB/sec), 456.2 ms avg latency, 
1414.0 ms max latency.
376251 records sent, 75175.0 records/sec (73.41 MB/sec), 401.9 ms avg latency, 
1897.0 ms max latency.
354434 records sent, 70886.8 records/sec (69.23 MB/sec), 425.8 ms avg latency, 
1601.0 ms max latency.
353795 records sent, 70744.9 records/sec (69.09 MB/sec), 411.7 ms avg latency, 
1563.0 ms max latency.
321993 records sent, 64360.0 records/sec (62.85 MB/sec), 447.3 ms avg latency, 
1975.0 ms max latency.
404075 records sent, 80750.4 records/sec (78.86 MB/sec), 408.4 ms avg latency, 
1753.0 ms max latency.
384526 records sent, 76905.2 records/sec (75.10 MB/sec), 406.0 ms avg latency, 
1833.0 ms max latency.
387652 records sent, 77483.9 records/sec (75.67 MB/sec), 397.3 ms avg latency, 
1927.0 ms max latency.
343286 records sent, 68629.7 records/sec (67.02 MB/sec), 455.6 ms avg latency, 
1685.0 ms max latency.
00 records sent, 66646.7 records/sec (65.08 MB/sec), 456.6 ms avg latency, 
2146.0 ms max latency.
361191 records sent, 72238.2 records/sec (70.55 MB/sec), 409.4 ms avg latency, 
2125.0 ms max latency.
357525 records sent, 71490.7 records/sec (69.82 MB/sec), 436.0 ms avg latency, 
1502.0 ms max latency.
340238 records sent, 68047.6 records/sec (66.45 MB/sec), 427.9 ms avg latency, 
1932.0 ms max latency.
390016 records sent, 77956.4 records/sec (76.13 MB/sec), 418.5 ms avg latency, 
1807.0 ms max latency.
352830 records sent, 70523.7 records/sec (68.87 MB/sec), 439.4 ms avg latency, 
1892.0 ms max latency.
354526 records sent, 70905.2 records/sec (69.24 MB/sec), 429.6 ms avg latency, 
2128.0 ms max latency.
356670 records sent, 71305.5 records/sec (69.63 MB/sec), 408.9 ms avg latency, 
1329.0 ms max latency.
309204 records sent, 60687.7 records/sec (59.27 MB/sec), 438.6 ms avg latency, 
2566.0 ms max latency.
366715 records sent, 72316.1 records/sec (70.62 MB/sec), 474.5 ms avg latency, 
2169.0 ms max latency.
375174 records sent, 75034.8 records/sec (73.28 MB/sec), 429.9 ms avg latency, 
1722.0 ms max latency.
359400 records sent, 70346.4 records/sec (68.70 MB/sec), 432.1 ms avg latency, 
1961.0 ms max latency.
312276 
{code}
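As a sanity check on the figures above, the MB/sec column appears to be records/sec multiplied by the record size (1024 bytes here) and divided by 1024², i.e. records/sec ÷ 1024:

```java
// With --record-size 1024, MB/sec in the perf-test output is
// records/sec * 1024 bytes / (1024 * 1024) = records/sec / 1024.
public class PerfMath {
    static double mbPerSec(double recordsPerSec, int recordSizeBytes) {
        return recordsPerSec * recordSizeBytes / (1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        // First line of the output above: 83962.6 records/sec -> 81.99 MB/sec.
        System.out.println(Math.round(mbPerSec(83962.6, 1024) * 100.0) / 100.0);
    }
}
```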

[jira] [Created] (KAFKA-14465) java.lang.NumberFormatException: For input string: "index"

2022-12-12 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-14465:


 Summary: java.lang.NumberFormatException: For input string: 
"index" 
 Key: KAFKA-14465
 URL: https://issues.apache.org/jira/browse/KAFKA-14465
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: jianbin.chen


{code:java}
[2022-12-13 07:12:20,369] WARN [Log partition=fp_sg_flow_copy-1, dir=/home/admin/output/kafka-logs] Found a corrupted index file corresponding to log file /home/admin/output/kafka-logs/fp_sg_flow_copy-1/0165.log due to Corrupt index found, index file (/home/admin/output/kafka-logs/fp_sg_flow_copy-1/0165.index) has non-zero size but the last offset is 165 which is no greater than the base offset 165.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2022-12-13 07:12:20,369] ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: "index" (kafka.log.LogManager)
[2022-12-13 07:12:20,374] INFO [ProducerStateManager partition=fp_sg_flow_copy-1] Writing producer snapshot at offset 165 (kafka.log.ProducerStateManager)
[2022-12-13 07:12:20,378] INFO [Log partition=fp_sg_flow_copy-1, dir=/home/admin/output/kafka-logs] Loading producer state from offset 165 with message format version 2 (kafka.log.Log)
[2022-12-13 07:12:20,381] INFO [Log partition=fp_sg_flow_copy-1, dir=/home/admin/output/kafka-logs] Completed load of log with 1 segments, log start offset 165 and log end offset 165 in 13 ms (kafka.log.Log)
[2022-12-13 07:12:20,389] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NumberFormatException: For input string: "index"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:589)
    at java.lang.Long.parseLong(Long.java:631)
    at scala.collection.immutable.StringLike.toLong(StringLike.scala:306)
    at scala.collection.immutable.StringLike.toLong$(StringLike.scala:306)
    at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
    at kafka.log.Log$.offsetFromFile(Log.scala:1846)
    at kafka.log.Log.$anonfun$loadSegmentFiles$3(Log.scala:331)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:191)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
    at kafka.log.Log.loadSegmentFiles(Log.scala:320)
    at kafka.log.Log.loadSegments(Log.scala:403)
    at kafka.log.Log.<init>(Log.scala:216)
    at kafka.log.Log$.apply(Log.scala:1748)
    at kafka.log.LogManager.loadLog(LogManager.scala:265)
    at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
[2022-12-13 07:12:20,401] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)


{code}
This happens when I restart the broker. I deleted the 
000165.index file, but after starting again other files fail 
with the same error. Please tell me what is causing this and how to fix it.
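For context, `kafka.log.Log$.offsetFromFile` (visible in the stack trace) derives a segment's base offset by parsing the numeric prefix of the file name. The sketch below is a simplified reconstruction, not Kafka's actual code, but it shows why a stray file in a partition directory whose parsed prefix is not a number fails startup with exactly this exception; locating and removing such non-numeric files from the log directories is the usual remedy:

```java
// Simplified reconstruction of how a base offset is derived from a
// segment file name: everything before the first '.' is parsed as a long.
// A stray file whose parsed prefix is the literal string "index" produces
// java.lang.NumberFormatException: For input string: "index".
public class OffsetFromFile {
    static long offsetFromFileName(String fileName) {
        int dot = fileName.indexOf('.');
        String prefix = dot >= 0 ? fileName.substring(0, dot) : fileName;
        return Long.parseLong(prefix);
    }

    public static void main(String[] args) {
        System.out.println(offsetFromFileName("00000000000000000165.index"));
        try {
            offsetFromFileName("index"); // no numeric prefix -> parse fails
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage());
        }
    }
}
```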





[jira] [Resolved] (KAFKA-14434) Why is this project not maintained anymore?

2022-12-01 Thread jianbin.chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianbin.chen resolved KAFKA-14434.
--
Resolution: Invalid

invalid

> Why is this project not maintained anymore?
> ---
>
> Key: KAFKA-14434
> URL: https://issues.apache.org/jira/browse/KAFKA-14434
> Project: Kafka
>  Issue Type: Improvement
>Reporter: jianbin.chen
>Priority: Major
>
> Why is this project not maintained anymore? Can I continue to use it and 
> submit PRs?





[jira] [Created] (KAFKA-14434) Why is this project not maintained anymore?

2022-12-01 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-14434:


 Summary: Why is this project not maintained anymore?
 Key: KAFKA-14434
 URL: https://issues.apache.org/jira/browse/KAFKA-14434
 Project: Kafka
  Issue Type: Improvement
Reporter: jianbin.chen


Why is this project not maintained anymore? Can I continue to use it and 
submit PRs?





[jira] [Created] (KAFKA-14430) optimize: -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT

2022-11-30 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-14430:


 Summary: optimize: 
-Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT
 Key: KAFKA-14430
 URL: https://issues.apache.org/jira/browse/KAFKA-14430
 Project: Kafka
  Issue Type: Improvement
Reporter: jianbin.chen


If the server is behind a firewall, exposing only 
'com.sun.management.jmxremote.port' is not enough to fetch metrics: when the 
RMI port is not specified, it is chosen at random by default, so the firewall 
cannot be opened for it in advance. The two ports should be made consistent 
so that metrics can be read.

[https://bugs.openjdk.org/browse/JDK-8035404?page=com.atlassian.jira.plugin.system.issuetabpanels%3Achangehistory-tabpanel]
[https://www.baeldung.com/jmx-ports]
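One way to achieve this with the existing JVM flags (a sketch; `KAFKA_JMX_OPTS` is the environment variable consumed by `kafka-run-class.sh`, and 9999 is an arbitrary example port) is to pin both the JMX registry port and the RMI server port to the same value, so the firewall only needs one rule:

```shell
# Pin the JMX port and the RMI server port to the same value so that a
# firewall only needs to allow a single, known port.
export JMX_PORT=9999   # example port, pick any free one
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=${JMX_PORT} \
  -Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```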






[jira] [Resolved] (KAFKA-14257) Unexpected error INCONSISTENT_CLUSTER_ID in VOTE response

2022-09-28 Thread jianbin.chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianbin.chen resolved KAFKA-14257.
--
Resolution: Done

> Unexpected error INCONSISTENT_CLUSTER_ID in VOTE response
> -
>
> Key: KAFKA-14257
> URL: https://issues.apache.org/jira/browse/KAFKA-14257
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.2.3
>Reporter: jianbin.chen
>Priority: Major
>
> Please help me understand why the following error message is logged endlessly.
> broker1:
> {code:java}
> process.roles=broker,controller
> listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> node.id=1
> listeners=PLAINTEXT://192.168.6.57:9092,CONTROLLER://192.168.6.57:9093
> inter.broker.listener.name=PLAINTEXT
> advertised.listeners=PLAINTEXT://192.168.6.57:9092
> controller.listener.names=CONTROLLER
> num.io.threads=8
> num.network.threads=5
> controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
> log.dirs=/data01/kafka323-logs{code}
> broker2
> {code:java}
> process.roles=broker,controller
> controller.listener.names=CONTROLLER
> num.io.threads=8
> num.network.threads=5
> listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> node.id=2
> listeners=PLAINTEXT://192.168.6.56:9092,CONTROLLER://192.168.6.56:9093
> inter.broker.listener.name=PLAINTEXT
> controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
> log.dirs=/data01/kafka323-logs{code}
> broker3
> {code:java}
> process.roles=broker,controller
> controller.listener.names=CONTROLLER
> num.io.threads=8
> num.network.threads=5
> node.id=3
> listeners=PLAINTEXT://192.168.6.55:9092,CONTROLLER://192.168.6.55:9093
> inter.broker.listener.name=PLAINTEXT
> controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
> log.dirs=/data01/kafka323-logs
> {code}
> error msg:
> {code:java}
> [2022-09-22 18:44:01,601] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=378, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient)
> [2022-09-22 18:44:01,625] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=380, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient)
> [2022-09-22 18:44:01,655] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=382, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient)
> [2022-09-22 18:44:01,679] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=384, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient)
> [2022-09-22 18:44:01,706] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=386, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient)
> [2022-09-22 18:44:01,729] ERROR [RaftManager nodeId=2] Unexpected error 
> INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=388, 
> data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
> (org.apache.kafka.raft.KafkaRaftClient){code}





[jira] [Created] (KAFKA-14257) Unexpected error INCONSISTENT_CLUSTER_ID in VOTE response

2022-09-22 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-14257:


 Summary: Unexpected error INCONSISTENT_CLUSTER_ID in VOTE response
 Key: KAFKA-14257
 URL: https://issues.apache.org/jira/browse/KAFKA-14257
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.2.3
Reporter: jianbin.chen


Please help me understand why the following error message is logged endlessly.

broker1:
{code:java}
process.roles=broker,controller
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
node.id=1
listeners=PLAINTEXT://192.168.6.57:9092,CONTROLLER://192.168.6.57:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://192.168.6.57:9092
controller.listener.names=CONTROLLER
num.io.threads=8
num.network.threads=5
controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
log.dirs=/data01/kafka323-logs{code}
broker2
{code:java}
process.roles=broker,controller
controller.listener.names=CONTROLLER
num.io.threads=8
num.network.threads=5
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
node.id=2
listeners=PLAINTEXT://192.168.6.56:9092,CONTROLLER://192.168.6.56:9093
inter.broker.listener.name=PLAINTEXT
controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
log.dirs=/data01/kafka323-logs{code}
broker3
{code:java}
process.roles=broker,controller
controller.listener.names=CONTROLLER
num.io.threads=8
num.network.threads=5
node.id=3
listeners=PLAINTEXT://192.168.6.55:9092,CONTROLLER://192.168.6.55:9093
inter.broker.listener.name=PLAINTEXT
controller.quorum.voters=1@192.168.6.57:9093,2@192.168.6.56:9093,3@192.168.6.55:9093
log.dirs=/data01/kafka323-logs

{code}
error msg:
{code:java}
[2022-09-22 18:44:01,601] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=378, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient)
[2022-09-22 18:44:01,625] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=380, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient)
[2022-09-22 18:44:01,655] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=382, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient)
[2022-09-22 18:44:01,679] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=384, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient)
[2022-09-22 18:44:01,706] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=386, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient)
[2022-09-22 18:44:01,729] ERROR [RaftManager nodeId=2] Unexpected error 
INCONSISTENT_CLUSTER_ID in VOTE response: InboundResponse(correlationId=388, 
data=VoteResponseData(errorCode=104, topics=[]), sourceId=1) 
(org.apache.kafka.raft.KafkaRaftClient){code}





[jira] [Created] (KAFKA-14159) unlimited output: is no longer the Coordinator. Retrying with new coordinator.

2022-08-10 Thread jianbin.chen (Jira)
jianbin.chen created KAFKA-14159:


 Summary: unlimited output: is no longer the Coordinator. Retrying 
with new coordinator.
 Key: KAFKA-14159
 URL: https://issues.apache.org/jira/browse/KAFKA-14159
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 1.1.1
Reporter: jianbin.chen


{code:java}
2022-08-10 18:47:45.546  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.588  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.647  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.4:9092 (id: 864 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.690  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.751  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.5:9092 (id: 865 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.791  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.4:9092 (id: 864 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.852  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.5:9092 (id: 865 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.894  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.5:9092 (id: 865 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.952  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.5:9092 (id: 865 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:45.994  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.23:9092 (id: 5723 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.052  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.5:9092 (id: 865 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.096  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.154  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.25:9092 (id: 5725 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.197  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.259  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.298  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.86.4:9092 (id: 864 rack: null) is no 
longer the Coordinator. Retrying with new coordinator.
2022-08-10 18:47:46.360  INFO 60 --- [kafka-admin-client-thread | 
adminclient-3] o.a.k.clients.admin.KafkaAdminClient : [AdminClient 
clientId=adminclient-3] Node 192.168.57.24:9092 (id: 5724 rack: null) is no 
longer the Coordinator. 
{code}