[jira] [Commented] (KAFKA-7303) Kafka client is stuck when specifying wrong brokers
[ https://issues.apache.org/jira/browse/KAFKA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16586255#comment-16586255 ] Chun Zhang commented on KAFKA-7303:
---
Thanks Manikumar. I will give it a try with 2.0. This bug is a duplicate of several other issues, including KAFKA-1894, and is fixed by KIP-266.
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
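The comment above points at KIP-266, which (from Kafka 2.0 on) added `poll(Duration)` and the `default.api.timeout.ms` setting so blocking consumer calls return after a bound instead of hanging. A sketch of the 2.0-style usage, assuming kafka-clients 2.0+ on the classpath; the timeout values are illustrative, not recommendations:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedPoll {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "issues.apache.org:80");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id_string");
        // KIP-266: bound all blocking consumer operations (illustrative value).
        props.put(ConsumerConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "15000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("mytopic"));
            // Unlike the deprecated poll(long), poll(Duration) returns (with no
            // records) once the timeout expires, even when no broker is reachable.
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("records: " + records.count());
        }
    }
}
```

With the repro's bogus bootstrap address, this version prints `records: 0` after roughly five seconds rather than spinning forever.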
[jira] [Resolved] (KAFKA-7303) Kafka client is stuck when specifying wrong brokers
[ https://issues.apache.org/jira/browse/KAFKA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chun Zhang resolved KAFKA-7303.
---
Resolution: Duplicate
[jira] [Commented] (KAFKA-7303) Kafka client is stuck when specifying wrong brokers
[ https://issues.apache.org/jira/browse/KAFKA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583932#comment-16583932 ] Chun Zhang commented on KAFKA-7303:
---
Thanks. I am actually using the Kafka Streams API, so the call to poll is not controlled by me. Do you know of any workaround for that?
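Since the Streams runtime owns the poll loop, there is no consumer-side knob the application can reach in 1.1.0. One workaround (my suggestion, not an official Kafka API) is to probe each bootstrap server with a bounded TCP connect and fail fast before handing control to `KafkaStreams.start()`. The class and method names below are hypothetical:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Probe the configured bootstrap server before starting the topology;
        // host and port here mirror the bogus address from the repro.
        boolean ok = isReachable("issues.apache.org", 80, 3000);
        System.out.println("bootstrap reachable: " + ok);
        // In a real app: if (!ok) exit instead of calling streams.start().
    }
}
```

Note this only checks TCP reachability, not that the endpoint actually speaks the Kafka protocol (the repro's `issues.apache.org:80` would pass this probe), so it catches dead hosts and firewalled ports rather than every misconfiguration.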
[jira] [Updated] (KAFKA-7303) Kafka client is stuck when specifying wrong brokers
[ https://issues.apache.org/jira/browse/KAFKA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chun Zhang updated KAFKA-7303:
---
Summary: Kafka client is stuck when specifying wrong brokers (was: Kafka client is stuck when specifying wrong brokkers)
[jira] [Created] (KAFKA-7303) Kafka client is stuck when specifying wrong brokkers
Chun Zhang created KAFKA-7303:
---
Summary: Kafka client is stuck when specifying wrong brokkers
Key: KAFKA-7303
URL: https://issues.apache.org/jira/browse/KAFKA-7303
Project: Kafka
Issue Type: Bug
Components: consumer
Affects Versions: 1.1.0
Reporter: Chun Zhang

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaBug {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // intentionally use an irrelevant address
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "issues.apache.org:80");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id_string");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singleton("mytopic"));
        // This call will block forever.
        consumer.poll(1000);
    }
}
{code}

When I run the code above, I keep getting the error log below:

{code:java}
DEBUG [main] (21:21:25,959) - [Consumer clientId=consumer-1, groupId=group_id_string] Connection with issues.apache.org/207.244.88.139 disconnected
java.net.ConnectException: Connection timed out
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:106)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:470)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:261)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:156)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:228)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at com.twosigma.example.kafka.bug.KafkaBug.main(KafkaBug.java:46)
DEBUG [main] (21:21:25,963) - [Consumer clientId=consumer-1, groupId=group_id_string] Node -1 disconnected.
WARN [main] (21:21:25,963) - [Consumer clientId=consumer-1, groupId=group_id_string] Connection to node -1 could not be established. Broker may not be available.
DEBUG [main] (21:21:25,963) - [Consumer clientId=consumer-1, groupId=group_id_string] Give up sending metadata request since no node is available
DEBUG [main] (21:21:26,013) - [Consumer clientId=consumer-1, groupId=group_id_string] Give up sending metadata request since no node is available
{code}

I expect the program to fail when the wrong broker is specified.
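For clients before 2.0, as in the 1.1.0 repro above, no consumer setting bounds this hang; the documented escape hatch is to call `consumer.wakeup()` from another thread, which makes the blocked `poll()` throw `WakeupException` (`wakeup()` is the one `KafkaConsumer` method that is safe to call from outside the polling thread). A generic sketch of that watchdog pattern using only the JDK; the class and method names are hypothetical, and a latch stands in for the hanging `poll()`:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class PollWatchdog {
    /**
     * Runs blockingCall with a deadline: if it has not finished within
     * timeoutMs, onTimeout is invoked from a watchdog thread. For a real
     * KafkaConsumer, onTimeout would be consumer::wakeup, which makes the
     * blocked poll() throw WakeupException on the polling thread.
     */
    public static void runWithDeadline(Runnable blockingCall, Runnable onTimeout,
                                       long timeoutMs) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        Thread watchdog = new Thread(() -> {
            try {
                if (!done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                    onTimeout.run();  // deadline hit: interrupt the blocked call
                }
            } catch (InterruptedException ignored) {
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
        try {
            blockingCall.run();
        } finally {
            done.countDown();  // finished in time: let the watchdog exit quietly
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a poll() that hangs forever; the watchdog fires after 200 ms.
        CountDownLatch hang = new CountDownLatch(1);
        runWithDeadline(
                () -> {
                    try { hang.await(); } catch (InterruptedException ignored) {}
                },
                hang::countDown,  // stands in for consumer::wakeup
                200);
        System.out.println("unblocked by watchdog");
    }
}
```

The watchdog thread only ever calls the wakeup action, never `poll()` itself, which matters because `KafkaConsumer` is not thread-safe for anything except `wakeup()`.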