[jira] [Updated] (KAFKA-3992) InstanceAlreadyExistsException Error for Consumers Starting in Parallel
[ https://issues.apache.org/jira/browse/KAFKA-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Cook updated KAFKA-3992:
----------------------------------
    Affects Version/s: 0.10.0.0

> InstanceAlreadyExistsException Error for Consumers Starting in Parallel
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-3992
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3992
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.0, 0.10.0.0
>            Reporter: Alexander Cook
>            Assignee: Ewen Cheslack-Postava
>
> I see the following error sometimes when I start multiple consumers at about
> the same time in the same process (separate threads). Everything seems to
> work fine afterwards, so should this not actually be an ERROR level message,
> or could there be something going wrong that I don't see?
> Let me know if I can provide any more info!
>
> Error processing messages: Error registering mbean
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> Caused by: javax.management.InstanceAlreadyExistsException:
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>
> Here is the full stack trace:
> M[?:com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples:-1] -
> Error processing messages: Error registering mbean
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>     at org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
>     at org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
>     at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>     at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>     at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>     at org.apache.kafka.common.network.Selector$SelectorMetrics.maybeRegisterConnectionMetrics(Selector.java:641)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:268)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:126)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:186)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:857)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
>     at com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples(KafkaConsumerV9.java:129)
>     at com.ibm.streamsx.messaging.kafka.KafkaConsumerV9$1.run(KafkaConsumerV9.java:70)
>     at java.lang.Thread.run(Thread.java:785)
>     at com.ibm.streams.operator.internal.runtime.OperatorThreadFactory$2.run(OperatorThreadFactory.java:137)
> Caused by: javax.management.InstanceAlreadyExistsException:
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>     at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:449)
>     at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1910)
>     at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:978)
>     at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:912)
>     at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:336)
>     at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:534)
>     at org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
>     ... 18 more

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
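The root `InstanceAlreadyExistsException` above simply means that two MBeans were registered under the same `ObjectName`. A minimal stand-alone sketch (not Kafka code; `DuplicateMBean` and its `Demo` bean are hypothetical names) that reproduces the same exception against the platform MBean server:

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DuplicateMBean {
    // Standard MBean: interface name must be <ClassName>MBean.
    public interface DemoMBean { int getValue(); }
    public static class Demo implements DemoMBean {
        public int getValue() { return 42; }
    }

    // Registering two beans under the identical ObjectName reproduces the
    // InstanceAlreadyExistsException wrapped by the KafkaException above.
    public static boolean registerTwice() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1");
        server.registerMBean(new Demo(), name);
        try {
            server.registerMBean(new Demo(), name); // second registration fails
            return false;
        } catch (InstanceAlreadyExistsException expected) {
            return true;  // same root cause as in the stack trace
        } finally {
            server.unregisterMBean(name);
        }
    }
}
```

Two consumers that both end up with `client-id=consumer-1` hit exactly this path inside `JmxReporter`.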
[jira] [Reopened] (KAFKA-3992) InstanceAlreadyExistsException Error for Consumers Starting in Parallel
[ https://issues.apache.org/jira/browse/KAFKA-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Cook reopened KAFKA-3992:
-----------------------------------

I have moved to 0.10 of the Kafka consumer client and I am still seeing this issue. I believe that in cases where we do not provide a client id, Kafka generates one for us. The problem seems to be that the client.id generation in Kafka is not thread-safe: when I start 3 consumers in parallel (all at the same time), one of them succeeds, but the other two fail with the error in this issue because they are also assigned the "consumer-1" id. However, if I use the exact same code but start a single consumer at a time with a delay in between, I can successfully consume in parallel.

> InstanceAlreadyExistsException Error for Consumers Starting in Parallel
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-3992
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3992
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.0
>            Reporter: Alexander Cook
>            Assignee: Ewen Cheslack-Postava
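The race described above (two consumers both receiving "consumer-1") is consistent with a non-atomic read-then-increment on a shared client-id counter. A minimal, hypothetical sketch of the difference, not Kafka's actual implementation — `ClientIdSequence` and its method names are invented for illustration:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientIdSequence {
    // Racy: two threads can read the same value before either writes back,
    // handing out the same "consumer-N" id twice (the symptom in this issue).
    private static int unsafeCounter = 0;
    static String unsafeNextId() { return "consumer-" + (++unsafeCounter); }

    // Safe: getAndIncrement is a single atomic step, so every caller
    // observes a distinct value even under contention.
    private static final AtomicInteger counter = new AtomicInteger(1);
    static String nextId() { return "consumer-" + counter.getAndIncrement(); }

    // Generate n ids from `threads` threads and return the distinct set.
    static Set<String> generate(int threads, int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Set<String> ids = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < n; i++) {
            pool.submit(() -> ids.add(nextId()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return ids;
    }
}
```

With the atomic counter, `generate(8, 1000)` always yields 1000 distinct ids; the racy variant can collide. Setting an explicit, unique `client.id` per consumer also sidesteps the generated-id collision.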
[jira] [Created] (KAFKA-3992) InstanceAlreadyExistsException Error for Consumers Starting in Parallel
Alexander Cook created KAFKA-3992:
----------------------------------

             Summary: InstanceAlreadyExistsException Error for Consumers Starting in Parallel
                 Key: KAFKA-3992
                 URL: https://issues.apache.org/jira/browse/KAFKA-3992
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 0.9.0.0
            Reporter: Alexander Cook

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected
[ https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332736#comment-15332736 ]

Alexander Cook commented on KAFKA-3822:
---------------------------------------

I got to try this out today, and you are correct. This only happens when enable.auto.commit=true. max.block.ms would be great. Would that cover consumer.poll as well?

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected
> ----------------------------------------------------------------------------------
>
>                 Key: KAFKA-3822
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3822
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1, 0.10.0.0
>        Environment: x86 Red Hat 6 (1 broker running zookeeper locally, client running on a separate server)
>            Reporter: Alexander Cook
>
> I am using the KafkaConsumer java client to consume messages. My application
> shuts down smoothly if I am connected to a Kafka broker, or if I never
> succeed at connecting to a Kafka broker, but if the broker is shut down while
> my consumer is connected to it, consumer.close() hangs indefinitely.
> Here is how I reproduce it:
> 1. Start 0.9.0.1 Kafka Broker
> 2. Start consumer application and consume messages
> 3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
> 4. Try to stop application...hangs at consumer.close() indefinitely.
> I also see this same behavior using 0.10 broker and client.
> This is my first bug reported to Kafka, so please let me know if I should be
> following a different format. Thanks!

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
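Until a max.block.ms-style bound exists for close(), a potentially-hanging shutdown call can be bounded externally from the application side. A hedged sketch of the general pattern with plain JDK concurrency — `BoundedClose` and its `Closable` interface are stand-ins, not the Kafka API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedClose {
    // Stand-in for a client whose close() can block forever once the
    // remote endpoint is gone (as described above for consumer.close()).
    interface Closable { void close() throws InterruptedException; }

    // Run close() on a daemon thread and give up after the timeout,
    // so application shutdown cannot hang indefinitely.
    static boolean closeWithTimeout(Closable c, long timeoutMs) {
        ExecutorService ex = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // a hung close() must not keep the JVM alive
            return t;
        });
        try {
            ex.submit(() -> { c.close(); return null; })
              .get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;   // closed cleanly within the timeout
        } catch (TimeoutException e) {
            return false;  // gave up; resources may leak but the JVM can exit
        } catch (Exception e) {
            return false;
        } finally {
            ex.shutdownNow(); // interrupts the worker if it is still blocked
        }
    }
}
```

This only bounds the caller's wait; it does not make the underlying close complete. Later Kafka client versions added a timed close overload for exactly this reason.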
[jira] [Updated] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected
[ https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Cook updated KAFKA-3822:
----------------------------------
    Affects Version/s: 0.10.0.0

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected
> ----------------------------------------------------------------------------------
>
>                 Key: KAFKA-3822
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3822
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1, 0.10.0.0
>            Reporter: Alexander Cook

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected
[ https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Cook updated KAFKA-3822:
----------------------------------
    Description:

I am using the KafkaConsumer java client to consume messages. My application shuts down smoothly if I am connected to a Kafka broker, or if I never succeed at connecting to a Kafka broker, but if the broker is shut down while my consumer is connected to it, consumer.close() hangs indefinitely.

Here is how I reproduce it:
1. Start 0.9.0.1 Kafka Broker
2. Start consumer application and consume messages
3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
4. Try to stop application...hangs at consumer.close() indefinitely.

I also see this same behavior using 0.10 broker and client.

This is my first bug reported to Kafka, so please let me know if I should be following a different format. Thanks!

    was:

I am using the KafkaConsumer java client to consume messages. My application shuts down smoothly if I am connected to a Kafka broker, or if I never succeed at connecting to a Kafka broker, but if the broker is shut down while my consumer is connected to it, consumer.close() hangs indefinitely.

Here is how I reproduce it:
1. Start 0.9.0.1 Kafka Broker
2. Start consumer application and consume messages
3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
4. Try to stop application...hangs at consumer.close() indefinitely.

I am going to try this out on 0.10 to see if the same thing happens.

This is my first bug reported to Kafka, so please let me know if I should be following a different format. Thanks!
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)