*I have deployed Kafka in Kubernetes using https://github.com/Yolean/kubernetes-kafka, but while consuming with the Kafka consumer I get the following error:*

SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties
[10:23:00]  __________ ________________
[10:23:00]  / _/ ___/ |/ / _/_ __/ __/
[10:23:00] _/ // (7 7 // / / / / _/
[10:23:00] /___/\___/_/|_/___/ /_/ /___/
[10:23:00]
[10:23:00] ver. 1.9.0#20170302-sha1:a8169d0a
[10:23:00] 2017 Copyright(C) Apache Software Foundation
[10:23:00]
[10:23:00] Ignite documentation: http://ignite.apache.org
[10:23:00]
[10:23:00] Quiet mode.
[10:23:00] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[10:23:00]
[10:23:00] OS: Linux 3.10.0-862.11.6.el7.x86_64 amd64
[10:23:00] VM information: OpenJDK Runtime Environment 1.8.0_181-8u181-b13-1~deb9u1-b13 Oracle Corporation OpenJDK 64-Bit Server VM 25.181-b13
[10:23:02] Configured plugins:
[10:23:02] ^-- None
[10:23:02]
[10:23:02] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[10:23:02] Security status [authentication=off, tls/ssl=off]
[10:23:03] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.
[10:23:24] Topology snapshot [ver=8, servers=1, clients=1, CPUs=112, heap=53.0GB]
[10:23:34] Performance suggestions for grid (fix if possible)
[10:23:34] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[10:23:34] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[10:23:34] ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
[10:23:34] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[10:23:34] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[10:23:34] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[10:23:34]
[10:23:34] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[10:23:34]
[10:23:34] Ignite node started OK (id=c10d143b)
[10:23:34] Topology snapshot [ver=7, servers=1, clients=2, CPUs=168, heap=80.0GB]
start creating caches
inside caches {xgboostMainCache=IgniteCacheProxy [delegate=GridDhtAtomicCache [deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@66c83fc8, near=null, super=GridDhtCacheAdapter [multiTxHolder=java.lang.ThreadLocal@ae7950d, super=GridDistributedCacheAdapter [super=GridCacheAdapter [locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@6fd1660, clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@4a6c18ad, aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@5e8604bf, igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false, igfsDataCacheSize=null, igfsDataSpaceMax=0, asyncOpsSem=java.util.concurrent.Semaphore@20095ab4[Permits = 500], name=xgboostMainCache, size=0]]]], opCtx=null],
xgboostTrainedDataColumnSetCache=IgniteCacheProxy [delegate=GridDhtAtomicCache [deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@53e3a87a, near=null, super=GridDhtCacheAdapter [multiTxHolder=java.lang.ThreadLocal@4dafba3e, super=GridDistributedCacheAdapter [super=GridCacheAdapter [locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@546621c4, clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@621f89b8, aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@f339eae, igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false, igfsDataCacheSize=null, igfsDataSpaceMax=0, asyncOpsSem=java.util.concurrent.Semaphore@2822c6ff[Permits = 500], name=xgboostTrainedDataColumnSetCache, size=0]]]], opCtx=null]}
end creating caches
start creating data streamers
end creating data streamers
Launching Prediction Module
41098 [main] INFO kafka.utils.VerifiableProperties - Verifying properties
41527 [main] INFO kafka.utils.VerifiableProperties - Property auto.offset.reset is overridden to smallest
41528 [main] WARN kafka.utils.VerifiableProperties - Property bootstrap.servers is not valid
41528 [main] INFO kafka.utils.VerifiableProperties - Property group.id is overridden to IgniteGroup_1
41528 [main] INFO kafka.utils.VerifiableProperties - Property zookeeper.connect is overridden to zookeeper.kafka:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:rsrc:slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:rsrc:slf4j-simple-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
42290 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Connecting to zookeeper instance at zookeeper.kafka:2181
42315 [ZkClient-EventThread-145-zookeeper.kafka:2181] INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=marvel-client-786884fdc8-z679b
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_181
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=marvelClient.jar
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-862.11.6.el7.x86_64
42332 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
42333 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
42333 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/opt/marvel-files
42334 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper.kafka:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@7efb53af
42361 [main] INFO org.I0Itec.zkclient.ZkClient - Waiting for keeper state SyncConnected
47376 [main-SendThread(zookeeper.kafka.svc.cluster.local:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper.kafka.svc.cluster.local/10.109.79.222:2181. Will not attempt to authenticate using SASL (unknown error)
47379 [main-SendThread(zookeeper.kafka.svc.cluster.local:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper.kafka.svc.cluster.local/10.109.79.222:2181, initiating session
47390 [main-SendThread(zookeeper.kafka.svc.cluster.local:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper.kafka.svc.cluster.local/10.109.79.222:2181, sessionid = 0x1009662f1130001, negotiated timeout = 6000
47394 [main-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
47421 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], starting auto committer every 60000 ms
47476 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], begin registering consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 in ZK
47522 [main] INFO kafka.utils.ZKCheckedEphemeral - Creating /consumers/IgniteGroup_1/ids/IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 (is it secure? false)
47539 [main] INFO kafka.utils.ZKCheckedEphemeral - Result of znode creation is: OK
47539 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], end registering consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 in ZK
47549 [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338_watcher_executor] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], starting watcher executor thread for consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338
47591 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], begin rebalancing consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 try #0
47826 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], exception during rebalance
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"OUTSIDE":"PLAINTEXT","PLAINTEXT":"PLAINTEXT"},"endpoints":["OUTSIDE://kafka-0.broker.kafka.svc.cluster.local:9094","PLAINTEXT://kafka-0.broker.kafka.svc.cluster.local:9092"],"jmx_port":5555,"host":"kafka-0.broker.kafka.svc.cluster.local","timestamp":"1537867771273","port":9094,"version":4}
	at kafka.cluster.Broker$.createBroker(Broker.scala:101)
	at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:587)
	at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:585)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at kafka.utils.ZkUtils.getCluster(ZkUtils.scala:585)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:645)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:637)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:637)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:637)
	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:636)
	at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:977)
	at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:264)
	at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:85)
	at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:97)
	at org.apache.ignite.stream.kafka.KafkaStreamer.start(KafkaStreamer.java:135)
	at com.catpain.perc.marvel.models.xgboost.XgboostPrediction.startIgniteKafkaStreamer(XgboostPrediction.java:239)
	at com.catpain.perc.marvel.models.xgboost.XgboostPrediction.start(XgboostPrediction.java:97)
	at com.catpain.perc.marvel.models.xgboost.XgboostModel.predict(XgboostModel.java:42)
	at com.catpain.perc.marvel.Launcher.main(Launcher.java:211)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.kafka.common.protocol.SecurityProtocol.OUTSIDE
	at java.lang.Enum.valueOf(Enum.java:238)
	at org.apache.kafka.common.protocol.SecurityProtocol.valueOf(SecurityProtocol.java:28)
	at org.apache.kafka.common.protocol.SecurityProtocol.forName(SecurityProtocol.java:89)
	at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
	at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:90)
	at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:89)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at kafka.cluster.Broker$.createBroker(Broker.scala:89)
	... 28 more
47829 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], end rebalancing consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 try #0
47829 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Rebalancing attempt failed. Clearing the cache before the next rebalancing operation is triggered
47834 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] Stopping leader finder thread
47834 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] Stopping all fetchers
47836 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] All connections stopped
47838 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Cleared all relevant queues for this fetcher
47840 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Cleared the data chunks in all the consumer message iterators
47841 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Committing all offsets after clearing the fetcher queues
49844 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], begin rebalancing consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 try #1
49864 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], exception during rebalance
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"OUTSIDE":"PLAINTEXT","PLAINTEXT":"PLAINTEXT"},"endpoints":["OUTSIDE://kafka-0.broker.kafka.svc.cluster.local:9094","PLAINTEXT://kafka-0.broker.kafka.svc.cluster.local:9092"],"jmx_port":5555,"host":"kafka-0.broker.kafka.svc.cluster.local","timestamp":"1537867771273","port":9094,"version":4}
	[stack trace and cause identical to try #0]
53907 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], end rebalancing consumer IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 try #3
53907 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Rebalancing attempt failed. Clearing the cache before the next rebalancing operation is triggered
53907 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] Stopping leader finder thread
53907 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] Stopping all fetchers
53907 [main] INFO kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1537871021697] All connections stopped
53908 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Cleared all relevant queues for this fetcher
53908 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Cleared the data chunks in all the consumer message iterators
53908 [main] INFO kafka.consumer.ZookeeperConsumerConnector - [IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338], Committing all offsets after clearing the fetcher queues
Exception in thread "main" java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: kafka.common.ConsumerRebalanceFailedException: IgniteGroup_1_marvel-client-786884fdc8-z679b-1537871016569-2b7b1338 can't rebalance after 4 retries
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:670)
	at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:977)
	at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:264)
	at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:85)
	at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:97)
	at org.apache.ignite.stream.kafka.KafkaStreamer.start(KafkaStreamer.java:135)
	at com.catpain.perc.marvel.models.xgboost.XgboostPrediction.startIgniteKafkaStreamer(XgboostPrediction.java:239)
	at com.catpain.perc.marvel.models.xgboost.XgboostPrediction.start(XgboostPrediction.java:97)
	at com.catpain.perc.marvel.models.xgboost.XgboostModel.predict(XgboostModel.java:42)
	at com.catpain.perc.marvel.Launcher.main(Launcher.java:211)
	... 5 more
*I have started the Kafka consumer from Apache Ignite version 1.9.0 (using the Ignite Kafka Streamer). The following consumer properties are used:*

ZOOKEEPER_CONNECT_VALUE=zookeeper.kafka:2181
BOOTSTRAP_SERVERS_VALUE=bootstrap.kafka:9092
AUTO_OFFSET_RESET_VALUE=smallest
CONSUMER_GROUP_ID_VALUE=IgniteGroup_1
TOPIC_NAME=PAKDD_27_low_rate

*What might be the issue here and how can I resolve it?*

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
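For context, Ignite 1.9's KafkaStreamer wraps the old ZooKeeper-based high-level consumer, which reads zookeeper.connect rather than bootstrap.servers — consistent with the "Property bootstrap.servers is not valid" warning in the log above. A minimal sketch of how the values above translate into the consumer's java.util.Properties (the class and method names here are illustrative, not the actual application code):

```java
import java.util.Properties;

public class ConsumerProps {
    // Build the properties the ZooKeeper-based high-level consumer expects.
    // Note: bootstrap.servers is intentionally omitted; the old consumer
    // ignores it and warns that the property is not valid.
    static Properties consumerProperties() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zookeeper.kafka:2181"); // ZOOKEEPER_CONNECT_VALUE
        props.put("group.id", "IgniteGroup_1");                 // CONSUMER_GROUP_ID_VALUE
        props.put("auto.offset.reset", "smallest");             // AUTO_OFFSET_RESET_VALUE
        return props;
    }

    public static void main(String[] args) {
        Properties p = consumerProperties();
        // These properties would be passed to kafka.consumer.ConsumerConfig,
        // which KafkaStreamer uses to create its message streams.
        System.out.println(p.getProperty("zookeeper.connect"));
    }
}
```

This sketch only shows the property wiring; it does not reproduce the failure, which occurs later when the consumer parses the broker registration JSON from ZooKeeper.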
