Hi Rameshwar, 
One thing to note is that Drill 1.17, which is due to be released in the next 
week or so, has significant updates to the Kafka reader 
(https://github.com/apache/drill/pull/1901).  You might want to wait and try 
again with the updated version. 
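
In case it is useful after upgrading, you can confirm which Drill version you 
are actually running by querying the sys.version system table: 

SELECT * FROM sys.version; 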



> On Dec 4, 2019, at 12:48 AM, Rameshwar Mane <[email protected]> wrote:
> 
> Hi Team,
> 
> Any inputs on Kafka with Kerberos would be very helpful.
> Thanks and Regards
> Rameshwar Mane
> 
> On Dec 3 2019, at 4:47 pm, Rameshwar Mane <[email protected]> wrote:
>> Hi,
>> I am trying to query Kafka using Apache Drill, version 1.16.0. The 
>> authentication service is MIT Kerberos.
>> 
>> Query :
>> SHOW TABLES IN kafka
>> Kafka Storage plugin :
>> {
>>   "type": "kafka",
>>   "kafkaConsumerProps": {
>>     "enable.auto.commit": "true",
>>     "store.kafka.record.reader": "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>>     "streams.consumer.default.stream": "/twitter-data",
>>     "value.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer",
>>     "group.id": "drill-query-consumer-1",
>>     "auto.offset.reset": "earliest",
>>     "session.timeout.ms": "300",
>>     "bootstrap.servers": "x.xx.x.xx:6667,xx.xx.xx.xx:6667",
>>     "key.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer"
>>   },
>>   "enabled": true
>> }
>> Are there any configurations missing from the Kafka storage plugin for a 
>> Kerberized cluster?
>> I don't know how Drill will read the client_jaas.conf file, or how to 
>> provide the Kerberos principal and keytab path.
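>> For reference, my rough understanding (not something I have verified 
>> against Drill 1.16) is that the standard Kafka client SASL/GSSAPI settings 
>> would be added alongside the existing consumer properties, assuming 
>> "kafkaConsumerProps" is passed straight through to the Kafka consumer; the 
>> keytab path and principal below are placeholders:
>> 
>> "security.protocol": "SASL_PLAINTEXT",
>> "sasl.mechanism": "GSSAPI",
>> "sasl.kerberos.service.name": "kafka",
>> "sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/path/to/drill.keytab\" principal=\"drill@EXAMPLE.COM\";"
>> 
>> Alternatively, a client_jaas.conf file could be supplied to each drillbit 
>> JVM via -Djava.security.auth.login.config=/path/to/client_jaas.conf, in 
>> which case "sasl.jaas.config" would not be needed, but I am not sure which 
>> of these the Drill Kafka plugin actually supports.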
>> 
>> Query Response :
>> UserRemoteException : DATA_READ ERROR: Failed to get tables information
>> 
>> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: 
>> Failed to get tables information
>> Failed to construct kafka consumer
>> Fragment 0:0
>> 
>> [Error Id: 73d70db9-0af7-4c9e-8ddd-0ae4f3da440f on xx.xxx.xx.xx.x.xx:31010]
>> The contents of the drillbit.log file are:
>> 2019-12-03 06:30:50,545 [2219fee4-8bc2-df23-2191-cf7facebac64:foreman] INFO 
>> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
>> 2219fee4-8bc2-df23-2191-cf7facebac64 issued by drill: show tables in kafka
>> 2019-12-03 06:30:50,633 [2219fee4-8bc2-df23-2191-cf7facebac64:frag:0:0] INFO 
>> o.a.d.e.s.k.s.KafkaMessageSchema - User Error Occurred: Failed to get tables 
>> information (Failed to construct kafka consumer)
>> org.apache.drill.common.exceptions.UserException: DATA_READ ERROR: Failed to 
>> get tables information
>> 
>> Failed to construct kafka consumer
>> [Error Id: 73d70db9-0af7-4c9e-8ddd-0ae4f3da440f ]
>> at 
>> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:630)
>>  ~[drill-common-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.kafka.schema.KafkaMessageSchema.getTableNames(KafkaMessageSchema.java:81)
>>  [drill-storage-kafka-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.AbstractSchema.getTableNamesAndTypes(AbstractSchema.java:299)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Tables.visitTables(InfoSchemaRecordGenerator.java:340)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:254)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:234)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:146) 
>> [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:142) 
>> [drill-java-exec-1.16.0.jar:1.16.0]
>> at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_151]
>> at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_151]
>> at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>>  [hadoop-common-2.7.4.jar:na]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:142)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:110)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:87) 
>> [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>>  [drill-common-1.16.0.jar:1.16.0]
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  [na:1.8.0_151]
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  [na:1.8.0_151]
>> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
>> Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka 
>> consumer
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:765)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:633)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.drill.exec.store.kafka.schema.KafkaMessageSchema.getTableNames(KafkaMessageSchema.java:77)
>>  [drill-storage-kafka-1.16.0.jar:1.16.0]
>> ... 24 common frames omitted
>> Caused by: java.lang.IllegalArgumentException: Heartbeat must be set lower 
>> than the session timeout
>> at 
>> org.apache.kafka.clients.consumer.internals.Heartbeat.<init>(Heartbeat.java:39)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.<init>(AbstractCoordinator.java:142)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.<init>(ConsumerCoordinator.java:113)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:722)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> ... 27 common frames omitted
>> 2019-12-03 06:30:50,633 [2219fee4-8bc2-df23-2191-cf7facebac64:frag:0:0] INFO 
>> o.a.d.e.w.fragment.FragmentExecutor - 
>> 2219fee4-8bc2-df23-2191-cf7facebac64:0:0: State change requested 
>> AWAITING_ALLOCATION --> FAILED
>> 2019-12-03 06:30:50,633 [2219fee4-8bc2-df23-2191-cf7facebac64:frag:0:0] INFO 
>> o.a.d.e.w.fragment.FragmentExecutor - 
>> 2219fee4-8bc2-df23-2191-cf7facebac64:0:0: State change requested FAILED --> 
>> FINISHED
>> 2019-12-03 06:30:50,635 [2219fee4-8bc2-df23-2191-cf7facebac64:frag:0:0] WARN 
>> o.a.d.exec.rpc.control.WorkEventBus - Fragment 
>> 2219fee4-8bc2-df23-2191-cf7facebac64:0:0 manager is not found in the work 
>> bus.
>> 2019-12-03 06:30:50,638 [qtp289525290-53] ERROR 
>> o.a.d.e.server.rest.QueryResources - Query from Web UI Failed: {}
>> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: 
>> Failed to get tables information
>> 
>> Failed to construct kafka consumer
>> Fragment 0:0
>> 
>> [Error Id: 73d70db9-0af7-4c9e-8ddd-0ae4f3da440f on xx.xx.xx.xx:31010]
>> at org.apache.drill.exec.server.rest.QueryWrapper.run(QueryWrapper.java:126) 
>> ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.server.rest.QueryResources.submitQueryJSON(QueryResources.java:74)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.server.rest.QueryResources.submitQuery(QueryResources.java:90)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
>> ~[na:1.8.0_151]
>> at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>  ~[na:1.8.0_151]
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  ~[na:1.8.0_151]
>> at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
>> at 
>> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
>>  [jersey-server-2.25.1.jar:na]
>> at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326) 
>> [jersey-server-2.25.1.jar:na]
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) 
>> [jersey-common-2.25.1.jar:na]
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) 
>> [jersey-common-2.25.1.jar:na]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:315) 
>> [jersey-common-2.25.1.jar:na]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:297) 
>> [jersey-common-2.25.1.jar:na]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:267) 
>> [jersey-common-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
>>  [jersey-common-2.25.1.jar:na]
>> at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) 
>> [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
>>  [jersey-server-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473) 
>> [jersey-container-servlet-core-2.25.1.jar:na]
>> at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427) 
>> [jersey-container-servlet-core-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>>  [jersey-container-servlet-core-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>>  [jersey-container-servlet-core-2.25.1.jar:na]
>> at 
>> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>>  [jersey-container-servlet-core-2.25.1.jar:na]
>> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) 
>> [jetty-servlet-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) 
>> [jetty-servlet-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524) 
>> [jetty-security-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.apache.drill.exec.server.rest.auth.DrillHttpSecurityHandlerProvider.handle(DrillHttpSecurityHandlerProvider.java:151)
>>  [drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513) 
>> [jetty-servlet-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>>  [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at org.eclipse.jetty.server.Server.handle(Server.java:539) 
>> [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333) 
>> [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) 
>> [jetty-server-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>>  [jetty-io-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) 
>> [jetty-io-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>>  [jetty-io-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>>  [jetty-util-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>>  [jetty-util-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>>  [jetty-util-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>>  [jetty-util-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at 
>> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>>  [jetty-util-9.3.25.v20180904.jar:9.3.25.v20180904]
>> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
>> org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:765)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:633)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.drill.exec.store.kafka.schema.KafkaMessageSchema.getTableNames(KafkaMessageSchema.java:77)
>>  ~[drill-storage-kafka-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.AbstractSchema.getTableNamesAndTypes(AbstractSchema.java:299)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Tables.visitTables(InfoSchemaRecordGenerator.java:340)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:254)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:234)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:146) 
>> ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:142) 
>> ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at .......(:0) ~[na:na]
>> at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>>  ~[hadoop-common-2.7.4.jar:na]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:142)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:110)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:87) 
>> ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263)
>>  ~[drill-java-exec-1.16.0.jar:1.16.0]
>> at 
>> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>>  ~[drill-common-1.16.0.jar:1.16.0]
>> at .......(:0) ~[na:na]
>> Caused by: java.lang.IllegalArgumentException: Heartbeat must be set lower 
>> than the session timeout
>> at 
>> org.apache.kafka.clients.consumer.internals.Heartbeat.<init>(Heartbeat.java:39)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.<init>(AbstractCoordinator.java:142)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.<init>(ConsumerCoordinator.java:113)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> at 
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:722)
>>  ~[kafka-clients-0.11.0.1.jar:na]
>> ... 24 common frames omitted
>> 2019-12-03 06:30:50,643 [2219fee4-8bc2-df23-2191-cf7facebac64:frag:0:0] WARN 
>> o.a.d.e.w.f.QueryStateProcessor - Dropping request to move to COMPLETED 
>> state as query is already at FAILED state (which is terminal).
>> Contents of drillbit_queries.json:
>> {"queryId":"2219d5ff-d7f2-54ce-7609-8c6a730dc22d","schema":"","queryText":"show
>>  tables in 
>> kafka","start":1575365119214,"finish":1575365119246,"outcome":"FAILED","username":"drill","remoteAddress":"xx.xx.xx.xxx:51570"}
>> Let me know if I am missing any configurations for the Kafka storage 
>> plugin. It would be a big help.
>> 
>> Thanks and Regards
>> Rameshwar Mane
>> 
> 
