rpless opened a new issue #11816:
URL: https://github.com/apache/druid/issues/11816


   ### Affected Version
   
   This issue affects 0.22.0, and probably also affects 0.21.x and 0.20.x.
   
   ### Description
   
   Druid currently pins [version 1.3.3-1](https://github.com/apache/druid/blob/master/pom.xml#L450) of zstd (the `com.github.luben:zstd-jni` artifact). However, the Kafka client used by the Kafka Indexing Service requires [version 1.5.0-4](https://github.com/apache/kafka/blob/493280735b418cb7d7cb68cfc2e7f84ca4a552ca/gradle/dependencies.gradle#L117).
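   One possible fix would be to bump the pinned version so it matches what the Kafka client expects. A minimal sketch of what that could look like in the root `pom.xml` is below; the exact location and surrounding structure of the dependency entry in Druid's pom are assumptions, only the coordinates (`com.github.luben:zstd-jni`) come from the stack trace and links above:
   
   ```xml
   <!-- Hypothetical sketch: align Druid's zstd-jni with the version the
        bundled Kafka client was built against (1.5.0-4). -->
   <dependency>
     <groupId>com.github.luben</groupId>
     <artifactId>zstd-jni</artifactId>
     <version>1.5.0-4</version>
   </dependency>
   ```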
   
   When you attempt to ingest data from Kafka that has been compressed with zstd, the indexing task fails with the following exception:
   
   ```
   java.lang.NoClassDefFoundError: com/github/luben/zstd/ZstdInputStreamNoFinalizer
        at org.apache.kafka.common.record.CompressionType$5.wrapForInput(CompressionType.java:127)
        at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:262)
        at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:266)
        at org.apache.kafka.common.record.DefaultRecordBatch.streamingIterator(DefaultRecordBatch.java:350)
        at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.nextFetchedRecord(Fetcher.java:1575)
        at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1612)
        at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1453)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:686)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:637)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1303)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
        at org.apache.druid.indexing.kafka.KafkaRecordSupplier.poll(KafkaRecordSupplier.java:128)
        at org.apache.druid.indexing.kafka.IncrementalPublishingKafkaIndexTaskRunner.getRecords(IncrementalPublishingKafkaIndexTaskRunner.java:95)
        at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.runInternal(SeekableStreamIndexTaskRunner.java:599)
        at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.run(SeekableStreamIndexTaskRunner.java:263)
        at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTask.run(SeekableStreamIndexTask.java:146)
        at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:471)
        at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:443)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   Caused by: java.lang.ClassNotFoundException: com.github.luben.zstd.ZstdInputStreamNoFinalizer
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        ... 23 more
   ```
   
   I have manually linked the 1.5.0-4 zstd jar into a Druid cluster, and this fixes the issue with the Kafka Indexing Service. However, I also had to replace the same jar in both the parquet and avro extensions, and I have not yet verified that those extensions still work.
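
   As a quick way to check which copies of zstd-jni a given classpath actually resolves, a small probe like the one below could be run with each extension's classpath. This is a hypothetical sketch, not part of Druid; the only assumption baked in is the class name from the stack trace (`com.github.luben.zstd.ZstdInputStreamNoFinalizer`), which is present in 1.5.0-4 but absent from 1.3.3-1:
   
   ```java
   // Hypothetical probe: report whether the class the Kafka client needs is
   // visible, and if so, which jar it was loaded from. Run once per
   // extension classpath after swapping jars to confirm the replacement took.
   public class ZstdProbe {
       // Returns the jar location of the class, or "missing" if it cannot be loaded.
       static String probe() {
           try {
               Class<?> c = Class.forName("com.github.luben.zstd.ZstdInputStreamNoFinalizer");
               return "found: " + c.getProtectionDomain().getCodeSource().getLocation();
           } catch (ClassNotFoundException e) {
               return "missing";
           }
       }
   
       public static void main(String[] args) {
           System.out.println(probe());
       }
   }
   ```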


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


