[https://issues.apache.org/jira/browse/KAFKA-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15951626#comment-15951626]
Matthias J. Sax commented on KAFKA-4988:
----------------------------------------
[[email protected]] This might be a RocksDB issue and not a Kafka Streams
issue. Would you mind reaching out to the RocksDB folks and double checking
with them? Cf. a similar issue with AIX:
https://github.com/facebook/rocksdb/issues/2071
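For reference, the root cause is that the RocksDB JNI library is linked
against glibc, while Alpine ships musl (hence the missing
ld-linux-x86-64.so.2). A common workaround is Alpine's glibc compatibility
package; the sketch below is illustrative only (the jar name and path are
placeholders), and as the memcpy crash in the report suggests, the compat
shim may still segfault, so a glibc-based base image remains the safer fix:

```shell
# Hedged workaround sketch for an Alpine-based Docker image.
# libc6-compat provides ld-linux-x86-64.so.2 and shims glibc symbols over
# musl, which lets the RocksDB JNI library load. It does NOT make musl
# fully glibc-compatible: the SIGSEGV in ld-musl's memcpy reported below
# can still occur, so prefer a glibc-based image (e.g. openjdk:8-jre).
apk add --no-cache libc6-compat

# Then run the application jar as usual (path is a placeholder):
java -jar /usr/local/event-counter/app.jar
```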
> JVM crash when running on Alpine Linux
> --------------------------------------
>
> Key: KAFKA-4988
> URL: https://issues.apache.org/jira/browse/KAFKA-4988
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 0.10.2.0
> Reporter: Vincent Rischmann
> Priority: Minor
>
> I'm developing my Kafka Streams application using Docker, and I run my jars
> using the official openjdk:8-jre-alpine image.
> I'm just starting to use windowing, and now the JVM crashes because of what
> I think is an issue with RocksDB.
> It's trivial to fix on my part: just use the Debian jessie-based image.
> However, it would be cool if Alpine were supported too, since its Docker
> images are quite a bit less heavy.
> {quote}
> Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3285995384052305662.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni3285995384052305662.so)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
> at java.lang.Runtime.load0(Runtime.java:809)
> at java.lang.System.load(System.java:1086)
> at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
> at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
> at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
> at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
> at org.rocksdb.Options.<clinit>(Options.java:22)
> at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:115)
> at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:148)
> at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:39)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
> at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
> at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
> at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
> at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:141)
> at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
> at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
> at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
> at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
> at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x00007f60f34ce088, pid=1, tid=0x00007f60f3705ab0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 compressed oops)
> # Derivative: IcedTea 3.3.0
> # Distribution: Custom build (Thu Feb 9 08:34:09 GMT 2017)
> # Problematic frame:
> # C [ld-musl-x86_64.so.1+0x50088] memcpy+0x24
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> # /usr/local/event-counter/hs_err_pid1.log
> #
> # If you would like to submit a bug report, please include
> # instructions on how to reproduce the bug and visit:
> # http://icedtea.classpath.org/bugzilla
> #
> {quote}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)