[
https://issues.apache.org/jira/browse/KAFKA-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15964809#comment-15964809
]
Murad M commented on KAFKA-4988:
--------------------------------
Just faced the same problem here. Running the application in a full-blown Ubuntu
environment gives no problems. When deployed in the official
https://hub.docker.com/_/openjdk/ image using {{FROM openjdk:8-alpine}}, it
crashed in exactly the same way. A quick peek at what is going on shows:
{quote}
/ # ldd /tmp/librocksdbjni2324596304249162547.so
	ldd (0x55ffc73ea000)
	libpthread.so.0 => ldd (0x55ffc73ea000)
	librt.so.1 => ldd (0x55ffc73ea000)
Error loading shared library libstdc++.so.6: No such file or directory (needed by /tmp/librocksdbjni2324596304249162547.so)
	libm.so.6 => ldd (0x55ffc73ea000)
	libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7fe1d1ba3000)
	libc.so.6 => ldd (0x55ffc73ea000)
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni2324596304249162547.so)
Error relocating /tmp/librocksdbjni2324596304249162547.so: _Znam: symbol not found
Error relocating /tmp/librocksdbjni2324596304249162547.so: _ZNSo3putEc: symbol not found
Error relocating /tmp/librocksdbjni2324596304249162547.so: _ZSt18uncaught_exceptionv: symbol not found
Error relocating /tmp/librocksdbjni2324596304249162547.so: _ZSt29_Rb_tree_insert_and_rebalancebPSt18_Rb_tree_node_baseS0_RS_: symbol not found
{quote}
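For anyone debugging a similar failure, a quick way to confirm which libc an image actually ships is to look for the dynamic loaders directly (a sketch; the paths are the standard Alpine and glibc locations):
{code}
docker run --rm openjdk:8-alpine sh -c 'ls /lib/ld-musl-* /lib64/ld-linux-* 2>/dev/null'
# prints only /lib/ld-musl-x86_64.so.1 -- the glibc loader
# ld-linux-x86-64.so.2 that librocksdbjni asks for is absent
{code}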
I suppose librocksdbjni comes from the RocksDB dependency and is compiled
against glibc and libstdc++, while {{openjdk:8-alpine}} ships musl libc.
Switching to {{openjdk:8}}, which is Debian-based, didn't help me either: while
the problem above is solved, other problems showed up in other parts of the
application, this time in Kafka:
{quote}
org.apache.kafka.streams.errors.ProcessorStateException: task directory [/tmp/kafka-streams/<APP>/0_15] doesn't exist and couldn't be created
	at org.apache.kafka.streams.processor.internals.StateDirectory.directoryForTask(StateDirectory.java:75)
	at org.apache.kafka.streams.processor.internals.StateDirectory.lock(StateDirectory.java:102)
	at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:205)
	at org.apache.kafka.streams.processor.internals.StreamThread.maybeClean(StreamThread.java:753)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:664)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
{quote}
I suppose this issue is related neither to Kafka nor to RocksDB, but to the
container environment.
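In case it helps others hitting the {{ProcessorStateException}}: it's worth checking that the default state directory, {{/tmp/kafka-streams}}, sits on a path the container can actually write to, for example by mounting a volume over it (a sketch; the image name and host path are made up):
{code}
# mount a writable host directory over the default Kafka Streams state dir
docker run \
  -v /data/kafka-streams:/tmp/kafka-streams \
  my-streams-app:latest
{code}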
Long story short, I found that the https://hub.docker.com/r/wurstmeister/kafka/
image is based on https://hub.docker.com/r/anapsix/alpine-java/, which works
pretty well on the server side. So I solved this issue by switching from the
{{openjdk}} image to {{anapsix/alpine-java}}.
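For reference, the relevant part of my Dockerfile now looks roughly like this (jar name and paths are placeholders):
{code}
# anapsix/alpine-java ships a glibc compatibility layer on top of Alpine,
# which is presumably why librocksdbjni can resolve ld-linux-x86-64.so.2
# and libstdc++.so.6 there
FROM anapsix/alpine-java
COPY target/my-streams-app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
{code}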
> JVM crash when running on Alpine Linux
> --------------------------------------
>
> Key: KAFKA-4988
> URL: https://issues.apache.org/jira/browse/KAFKA-4988
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 0.10.2.0
> Reporter: Vincent Rischmann
> Priority: Minor
>
> I'm developing my Kafka Streams application using Docker, and I run my jars
> using the official openjdk:8-jre-alpine image.
> I just started using windowing, and now the JVM crashes, I think because of
> an issue with RocksDB.
> It's trivial to fix on my part: just use the Debian jessie based image.
> However, it would be cool if Alpine were supported too, since its Docker
> images are quite a bit lighter.
> {quote}
> Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3285995384052305662.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni3285995384052305662.so)
> 	at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> 	at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
> 	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
> 	at java.lang.Runtime.load0(Runtime.java:809)
> 	at java.lang.System.load(System.java:1086)
> 	at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
> 	at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
> 	at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
> 	at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
> 	at org.rocksdb.Options.<clinit>(Options.java:22)
> 	at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:115)
> 	at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:148)
> 	at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:39)
> 	at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
> 	at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
> 	at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
> 	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
> 	at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
> 	at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:141)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
> 	at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
> 	at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> 	at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
> 	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
> 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
> 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> 	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> 	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
> 	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x00007f60f34ce088, pid=1, tid=0x00007f60f3705ab0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 compressed oops)
> # Derivative: IcedTea 3.3.0
> # Distribution: Custom build (Thu Feb 9 08:34:09 GMT 2017)
> # Problematic frame:
> # C [ld-musl-x86_64.so.1+0x50088] memcpy+0x24
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> # /usr/local/event-counter/hs_err_pid1.log
> #
> # If you would like to submit a bug report, please include
> # instructions on how to reproduce the bug and visit:
> # http://icedtea.classpath.org/bugzilla
> #
> {quote}