Github user mcgilman commented on the issue:
https://github.com/apache/nifi/pull/2519
@moonkev I think there are a couple of different scenarios here. This is
admittedly pretty complicated, but my understanding is as follows:
- In a case like PutHDFS, snappy compression is supported through the
native Hadoop library. In this case, the native snappy library is not loaded.
The issues seen here are due to multiple classloaders attempting to load the
"hadoop" native library.
- In a case like CompressContent, snappy support is supplied by the native
snappy library, loaded indirectly through snappy-java. More recent versions
effectively bypass the native loading issues by copying the native library
to a temp directory and loading it with a UUID appended to the name. This
allows the native library to be loaded multiple times (since the name is
different each time).
- In a case like PutHiveStreaming, snappy support is also supplied through
snappy-java. The difference is that it's an old version of snappy-java that
does not perform the UUID trick. Forcing an upgrade in this case is tricky
since it is a transitive dependency (PutHiveStreaming -> Hive -> avro ->
snappy). The legacy snappy-java addresses the native library issue by
directly injecting the class that loads the native library into the root
classloader. The ultimate issue here was that multiple processors were
attempting to do this loading at the same time (due to the loading happening
in onTrigger).
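To make the UUID trick concrete, here is a rough sketch of the idea (class and method names are mine, not snappy-java's; the real loader also handles OS/arch-specific library names and extensions):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public class NativeLibExtractor {

    // Copies a bundled native library to a temp file whose name includes a
    // fresh UUID. Because every extraction produces a differently-named file,
    // each classloader can System.load() its own copy without colliding with
    // a copy already loaded by another classloader.
    public static Path extractWithUuid(InputStream libStream, String libName) throws IOException {
        Path target = Files.createTempFile(libName + "-" + UUID.randomUUID(), ".so");
        try {
            Files.copy(libStream, target, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            libStream.close();
        }
        target.toFile().deleteOnExit();
        // The caller would then do: System.load(target.toAbsolutePath().toString());
        return target;
    }
}
```

The key point is that the JVM refuses to load the *same* native file into two classloaders, but has no objection to two differently-named copies of identical bytes.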
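On that last point, the race itself is worth illustrating. A minimal sketch (names hypothetical, not NiFi's actual fix) of guarding a one-time native load so that concurrent onTrigger calls cannot both attempt it:

```java
public final class NativeLoadGuard {

    private static final Object LOCK = new Object();
    private static volatile boolean loaded = false;

    // Serializes the one-time native load so that concurrent onTrigger()
    // calls from multiple processors cannot race into System.loadLibrary().
    public static void ensureLoaded(Runnable loadAction) {
        if (!loaded) {                     // fast path once loading is done
            synchronized (LOCK) {
                if (!loaded) {             // double-checked under the lock
                    loadAction.run();      // e.g. () -> System.loadLibrary("snappy")
                    loaded = true;
                }
            }
        }
    }
}
```

The `volatile` flag plus the double check means the synchronized block is entered at most a handful of times at startup, and the load action itself runs exactly once.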
@mattyb149 Please chime in here if any of my understanding isn't correct.
Hope this helps!
---