Folks,

In the current Hadoop Accelerator design we always process user jobs in a
separate classloader called HadoopClassLoader. It is somewhat special in
that it always loads Hadoop classes from scratch.

This leads to at least two serious problems:
1) Very high permgen/metaspace load. The only workaround is to give the
JVM more permgen.
2) Native Hadoop libraries cannot be used. There are quite a few native
methods in Hadoop, and the corresponding dll/so files are loaded in static
class initializers. Since each HadoopClassLoader loads classes over and
over again, the libraries are loaded several times as well. But Java does
not allow the same native library to be loaded from different
classloaders, so the result is JNI linkage errors (see the sketch below).
For instance, this affects the Snappy compression/decompression library,
which is pretty important in the Hadoop ecosystem.
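
To make problem 2 concrete, here is a minimal self-contained sketch of
the JNI restriction. NativeHolder and the /tmp/classes path are made up
for illustration; assume NativeHolder's static initializer calls
System.loadLibrary("snappy"):

import java.net.URL;
import java.net.URLClassLoader;

public class JniLinkageDemo {
    public static void main(String[] args) throws Exception {
        URL[] cp = { new URL("file:/tmp/classes/") };

        // First loader: loads NativeHolder; its static initializer
        // loads the native library. This succeeds.
        Class.forName("NativeHolder", true, new URLClassLoader(cp, null));

        // Second loader: loads a fresh copy of NativeHolder, whose
        // static initializer tries to load the same library again.
        // The JVM rejects this with:
        //   java.lang.UnsatisfiedLinkError: Native Library ...
        //   already loaded in another classloader
        Class.forName("NativeHolder", true, new URLClassLoader(cp, null));
    }
}

This is exactly what happens when two HadoopClassLoader instances both
initialize the class that loads Snappy.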

Clearly, this isolation via a custom class loader was done on purpose, and
I understand why it is important, for example, for user-defined classes.

But why do we load Hadoop classes (e.g. org.apache.hadoop.fs.FileSystem)
multiple times? Does anyone have a clue?
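
For reference, this is roughly the alternative I have in mind: the
standard child-first pattern with a delegation filter for Hadoop classes.
JobClassLoader is a made-up name for this sketch, not our actual
HadoopClassLoader:

import java.net.URL;
import java.net.URLClassLoader;

public class JobClassLoader extends URLClassLoader {
    public JobClassLoader(URL[] jobClasspath, ClassLoader shared) {
        super(jobClasspath, shared);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
        throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            // JDK and Hadoop classes come from the shared parent,
            // so they exist exactly once per JVM.
            if (name.startsWith("java.")
                || name.startsWith("org.apache.hadoop."))
                return super.loadClass(name, resolve);

            // Everything else is loaded child-first, keeping
            // per-job isolation for user-defined classes.
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = findClass(name);
                }
                catch (ClassNotFoundException e) {
                    c = super.loadClass(name, resolve);
                }
            }
            if (resolve)
                resolveClass(c);
            return c;
        }
    }
}

With a filter like this, Hadoop classes and their native libraries would
be loaded once per JVM, while user job classes would still be isolated
per job.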

Vladimir.
