Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2449
There's another perspective from me: why do we need to associate the hadoop
classpath with Storm at all, when Storm is acting as just a client?
Should we provide a profile in storm-hdfs so that it can be packaged as an
uber jar? If we build an uber jar for storm-hdfs, we can place that jar on the
ext-daemon path, and it would carry far fewer dependencies (it doesn't contain
jersey AFAIK).
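A rough sketch of what such a profile in storm-hdfs's pom.xml might look like
(the profile id and shade configuration here are just placeholders, not a
worked-out proposal):

```xml
<!-- sketch only: profile id, transformers and exclusions are placeholders -->
<profile>
  <id>uber-jar</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <!-- merge META-INF/services entries coming from the hadoop jars -->
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
              </transformers>
              <!-- keep storm-core out: it is already on the daemon classpath -->
              <artifactSet>
                <excludes>
                  <exclude>org.apache.storm:storm-core</exclude>
                </excludes>
              </artifactSet>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Building with `mvn package -P uber-jar` on storm-hdfs would then produce a
single jar to drop onto the ext-daemon path.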
Similar but not identical issues exist in storm-autocreds and topology state.
storm-autocreds addresses the issue by creating an assembly directory, which
users can then link into the ext/ext-daemon path. That lets us manage
dependencies in a more fine-grained way, but it is manual afterwards. One huge
downside of the current approach is that it bundles everything together as
dependencies: HDFS, HBase, and Hive, and what's worse, storm-hdfs, storm-hbase,
and storm-hive all depend on storm-autocreds, so all of those dependencies are
coupled now. Looks like it should be fixed.
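For reference, the manual step today looks roughly like this (paths are from
memory and `$STORM_HOME` is just my shorthand for the install directory, so
treat it as an assumption):

```sh
# copy (or symlink) the storm-autocreds assembly jars onto the daemon classpath;
# as of now this drags in the HDFS/HBase/Hive dependencies all at once
cp external/storm-autocreds/*.jar "$STORM_HOME/extlib-daemon/"
```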
Topology state can work around the issue via the `--artifacts` option when
submitting a topology. That option doesn't exist for daemons, and given that it
relies on the blobstore, it isn't possible for daemons to utilize it anyway.
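For completeness, the submit-time workaround looks like this (the jar name,
main class, and artifact version are placeholders):

```sh
# fetch the connector and its transitive deps at submit time, so the hadoop
# jars never have to live on the Storm daemons' classpath
storm jar my-topology.jar com.example.MyTopology \
  --artifacts "org.apache.storm:storm-hdfs:1.1.1"
```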
---