Github user revans2 commented on the pull request:

    https://github.com/apache/storm/pull/600#issuecomment-115269037
  
    @tedxia sorry, I was wrong.  hadoop-hdfs itself depends on commons-codec.
    The issue is around the packaging of storm-hdfs.
    
    Most users who want to use storm-hdfs in their topologies will include it
    as a dependency and package it as part of an uber-jar.  This pulls in
    commons-codec correctly.  The issue is if you want storm-hdfs on the
    classpath by default.  If all you do is take the storm-hdfs jar and put it
    in the lib directory, you will get these errors.  If you don't want your
    users to have to package storm-hdfs with their topology, then you need to
    put storm-hdfs, and all of its dependencies, in the lib directory, or set
    STORM_EXT_CLASSPATH before launching the various daemons.  The latter is
    nice because it lets you potentially query a hadoop install already on the
    box for its classpath instead of trying to figure it all out yourself.
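    
    For example, something along these lines (a sketch, assuming a Hadoop
    install whose `hadoop classpath` command is on the PATH):
    
    ```sh
    # Ask the local Hadoop install for its classpath and hand it to Storm,
    # so storm-hdfs can find hadoop-hdfs, commons-codec, etc.
    export STORM_EXT_CLASSPATH="$(hadoop classpath)"
    
    # Launch the daemons with that classpath in their environment.
    storm nimbus &
    storm supervisor &
    ```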
    
    Alternatively, you could set things up so storm-hdfs is only on the
    daemons' classpath and does not pollute the worker classpath.  You can do
    this by using STORM_EXT_CLASSPATH_DAEMON and/or the extlib-daemon
    directory.  What we really need is better documentation about how to
    install and set up storm-hdfs.
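    
    Roughly (a sketch; the jar glob and the STORM_HOME location are
    placeholders for wherever your install actually lives):
    
    ```sh
    # Option 1: drop the jars into extlib-daemon, which only the daemons load.
    cp storm-hdfs-*.jar "$STORM_HOME/extlib-daemon/"
    
    # Option 2: point the daemon-only classpath at an existing Hadoop install.
    export STORM_EXT_CLASSPATH_DAEMON="$(hadoop classpath)"
    ```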

