I really would like this functionality.  Just a brief aside on what this
buys us.  As it stands now, we have a few main extension points:

   - Custom Java parsers are found via their fully qualified classname
   - Custom Stellar functions are found via a class-level annotation and by
   being dropped on the classpath (a rough sketch follows this list)
   - Custom enrichment adapters (e.g. the geo enrichment) are wired in by
   editing the enrichment topology Flux file
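
To make the Stellar bullet concrete, here is a rough sketch of what that
extension point looks like today. The com.example package and the function
itself are made up, and I'm writing the org.apache.metron.common.dsl package
and annotation fields from memory, so treat this as illustrative rather than
authoritative:

    package com.example.stellar;   // hypothetical package

    import java.util.List;
    import org.apache.metron.common.dsl.Context;
    import org.apache.metron.common.dsl.Stellar;
    import org.apache.metron.common.dsl.StellarFunction;

    // The class-level annotation is what the classpath scan discovers.
    @Stellar(namespace = "EXAMPLE",
             name = "SHOUT",
             description = "Upper-cases its single string argument",
             params = { "input - the string to upper-case" },
             returns = "the upper-cased string")
    public class ShoutFunction implements StellarFunction {

      @Override
      public Object apply(List<Object> args, Context context) {
        Object arg = args.isEmpty() ? null : args.get(0);
        return arg == null ? null : arg.toString().toUpperCase();
      }

      @Override
      public void initialize(Context context) {
        // no setup needed for this example
      }

      @Override
      public boolean isInitialized() {
        return true;
      }
    }

Today a class like that has to be compiled into our uber jar; the proposal is
that it could instead live in its own small jar and just be dropped somewhere
the topology can see.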

Right now the only way to add such things is to compile your code along
with ours and have it placed in the uber jars we submit to Storm.  What
would be better is to expose the interfaces, let developers use them to
build their custom functionality, package just their extension points and
their dependencies, and drop the resulting jar in a directory that gets
picked up the next time the topology starts.

I like the HDFS idea because it means we do not have to ensure a 3rd-party
jar directory is sync'd across the Storm supervisors.  My question is
whether Storm supports pulling external dependencies from HDFS.  Does anyone
know?
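
For contrast, the classpath mechanism Mike describes below boils down to
something like the sketch here (package names assume Storm 1.x, and the jar
path is a made-up local directory).  The point is that every entry has to
resolve locally on each supervisor, which is exactly the syncing burden I'd
like to avoid:

    import org.apache.storm.Config;

    public class ClasspathExample {
      public static Config withThirdPartyJars() {
        Config conf = new Config();
        // Entries in "topology.classpath" get appended to the worker
        // classpath.  /usr/metron/ext_lib is a made-up local directory;
        // every supervisor would need an identical copy of it.
        conf.put("topology.classpath",
                 "/usr/metron/ext_lib/custom-extensions.jar");
        return conf;
      }
    }

If the same entry could point at HDFS (or be staged from HDFS before the
workers launch), the sync problem goes away.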

Thanks for bringing this up, Mike.

Best,

Casey

On Mon, Sep 19, 2016 at 10:08 AM, Michael Miklavcic <
michael.miklav...@gmail.com> wrote:

> As part of https://issues.apache.org/jira/browse/METRON-356 it is now
> possible to add HBase and Hadoop conf to the Storm topology classpath. It
> is also desirable to expand this functionality to support sideloading jars
> for Storm topologies. That way, users can add additional dependencies
> without having to recompile/repackage existing jars. One suggestion is to
> leverage HDFS to store custom jars and add them to the topology.classpath.
> I want to open this discussion to the community.
>
> Best,
>
> Mike
>
