Hey guys -

Our topology writes data into both HBase and HDFS. Because of that, it has to 
find the location of the Hadoop/HBase masters in the "hadoop-site.xml" and 
"hbase-site.xml" configuration files.

For non-Storm applications that we run, we simply make sure those files are on 
the application's classpath. That way we can run the same program both in our 
lab and in our production environment, and it locates the right master servers 
based on whichever configuration files it finds.

However, we couldn't find a way to do the same for Storm. We are forced to 
place the files inside the topology's JAR file, and hence to build two 
different JARs - one for production and one for lab.

Is there a way to make Storm load those files from the Worker server's local 
disks instead of packing them up inside the topology?

Thanks!
Noam
