Hello all,

I have a Hadoop-related application that is integrated with HDFS and is started 
from the command line with "hadoop jar ...".  The amount of data the application 
handles varies from use case to use case, so I need to adjust the heap size of 
the JVM that the "hadoop jar" command launches.  Normally you would just pass 
the -Xmx and -Xms flags, as you would with a plain "java -jar" invocation, but 
that doesn't seem to work here.
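Concretely, here is the kind of thing I am trying (the jar name below is just a placeholder for illustration):

```shell
# Sizing the heap for a plain Java app works as expected
# (jar name is hypothetical):
java -Xms512m -Xmx4g -jar my-hdfs-app.jar

# But the same flags seem to have no effect when the JVM is
# launched through the hadoop wrapper script:
hadoop jar my-hdfs-app.jar -Xms512m -Xmx4g
```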

Does anyone know how I can set the heap size for that JVM?  Note that this is 
unrelated to the JVM size for map and reduce tasks - there is no MapReduce 
involved in my application.

Thanks in advance!

--Adam

PS - I imagine I could code my application to hook into HDFS directly, or read 
the Hadoop configuration files by hand - but I would prefer Hadoop to do all 
the work for me!
