Or just add the hadoop jars (including dependencies) to your webapp and use the JobClient API to programmatically submit jobs and monitor the cluster.
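A minimal sketch of what that could look like with the JobClient API of that era (the `mapred.*` package). This is illustrative only: the mapper/reducer classes, host names, and paths are invented placeholders, and it assumes the hadoop jars and a matching hadoop-site.xml/JobConf are on the webapp's classpath.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class SubmitFromWebapp {
    public static void runJob() throws Exception {
        JobConf conf = new JobConf(SubmitFromWebapp.class);
        conf.setJobName("webapp-submitted-job");
        // Point at the cluster explicitly instead of relying on the
        // environment that bin/hadoop would normally set up.
        // (Host names are placeholders.)
        conf.set("mapred.job.tracker", "jobtracker.example.com:9001");
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(MyMapper.class);       // placeholder classes
        conf.setReducerClass(MyReducer.class);
        FileInputFormat.setInputPaths(conf, new Path("/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/output"));

        JobClient client = new JobClient(conf);
        // Cluster-wide monitoring, independent of any one job:
        ClusterStatus status = client.getClusterStatus();
        System.out.println("tasktrackers: " + status.getTaskTrackers());

        RunningJob job = client.submitJob(conf);   // non-blocking submit
        while (!job.isComplete()) {                // poll job progress
            System.out.printf("map %.0f%% reduce %.0f%%%n",
                    job.mapProgress() * 100, job.reduceProgress() * 100);
            Thread.sleep(5000);
        }
    }
}
```

Any servlet thread could call `runJob()`; `JobClient.runJob(conf)` is the blocking alternative if you do not need to poll.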
A

On Nov 24, 2007 1:25 AM, Ted Dunning <[EMAIL PROTECTED]> wrote:
>
> The hadoop startup script just makes sure that the classpath has the right
> jars in it and that the right environment variables point to the right
> configuration files.
>
> The easiest way to understand what is happening is to edit the hadoop
> script, go to the line that runs java, duplicate that line, and put an echo
> at the beginning of the first copy. This will tell you what command is
> actually run. You can then replicate that environment inside Tomcat.
>
> Adding a -x option to the #! line at the beginning of the file will have a
> similar effect, but will produce lots of garbage output as well.
>
>
> On 11/23/07 11:50 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
>
> > On Fri, Nov 23, 2007 at 11:37:49AM -0800, Ted Dunning wrote:
> >>
> >> Any java process that can access the machines in the cluster can start a
> >> job.
> >>
> >> That means that any thread in the tomcat in, say, a servlet could start a
> >> job. This would be no different than any of the standard examples such as
> >> word count.
> >
> > Sorry, I'm probably missing something - but aren't the examples started
> > using the hadoop startup script? Or do you mean I can just provide the
> > configuration for the hadoop cluster(s) and start jobs in the same
> > manner, using some of the internal Hadoop classes that do the work of
> > the scripts?
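Ted's echo trick can be sketched like this. The function below is a toy stand-in for the tail end of bin/hadoop (the real script builds a much longer classpath); the paths and heap size are invented for illustration. Prefixing the java line with `echo` prints the exact command so its classpath and system properties can be replicated inside Tomcat:

```shell
# Toy stand-in for bin/hadoop's final "run java" line.
# HADOOP_HOME, the jar name, and the heap size are placeholders.
print_hadoop_cmd() {
    HADOOP_HOME=/opt/hadoop
    CLASSPATH="$HADOOP_HOME/conf:$HADOOP_HOME/hadoop-core.jar"
    JAVA_HEAP_MAX=-Xmx1000m
    # Duplicated line with echo in front: prints what would actually run.
    echo java "$JAVA_HEAP_MAX" -classpath "$CLASSPATH" "$@"
    # Original line (disabled in this sketch):
    # exec java "$JAVA_HEAP_MAX" -classpath "$CLASSPATH" "$@"
}

print_hadoop_cmd org.example.MyJob
```

Alternatively, `sh -x bin/hadoop ...` traces every command the script executes, which is the noisier `-x` approach mentioned above.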
