Ben -- every non-local Pig run is compiled into one or more Hadoop jobs,
so you can always use the jobtracker web UI (port 50030 on the jobtracker
host, e.g. http://<jobtracker-host>:50030) to review job status and
progress.  The CLI equivalent is the "hadoop job" command, e.g. "hadoop
job -list".  Since a single Pig run maps to one or more Hadoop jobs, it
can be tricky to match them up; the "SET job.name" directive in your Pig
script helps a bit.
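
For example, a minimal sketch of naming the job(s) from a Pig script --
the script name, paths, and schema below are made-up placeholders, only
the SET directive is the point:

```
-- Give the resulting Hadoop job(s) a recognizable name in the jobtracker UI
SET job.name 'daily-clicks-rollup';

logs = LOAD 'input/clicks' USING PigStorage('\t')
       AS (user:chararray, url:chararray);
grpd = GROUP logs BY url;
cnts = FOREACH grpd GENERATE group AS url, COUNT(logs) AS hits;
STORE cnts INTO 'output/click_counts';
```

Then "hadoop job -list" will show the job(s) under that name, and
"hadoop job -status <job_id>" gives per-job details.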

If you're looking for something higher-level, you might take a look at
LinkedIn's Azkaban, which is primarily a scheduler but also provides job
dependency visualization.

On Fri, Apr 6, 2012 at 12:39 AM, Benjamin Juhn <[email protected]> wrote:

> Does a job web interface exist somewhere out there?  Something to run
> jobs, pickup results, and maybe a cli?
>
> Thanks,
> Ben
