I have an HDP 1.3 (old, I know...) cluster that's running Pig 0.14 scripts
through Oozie.

There's a rare nuisance that's driving me crazy: the last few lines in
the Oozie Pig launcher log state that Pig started a MapReduce job (or
several) and is tracking their progress, and the log just ends there.
The thing is that the M/R jobs have already finished, but Pig is not
aware of that, so it neither proceeds to the next M/R job in the chain
nor ends the process once the last M/R job has executed.

Looking at the cluster metrics doesn't show any problem with the cluster
(i.e. the JobTracker, TaskTrackers, and HDFS are all up, and none of
them failed while the Pig action was running).
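In case it helps others chime in, here is a sketch of what I'd check next, assuming shell access to the JobTracker and to the node that ran the Oozie launcher task. The job ID below is a made-up placeholder, and grepping for `org.apache.pig.Main` assumes the launcher's Pig child JVM shows that class on its command line:

```shell
# Hypothetical job ID -- substitute one from the launcher log.
# 1. Confirm the JobTracker really considers the child M/R job complete:
hadoop job -status job_201501010000_0042

# 2. On the launcher node, find the PID of the Pig child process
#    spawned by the Oozie launcher map task:
ps aux | grep '[o]rg.apache.pig.Main'

# 3. Take a couple of thread dumps a few seconds apart to see what
#    the launcher is blocked on (e.g. a progress poll to the
#    JobTracker that never returns):
jstack <launcher-pid> > /tmp/pig-launcher-1.jstack
sleep 30
jstack <launcher-pid> > /tmp/pig-launcher-2.jstack
```

Comparing the two thread dumps should show whether the launcher is stuck waiting on an RPC to the JobTracker or spinning in its own progress-polling loop.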

Any idea how to get to the bottom of this?
