Hi,
When I checked in the Job Tracker UI, the job is in the retired section and I
cannot retrieve any logs:
Problem accessing /jobdetailshistory.jsp. Reason:
File
/var/log/hadoop-0.20-mapreduce/history/done/server.epicoders.com_1362996434042_/2013/03/14/000000/job_201303111007_0042_1363246172007_hadoopuser_PigLatin%3Ahbasetable.pig
does not exist
Caused by:
java.io.FileNotFoundException: File
/var/log/hadoop-0.20-mapreduce/history/done/server.epicoders.com_1362996434042_/2013/03/14/000000/job_201303111007_0042_1363246172007_hadoopuser_PigLatin%3Ahbasetable.pig
does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:468)
But when I checked the logs manually at
/var/log/hadoop-0.20-mapreduce/userlogs/<job_dir> for a similar job, the
stderr and stdout files are empty and the syslog has no exceptions or errors.
On 15 March 2013 23:08, Rohini Palaniswamy <[email protected]> wrote:
> PIG-3206 fixed an issue when the HBase cluster was secure. You must be
> facing a different issue. We need to see the stack trace in the Hadoop job
> log for the actual error. Click on your workflow in the Oozie UI and it will
> show the tracker URL of the actual Hadoop job. Click on it and look at the
> syslog of the map task.
> On Mar 14, 2013 9:20 PM, "Praveen Bysani" <[email protected]> wrote:
>
> > Hi,
> >
> > The following is the configuration from my core-site.xml and hbase-site.xml:
> >
> > <property>
> > <name>hadoop.security.authentication</name>
> > <value>simple</value>
> > </property>
> > <property>
> > <name>hadoop.rpc.protection</name>
> > <value>authentication</value>
> > </property>
> > <property>
> > <name>hadoop.security.auth_to_local</name>
> > <value>DEFAULT</value>
> > </property>
> >
> > So I guess I may not be using secure Hadoop/HBase. I am not sure what you
> > meant by the log of the Pig launcher job of Hadoop/Oozie. Do you mean the
> > log in the Job Tracker for this job id?
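> >
> > For comparison, my understanding is that a secure cluster would have
> > Kerberos enabled in these files instead, along the lines of the sketch
> > below (standard property names, placeholder values - please correct me if
> > I have this wrong):
> >
> > <!-- core-site.xml on a secure cluster -->
> > <property>
> >   <name>hadoop.security.authentication</name>
> >   <value>kerberos</value>
> > </property>
> >
> > <!-- hbase-site.xml on a secure cluster -->
> > <property>
> >   <name>hbase.security.authentication</name>
> >   <value>kerberos</value>
> > </property>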
> >
> >
> > On 15 March 2013 04:43, Rohini Palaniswamy <[email protected]>
> > wrote:
> >
> > > Hi Praveen,
> > > Are you running a secure cluster - secure Hadoop and HBase? Can you
> > > check what the stack trace is in the Pig launcher job log of Hadoop
> > > Oozie?
> > >
> > > Regards,
> > > Rohini
> > >
> > >
> > > On Thu, Mar 14, 2013 at 2:28 AM, Praveen Bysani <
> [email protected]
> > > >wrote:
> > >
> > > > Hi,
> > > >
> > > > I am trying to run a simple Pig script that uses the HBaseStorage class
> > > > to load data from an HBase table. The Pig script runs perfectly fine
> > > > when run standalone in mapreduce mode. But when I submit it as an
> > > > action in an Oozie workflow, the job always fails. The Oozie job log
> > > > for that workflow gives the following errors, which are not very
> > > > useful:
> > > >
> > > > JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@pig-node] action completed, external ID [job_201303111007_0043]
> > > > 2013-03-14 08:41:22,658 WARN org.apache.oozie.action.hadoop.PigActionExecutor: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@pig-node] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]
> > > > 2013-03-14 08:41:22,833 INFO org.apache.oozie.command.wf.ActionEndXCommand: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@pig-node] ERROR is considered as FAILED for SLA
> > > > 2013-03-14 08:41:22,892 INFO org.apache.oozie.command.wf.ActionStartXCommand: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@fail] Start action [0000004-130312101540251-oozie-oozi-W@fail] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
> > > > 2013-03-14 08:41:22,893 WARN org.apache.oozie.command.wf.ActionStartXCommand: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@fail] [***0000004-130312101540251-oozie-oozi-W@fail***]Action status=DONE
> > > > 2013-03-14 08:41:22,893 WARN org.apache.oozie.command.wf.ActionStartXCommand: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[0000004-130312101540251-oozie-oozi-W@fail] [***0000004-130312101540251-oozie-oozi-W@fail***]Action updated in DB!
> > > > 2013-03-14 08:41:22,997 WARN org.apache.oozie.command.coord.CoordActionUpdateXCommand: USER[hadoopuser] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130312101540251-oozie-oozi-W] ACTION[-] E1100: Command precondition does not hold before execution, [, coord action is null], Error Code: E1100
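> > > >
> > > > For reference, the script is essentially of the following shape (the
> > > > table, column family and field names below are placeholders, not the
> > > > real ones):
> > > >
> > > > -- hbasetable.pig (simplified sketch)
> > > > raw = LOAD 'hbase://my_table'
> > > >       USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
> > > >           'cf:col1 cf:col2', '-loadKey true')
> > > >       AS (rowkey:chararray, col1:chararray, col2:chararray);
> > > > DUMP raw;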
> > > >
> > > >
> > > > I did some searching and found that a patch has been committed for
> > > > this issue, based on the discussion here:
> > > > https://issues.apache.org/jira/browse/PIG-3206. However, I am not sure
> > > > how I can use that patch in my case. Could someone help me resolve
> > > > this?
> > > >
> > > > My current installation versions are as follows:
> > > > Hadoop: CDH 4
> > > > Pig: 0.10
> > > > Oozie: 3.3.0
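> > > >
> > > > For completeness, the Pig action in my workflow.xml looks roughly like
> > > > the simplified sketch below (${jobTracker} and ${nameNode} come from my
> > > > job properties; the "end" node name is a placeholder):
> > > >
> > > > <action name="pig-node">
> > > >     <pig>
> > > >         <job-tracker>${jobTracker}</job-tracker>
> > > >         <name-node>${nameNode}</name-node>
> > > >         <script>hbasetable.pig</script>
> > > >     </pig>
> > > >     <ok to="end"/>
> > > >     <error to="fail"/>
> > > > </action>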
> > > >
> > > > --
> > > > Regards,
> > > > Praveen Bysani
> > > > http://www.praveenbysani.com
> > > >
> > >
> >
> >
> >
> > --
> > Regards,
> > Praveen Bysani
> > http://www.praveenbysani.com
> >
>
--
Regards,
Praveen Bysani
http://www.praveenbysani.com