Re: Invocation exception

2012-02-29 Thread Mohit Anchlia
Thanks for the example. I did look at the logs and also at the admin page,
and all I see is the exception that I posted initially.

I am not sure why adding an extra jar to the classpath via DistributedCache
causes that exception. I tried to look at the Configuration code in the
hadoop.util package, but it doesn't tell me much. It looks like it's throwing
on this line: configureMethod.invoke(theObject, conf); in the code below.


private static void setJobConf(Object theObject, Configuration conf) {
  // If JobConf and JobConfigurable are in classpath, AND
  // theObject is of type JobConfigurable AND
  // conf is of type JobConf then
  // invoke configure on theObject
  try {
    Class<?> jobConfClass =
        conf.getClassByName("org.apache.hadoop.mapred.JobConf");
    Class<?> jobConfigurableClass =
        conf.getClassByName("org.apache.hadoop.mapred.JobConfigurable");
    if (jobConfClass.isAssignableFrom(conf.getClass()) &&
        jobConfigurableClass.isAssignableFrom(theObject.getClass())) {
      Method configureMethod =
          jobConfigurableClass.getMethod("configure", jobConfClass);
      configureMethod.invoke(theObject, conf);
    }
  } catch (ClassNotFoundException e) {
    // JobConf/JobConfigurable not in classpath. no need to configure
  } catch (Exception e) {
    throw new RuntimeException("Error in configuring object", e);
  }
}
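For anyone else hitting this: the wrapping here is plain java.lang.reflect behavior and can be reproduced outside Hadoop. Below is a minimal sketch (hypothetical class and message, not Hadoop code) showing why the message worth reading is the *cause* of the InvocationTargetException, not the "Error in configuring object" wrapper:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class InvokeDemo {
    // Stand-in for a JobConfigurable whose configure() fails (hypothetical).
    public static class Target {
        public void configure(String conf) {
            throw new IllegalStateException("the real configure-time failure");
        }
    }

    public static void main(String[] args) throws Exception {
        Method configure = Target.class.getMethod("configure", String.class);
        try {
            // Method.invoke wraps anything thrown by the target method
            // in an InvocationTargetException.
            configure.invoke(new Target(), "dummy-conf");
        } catch (InvocationTargetException e) {
            // This is the exception that setJobConf rethrows as
            // RuntimeException("Error in configuring object", e);
            // the useful message lives in getCause().
            System.out.println("wrapper: " + e.getClass().getSimpleName());
            System.out.println("cause:   " + e.getCause().getMessage());
        }
    }
}
```

So in the task attempt logs, the line after "Caused by:" is the one that actually matters.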

On Tue, Feb 28, 2012 at 9:25 PM, Harsh J ha...@cloudera.com wrote:

 Mohit,

 If you visit the failed task attempt on the JT Web UI, you can see the
 complete, informative stack trace on it. It would point the exact line
 the trouble came up in and what the real error during the
 configure-phase of task initialization was.

 A simple attempts page goes like the following (replace job ID and
 task ID of course):


 http://host:50030/taskdetails.jsp?jobid=job_201202041249_3964&tipid=task_201202041249_3964_m_00

 Once there, find and open the "All logs" link to see stdout, stderr,
 and syslog of the specific failed task attempt. You'll have more info
 sifting through this to debug your issue.

 This is also explained in Tom's book under the title "Debugging a Job"
 (p. 154, Hadoop: The Definitive Guide, 2nd ed.).

 On Wed, Feb 29, 2012 at 1:40 AM, Mohit Anchlia mohitanch...@gmail.com
 wrote:
  It looks like adding this line causes invocation exception. I looked in
  hdfs and I see that file in that path
 
   DistributedCache.addFileToClassPath(new Path("/jars/common.jar"), conf);

   I have similar code for another jar,
   DistributedCache.addFileToClassPath(new Path("/jars/analytics.jar"), conf);
   but this works just fine.
 
 
  On Tue, Feb 28, 2012 at 11:44 AM, Mohit Anchlia mohitanch...@gmail.com
 wrote:
 
  I commented reducer and combiner both and still I see the same
 exception.
  Could it be because I have 2 jars being added?
 
   On Mon, Feb 27, 2012 at 8:23 PM, Subir S subir.sasiku...@gmail.com
 wrote:
 
  On Tue, Feb 28, 2012 at 4:30 AM, Mohit Anchlia mohitanch...@gmail.com
  wrote:
 
   For some reason I am getting an invocation exception, and I don't see any
   more details other than this exception:
  
   My job is configured as:
  
  
   JobConf conf = new JobConf(FormMLProcessor.class);
   conf.addResource("hdfs-site.xml");
   conf.addResource("core-site.xml");
   conf.addResource("mapred-site.xml");
   conf.set("mapred.reduce.tasks", "0");
   conf.setJobName("mlprocessor");
   DistributedCache.addFileToClassPath(new Path("/jars/analytics.jar"), conf);
   DistributedCache.addFileToClassPath(new Path("/jars/common.jar"), conf);
   conf.setOutputKeyClass(Text.class);
   conf.setOutputValueClass(Text.class);
   conf.setMapperClass(Map.class);
   conf.setCombinerClass(Reduce.class);
   conf.setReducerClass(IdentityReducer.class);
  
 
  Why would you set the reducer when the number of reducers is set to zero?
  Not sure if this is the real cause.
 
 
  
   conf.setInputFormat(SequenceFileAsTextInputFormat.class);
   conf.setOutputFormat(TextOutputFormat.class);
   FileInputFormat.setInputPaths(conf, new Path(args[0]));
   FileOutputFormat.setOutputPath(conf, new Path(args[1]));
   JobClient.runJob(conf);
  
   -
   java.lang.RuntimeException: Error in configuring object
   at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
   at org.apache.hadoop.mapred.Child.main(Child.java:264)
   Caused

Re: Invocation exception

2012-02-29 Thread Harsh J
Mohit,

I'm positive the real exception lies a few scrolls below that message
on the attempt page. Possibly a class not found issue.

The message you see on top is when something throws up an exception
while being configure()-ed. It is most likely a job config or
setup-time issue from your code or from the library code.
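To make that concrete: below is a minimal sketch (plain Java; the class name is hypothetical) of what the buried cause typically looks like when a jar added via the distributed cache never actually reaches the task's classpath:

```java
public class MissingClassDemo {
    public static void main(String[] args) {
        try {
            // Hypothetical class that would live in a jar such as common.jar;
            // loading it fails when that jar is absent from the classpath.
            Class.forName("com.example.common.SomeHelper");
            System.out.println("class found");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException: " + e.getMessage());
        }
    }
}
```

If the "Caused by:" line on the attempt page names a ClassNotFoundException for something inside common.jar, that would point at the jar not reaching the task classpath.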

On Thu, Mar 1, 2012 at 5:19 AM, Mohit Anchlia mohitanch...@gmail.com wrote:

Re: Invocation exception

2012-02-28 Thread Mohit Anchlia
I commented reducer and combiner both and still I see the same exception.
Could it be because I have 2 jars being added?

On Mon, Feb 27, 2012 at 8:23 PM, Subir S subir.sasiku...@gmail.com wrote:




Re: Invocation exception

2012-02-28 Thread Mohit Anchlia
It looks like adding this line causes invocation exception. I looked in
hdfs and I see that file in that path

DistributedCache.addFileToClassPath(new Path("/jars/common.jar"), conf);

I have similar code for another jar,
DistributedCache.addFileToClassPath(new Path("/jars/analytics.jar"), conf);
but this works just fine.


On Tue, Feb 28, 2012 at 11:44 AM, Mohit Anchlia mohitanch...@gmail.com wrote:




Re: Invocation exception

2012-02-28 Thread Harsh J
Mohit,

If you visit the failed task attempt on the JT Web UI, you can see the
complete, informative stack trace on it. It would point the exact line
the trouble came up in and what the real error during the
configure-phase of task initialization was.

A simple attempts page goes like the following (replace job ID and
task ID of course):

http://host:50030/taskdetails.jsp?jobid=job_201202041249_3964&tipid=task_201202041249_3964_m_00

Once there, find and open the "All logs" link to see stdout, stderr,
and syslog of the specific failed task attempt. You'll have more info
sifting through this to debug your issue.

This is also explained in Tom's book under the title "Debugging a Job"
(p. 154, Hadoop: The Definitive Guide, 2nd ed.).

On Wed, Feb 29, 2012 at 1:40 AM, Mohit Anchlia mohitanch...@gmail.com wrote:

-- 
Harsh J


Re: Invocation exception

2012-02-28 Thread Subir S
Sorry I missed this email.
Harsh's answer is apt. Please see the error log from the JobTracker web UI for
the failed tasks (mapper/reducer) to find the exact reason.

On Tue, Feb 28, 2012 at 10:23 AM, Mohit Anchlia mohitanch...@gmail.com wrote:

 Does it matter if the reducer is set even if the number of reducers is 0?
 Is there a way to get a clearer reason?

 On Mon, Feb 27, 2012 at 8:23 PM, Subir S subir.sasiku...@gmail.com
 wrote:



Re: Invocation exception

2012-02-27 Thread Subir S
On Tue, Feb 28, 2012 at 4:30 AM, Mohit Anchlia mohitanch...@gmail.com wrote:

 For some reason I am getting an invocation exception, and I don't see any
 more details other than this exception:

 My job is configured as:


 JobConf conf = new JobConf(FormMLProcessor.class);
 conf.addResource("hdfs-site.xml");
 conf.addResource("core-site.xml");
 conf.addResource("mapred-site.xml");
 conf.set("mapred.reduce.tasks", "0");
 conf.setJobName("mlprocessor");
 DistributedCache.addFileToClassPath(new Path("/jars/analytics.jar"), conf);
 DistributedCache.addFileToClassPath(new Path("/jars/common.jar"), conf);
 conf.setOutputKeyClass(Text.class);
 conf.setOutputValueClass(Text.class);
 conf.setMapperClass(Map.class);
 conf.setCombinerClass(Reduce.class);
 conf.setReducerClass(IdentityReducer.class);


Why would you set the reducer when the number of reducers is set to zero?
Not sure if this is the real cause.



 conf.setInputFormat(SequenceFileAsTextInputFormat.class);
 conf.setOutputFormat(TextOutputFormat.class);
 FileInputFormat.setInputPaths(conf, new Path(args[0]));
 FileOutputFormat.setOutputPath(conf, new Path(args[1]));
 JobClient.runJob(conf);

 -
 java.lang.RuntimeException: Error in configuring object
 at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
 at org.apache.hadoop.mapred.Child.main(Child.java:264)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.jav



Re: Invocation exception

2012-02-27 Thread Mohit Anchlia
Does it matter if the reducer is set even if the number of reducers is 0?
Is there a way to get a clearer reason?

On Mon, Feb 27, 2012 at 8:23 PM, Subir S subir.sasiku...@gmail.com wrote:



Re: Invocation exception

2012-02-27 Thread Mohit Anchlia
On Mon, Feb 27, 2012 at 8:58 PM, Prashant Kommireddi prash1...@gmail.com wrote:

 Tom White's Definitive Guide book is a great reference. Answers to
 most of your questions could be found there.

I've been through that book but haven't come across how to debug this
exception. Can you point me to the topic in that book where I'll find this
information?


 Sent from my iPhone

 On Feb 27, 2012, at 8:54 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
