[jira] [Resolved] (MAPREDUCE-5667) Error in runtime in mapreduce code

2013-12-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved MAPREDUCE-5667.
---

Resolution: Duplicate

 Error in runtime in mapreduce code
 --

 Key: MAPREDUCE-5667
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5667
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 Hi,
 While executing code that takes an XML file as input in MapReduce,
 the following errors occurred:
 Error: java.lang.ClassNotFoundException: org.jdom.input.SAXBuilder
 Error: java.lang.ClassNotFoundException: org.jdom.JDOMException
 I am using Hadoop 0.20 and Java 1.6.
 I used jdom-1.0.jar but the error still occurs.
 Please help with this issue and suggest which version of the jar I should use.
 Thanks in advance.
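A note on the failure mode: a ClassNotFoundException at task runtime usually means the jdom jar is on the submitting client's classpath but was never shipped to the task JVMs; common fixes are bundling the jar inside the job jar's lib/ directory or passing it with -libjars (when the driver uses ToolRunner). The class-loading behaviour can be sketched with a minimal, hedged probe (plain JDK, no Hadoop required; the class name is taken from the error above):

```java
// Minimal sketch: a ClassNotFoundException just means the named class's jar is
// absent from the classpath of the JVM doing the loading. On a Hadoop cluster
// the loading JVM is the task JVM, not the submitting client, so the jar must
// be shipped with the job (e.g. in the job jar's lib/ directory or -libjars),
// not merely installed on the client machine.
public class ClasspathProbe {
    public static void main(String[] args) {
        try {
            Class.forName("org.jdom.input.SAXBuilder");
            System.out.println("jdom present on this JVM's classpath");
        } catch (ClassNotFoundException e) {
            // Without jdom-1.0.jar on the classpath this branch runs --
            // the same condition the task JVMs reported above.
            System.out.println("jdom missing: " + e.getMessage());
        }
    }
}
```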
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-5667) Error in runtime in mapreduce code

2013-12-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838751#comment-13838751
 ] 

Steve Loughran commented on MAPREDUCE-5667:
---

Duplicate of the MAPREDUCE-5664 issue you filed yesterday. Please don't file 
multiple JIRAs. 



[jira] [Commented] (MAPREDUCE-5664) java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException:

2013-12-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838754#comment-13838754
 ] 

Steve Loughran commented on MAPREDUCE-5664:
---

You need to move up to a recent version of Hadoop: the fixes for your problem 
are already in the codebase. 

If you can't move up, look at HADOOP-5254 for one route to identifying and 
working around the XML parser versioning issue *on your own machine*.
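The parser clash described above can be checked locally. The following hedged diagnostic sketch (plain JDK, no Hadoop required) prints which DocumentBuilderFactory implementation the JAXP lookup resolves and whether it accepts XInclude, the feature Hadoop's Configuration loader failed on; an old Xerces jar on the classpath typically wins the lookup and lacks the feature:

```java
import javax.xml.parsers.DocumentBuilderFactory;

// Diagnostic sketch: report which JAXP DocumentBuilderFactory the JVM resolves
// and whether it supports XInclude. Hadoop's Configuration loader requires
// XInclude support; a pre-XInclude Xerces jar shadowing the JDK's built-in
// parser produces the ParserConfigurationException shown in this issue.
public class ParserProbe {
    public static void main(String[] args) {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        System.out.println("impl: " + factory.getClass().getName());
        try {
            factory.setXIncludeAware(true);
            System.out.println("xinclude supported: " + factory.isXIncludeAware());
        } catch (UnsupportedOperationException e) {
            // Parsers that predate XInclude reject the setting.
            System.out.println("xinclude NOT supported: " + e);
        }
    }
}
```

On a stock JDK with no stray parser jars this should report the built-in implementation and XInclude support; a third-party class name here points at the jar to remove or replace.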


 java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException:
 ---

 Key: MAPREDUCE-5664
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5664
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 Hi,
 I am using Hadoop 0.21 and Java 1.6. Please help me fix this 
 issue. Which version of the jar should I use? 
 The sample code and XML are attached here.
 {code}
 <?xml version="1.0"?>
 <Company>
 <Employee>
 <id>100</id>
 <ename>ranjini</ename>
 <dept>IT</dept>
 <sal>123456</sal>
 <location>nextlevel</location>
 </Employee>
 </Company>
 {code}
 {code}
 import java.io.IOException;
 import java.util.*;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.conf.*;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.io.*;
 import org.apache.hadoop.mapred.*;
 import org.apache.hadoop.util.*;
 import java.io.*;
 import org.apache.hadoop.mapred.lib.*;
 import java.io.Reader;
 import java.io.StringReader;
 import org.jdom.Document;
 import org.jdom.Element;
 import org.jdom.JDOMException;
 import org.jdom.input.SAXBuilder;

 public class ParseXml {

   public static class Map extends MapReduceBase
       implements Mapper<LongWritable, Text, Text, Text> {

     public void map(LongWritable key, Text value,
         OutputCollector<Text, Text> output, Reporter reporter)
         throws IOException {

       String s = "";
       FileSystem fs = null;
       Configuration conf = new Configuration();
       conf.set("fs.default.name", "hdfs://localhost:4440/");
       Path srcpath = new Path("/user/hduser/Ran/");
       try {
         String xmlString = value.toString();
         SAXBuilder builder = new SAXBuilder();
         Reader in = new StringReader(xmlString);
         Document doc = builder.build(in);
         Element root = doc.getRootElement();
         s = root.getChild("Employee").getChild("id").getChild("ename")
             .getChild("dept").getChild("sal").getChild("location").getTextTrim();
         output.collect(new Text(""), new Text(s));
       } catch (Exception e) {
         e.printStackTrace();
       }
     }
   }

   public static void main(String[] args) throws Exception {
     String input = "/user/hduser/Ran/";
     String fileoutput = "/user/task/Sales/";
     JobConf conf = new JobConf(ParseXml.class);
     conf.setJobName("file");
     conf.setOutputKeyClass(Text.class);
     conf.setOutputValueClass(Text.class);
     conf.setNumReduceTasks(1);
     conf.setMapperClass(Map.class);
     conf.setInputFormat(TextInputFormat.class);
     conf.setOutputFormat(TextOutputFormat.class);
     FileInputFormat.setInputPaths(conf, input);
     Path outPath = new Path(fileoutput);
     FileOutputFormat.setOutputPath(conf, outPath);
     FileSystem dfs = FileSystem.get(outPath.toUri(), conf);
     if (dfs.exists(outPath)) {
       dfs.delete(outPath, true);
     }
     //conf.setOutputFormat(MultiFileOutput.class);
     JobClient.runJob(conf);
   }
 }
 {code}
 When processing an XML file as input via MapReduce, the error that occurred is 
 {code}
 conf.Configuration: error parsing conf file: 
 javax.xml.parsers.ParserConfigurationException: Feature 
 'http://apache.org/xml/features/xinclude' is not recognized.
 Exception in thread "main" java.lang.RuntimeException: 
 javax.xml.parsers.ParserConfigurationException: Feature 
 'http://apache.org/xml/features/xinclude' is not recognized.
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1171)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1030)
   at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
   at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)

[jira] [Resolved] (MAPREDUCE-5664) java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException:

2013-12-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved MAPREDUCE-5664.
---

Resolution: Duplicate

Resolving as a duplicate of HADOOP-5254; the issue here, an out-of-date XML 
parser on the classpath, is handled in Hadoop 1.2+. Upgrade Hadoop, or 
configure the JVM to use the XML parser built into Java 1.6.
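"Configure the JVM to use the built-in XML parser" can, under the standard JAXP lookup rules, be done with system properties naming the JDK's own factory classes. A hedged sketch follows: the property names are standard JAXP, but the com.sun.* class names are JDK-internal and version-dependent, so treat them as an assumption to verify on your JVM:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;

// Sketch: the JAXP lookup consults these system properties first, so setting
// them (here, or as -D flags on the command line) steers factory resolution
// away from an old Xerces jar on the classpath. The com.sun.* values are the
// JDK's internal implementations -- assumed names, JDK-specific.
public class ForceBuiltinParser {
    public static void main(String[] args) {
        System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
            "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
        System.setProperty("javax.xml.parsers.SAXParserFactory",
            "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
        System.out.println("DOM factory: "
            + DocumentBuilderFactory.newInstance().getClass().getName());
        System.out.println("SAX factory: "
            + SAXParserFactory.newInstance().getClass().getName());
    }
}
```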


[jira] [Updated] (MAPREDUCE-5664) java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException:

2013-12-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated MAPREDUCE-5664:
--

  Environment: Java 1.6
Affects Version/s: 0.21.0


[jira] [Created] (MAPREDUCE-5668) Exception in thread main java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected

2013-12-04 Thread ranjini (JIRA)
ranjini created MAPREDUCE-5668:
--

 Summary: Exception in thread main 
java.lang.IncompatibleClassChangeError: Found class 
org.apache.hadoop.mapreduce.JobContext, but interface was expected
 Key: MAPREDUCE-5668
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5668
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini


Hi,

I wrote this code, and at runtime I got this issue:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class 
org.apache.hadoop.mapreduce.JobContext, but interface was expected
at 
org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:170)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at MultiFileWordCount.run(MultiFileWordCount.java:395)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at MultiFileWordCount.main(MultiFileWordCount.java:401)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
hduser@localhost:~$ 
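For context on this error: "Found class ..., but interface was expected" means the code was compiled against a Hadoop generation where org.apache.hadoop.mapreduce.JobContext is an interface (2.x) but is running against one where it is a class (0.20/1.x), or vice versa; the fix is to compile and run against the same Hadoop generation. A hedged reflection probe (plain JDK; prints a fallback when Hadoop jars are absent) shows which side the runtime classpath provides:

```java
// Diagnostic sketch: an IncompatibleClassChangeError of this form indicates a
// binary mismatch -- code compiled against one Hadoop API generation running
// against another. In Hadoop 0.20/1.x JobContext is a class; in 2.x it became
// an interface. This probe reports which variant the runtime classpath supplies.
public class JobContextProbe {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("org.apache.hadoop.mapreduce.JobContext");
            System.out.println(c.getName() + " is "
                + (c.isInterface() ? "an interface (2.x-style API)"
                                   : "a class (0.20/1.x-style API)"));
        } catch (ClassNotFoundException e) {
            // No Hadoop jars on the classpath of this JVM.
            System.out.println("Hadoop not on classpath: " + e.getMessage());
        }
    }
}
```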


I have attached the code.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;
import org.apache.hadoop.util.LineReader;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * MultiFileWordCount is an example to demonstrate the usage of
 * MultiFileInputFormat. This example counts the occurrences of
 * words in the text files under the given input directory.
 */
public class MultiFileWordCount extends Configured implements Tool {

  /**
   * This record keeps <filename, offset> pairs.
   */
  public static class WordOffset implements WritableComparable {

    private long offset;
    private String fileName;

    public void readFields(DataInput in) throws IOException {
      this.offset = in.readLong();
      this.fileName = Text.readString(in);
    }

    public void write(DataOutput out) throws IOException {
      out.writeLong(offset);
      Text.writeString(out, fileName);
    }

    public int compareTo(Object o) {
      WordOffset that = (WordOffset) o;
      int f = this.fileName.compareTo(that.fileName);
      if (f == 0) {
        return (int) Math.signum((double) (this.offset - that.offset));
      }
      return f;
    }

    @Override
    public boolean equals(Object obj) {
      if (obj instanceof WordOffset)
        return this.compareTo(obj) == 0;
      return false;
    }

    @Override
    public int hashCode() {
      assert false : "hashCode not designed";
      return 42; // an arbitrary constant
    }
  }

  /**
   * To use {@link CombineFileInputFormat}, one should extend it, to return a
   * (custom) {@link RecordReader}. CombineFileInputFormat uses
   * {@link CombineFileSplit}s.
   */
  public static class MyInputFormat
      extends CombineFileInputFormat<WordOffset, Text> {

    public RecordReader<WordOffset, Text> createRecordReader(InputSplit split,
        TaskAttemptContext context) throws IOException {
      return new CombineFileRecordReader<WordOffset, Text>(
          (CombineFileSplit) split, context, CombineFileLineRecordReader.class);
    }
  }

  /**
   * RecordReader is

[jira] [Updated] (MAPREDUCE-5668) Exception in thread main java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected

2013-12-04 Thread ranjini (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ranjini updated MAPREDUCE-5668:
---

Description: 
Hi, please help.


[jira] [Commented] (MAPREDUCE-5645) TestFixedLengthInputFormat fails with native libs

2013-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838807#comment-13838807
 ] 

Hudson commented on MAPREDUCE-5645:
---

FAILURE: Integrated in Hadoop-Yarn-trunk #411 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/411/])
MAPREDUCE-5645. TestFixedLengthInputFormat fails with native libs (Mit Desai 
via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547624)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFixedLengthInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFixedLengthInputFormat.java


 TestFixedLengthInputFormat fails with native libs
 -

 Key: MAPREDUCE-5645
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5645
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Jonathan Eagles
Assignee: Mit Desai
  Labels: native
 Fix For: 3.0.0, 2.4.0

 Attachments: MAPREDUCE-5645.patch


 mvn clean install -Pnative -DskipTests
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 mvn clean test -Dtest=TestFixedLengthInputFormat
 Running org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat
 Tests run: 8, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 39.957 sec 
 <<< FAILURE! - in org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat
 testGzipWithTwoInputs(org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat)
   Time elapsed: 0.029 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:397)
   at org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.testGzipWithTwoInputs(TestFixedLengthInputFormat.java:229)
 testFormatCompressedIn(org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat)
   Time elapsed: 0.01 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.createFile(TestFixedLengthInputFormat.java:261)
   at org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.runRandomTests(TestFixedLengthInputFormat.java:314)
   at org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.testFormatCompressedIn(TestFixedLengthInputFormat.java:96)
 Running org.apache.hadoop.mapred.TestFixedLengthInputFormat
 Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 46.151 sec 
 <<< FAILURE! - in org.apache.hadoop.mapred.TestFixedLengthInputFormat
 testPartialRecordCompressedIn(org.apache.hadoop.mapred.TestFixedLengthInputFormat)
   Time elapsed: 0.031 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at org.apache.hadoop.mapred.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:357)
   at org.apache.hadoop.mapred.TestFixedLengthInputFormat.runPartialRecordTest(TestFixedLengthInputFormat.java:386)
   at org.apache.hadoop.mapred.TestFixedLengthInputFormat.testPartialRecordCompressedIn(TestFixedLengthInputFormat.java:182)
 testGzipWithTwoInputs(org.apache.hadoop.mapred.TestFixedLengthInputFormat)
   Time elapsed: 0.009 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at org.apache.hadoop.mapred.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:357)
   at org.apache.hadoop.mapred.TestFixedLengthInputFormat.testGzipWithTwoInputs(TestFixedLengthInputFormat.java:201)
 testFormatCompressedIn(org.apache.hadoop.mapred.TestFixedLengthInputFormat)
   Time elapsed: 0.017 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 

[jira] [Commented] (MAPREDUCE-5645) TestFixedLengthInputFormat fails with native libs

2013-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838872#comment-13838872
 ] 

Hudson commented on MAPREDUCE-5645:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1628 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1628/])
MAPREDUCE-5645. TestFixedLengthInputFormat fails with native libs (Mit Desai 
via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547624)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFixedLengthInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFixedLengthInputFormat.java



[jira] [Commented] (MAPREDUCE-5645) TestFixedLengthInputFormat fails with native libs

2013-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838890#comment-13838890
 ] 

Hudson commented on MAPREDUCE-5645:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1602 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1602/])
MAPREDUCE-5645. TestFixedLengthInputFormat fails with native libs (Mit Desai 
via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547624)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFixedLengthInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFixedLengthInputFormat.java


 TestFixedLengthInputFormat fails with native libs
 -

 Key: MAPREDUCE-5645
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5645
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Jonathan Eagles
Assignee: Mit Desai
  Labels: native
 Fix For: 3.0.0, 2.4.0

 Attachments: MAPREDUCE-5645.patch


 mvn clean install -Pnative -DskipTests
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 mvn clean test -Dtest=TestFixedLengthInputFormat
 Running org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat
 Tests run: 8, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 39.957 sec 
  FAILURE! - in 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat
 testGzipWithTwoInputs(org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat)
   Time elapsed: 0.029 sec   ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 
 org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:397)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.testGzipWithTwoInputs(TestFixedLengthInputFormat.java:229)
 testFormatCompressedIn(org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat)
   Time elapsed: 0.01 sec   ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 
 org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.createFile(TestFixedLengthInputFormat.java:261)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.runRandomTests(TestFixedLengthInputFormat.java:314)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestFixedLengthInputFormat.testFormatCompressedIn(TestFixedLengthInputFormat.java:96)
 Running org.apache.hadoop.mapred.TestFixedLengthInputFormat
 Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 46.151 sec 
  FAILURE! - in org.apache.hadoop.mapred.TestFixedLengthInputFormat
 testPartialRecordCompressedIn(org.apache.hadoop.mapred.TestFixedLengthInputFormat)
   Time elapsed: 0.031 sec   ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 
 org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at 
 org.apache.hadoop.mapred.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:357)
   at 
 org.apache.hadoop.mapred.TestFixedLengthInputFormat.runPartialRecordTest(TestFixedLengthInputFormat.java:386)
   at 
 org.apache.hadoop.mapred.TestFixedLengthInputFormat.testPartialRecordCompressedIn(TestFixedLengthInputFormat.java:182)
 testGzipWithTwoInputs(org.apache.hadoop.mapred.TestFixedLengthInputFormat)  
 Time elapsed: 0.009 sec   ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 
 org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:162)
   at 
 org.apache.hadoop.mapred.TestFixedLengthInputFormat.writeFile(TestFixedLengthInputFormat.java:357)
   at 
 org.apache.hadoop.mapred.TestFixedLengthInputFormat.testGzipWithTwoInputs(TestFixedLengthInputFormat.java:201)
 testFormatCompressedIn(org.apache.hadoop.mapred.TestFixedLengthInputFormat)  
 Time elapsed: 0.017 sec   ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:65)
   at 
 

[jira] [Updated] (MAPREDUCE-5655) Remote job submit from windows to a linux hadoop cluster fails due to wrong classpath

2013-12-04 Thread Attila Pados (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Pados updated MAPREDUCE-5655:


Attachment: YARNRunner.patch
MRApps.patch

These two patches intend to fix different job-launch/job-execution OS problems 
in MapReduce. They also require adding the following property to 
mapred-site.xml (or mapred-default.xml):

<property>
  <name>mapred.remote.os</name>
  <value>Linux</value>
  <description>Remote MapReduce framework's OS, can be either Linux or 
Windows</description>
</property>

This is used when the job is launched from Windows and executed on Linux.
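The idea behind the mapred.remote.os switch can be sketched as follows. This is a hypothetical illustration, not the actual patch code: the class and method names (RemoteOsClasspath, separator, envVar, buildClasspath) are invented for the example. The point is that the classpath delimiter and the shell-variable syntax are chosen from the configured *target* OS rather than from the submitting JVM's own os.name.

```java
// Hypothetical sketch of OS-aware classpath generation, as the patch intends:
// pick the delimiter and variable syntax from the remote cluster's OS.
public class RemoteOsClasspath {

    // Windows containers expect ';' as the classpath delimiter, Linux ':'.
    static String separator(String remoteOs) {
        return "Windows".equalsIgnoreCase(remoteOs) ? ";" : ":";
    }

    // Shell-variable syntax also differs: %VAR% on Windows, $VAR on Linux.
    static String envVar(String remoteOs, String name) {
        return "Windows".equalsIgnoreCase(remoteOs) ? "%" + name + "%" : "$" + name;
    }

    static String buildClasspath(String remoteOs, String... entries) {
        return String.join(separator(remoteOs), entries);
    }

    public static void main(String[] args) {
        // A Windows client submitting to a Linux cluster builds Linux-style paths:
        String home = envVar("Linux", "HADOOP_MAPRED_HOME");
        System.out.println(buildClasspath("Linux",
                home + "/share/hadoop/mapreduce/*",
                home + "/share/hadoop/mapreduce/lib/*"));
    }
}
```

With mapred.remote.os set to Linux, a Windows submitter would then emit `:`-delimited, `$VAR`-style entries into launch_container.sh instead of the `;` and `%VAR%` forms described in the bug report.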

 Remote job submit from windows to a linux hadoop cluster fails due to wrong 
 classpath
 -

 Key: MAPREDUCE-5655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client, job submission
Affects Versions: 2.2.0
 Environment: Client machine is a Windows 7 box, with Eclipse
 Remote: there is a multi node hadoop cluster, installed on Ubuntu boxes (any 
 linux)
Reporter: Attila Pados
 Attachments: MRApps.patch, YARNRunner.patch


 I was trying to run a java class on my client, windows 7 developer 
 environment, which submits a job to the remote Hadoop cluster, initiates a 
 mapreduce there, and then downloads the results back to the local machine.
 General use case is to use hadoop services from a web application installed 
 on a non-cluster computer, or as part of a developer environment.
 The problem was, that the ApplicationMaster's startup shell script 
 (launch_container.sh) was generated with wrong CLASSPATH entry. Together with 
 the java process call on the bottom of the file, these entries were generated 
 in windows style, using % as shell variable marker and ; as the CLASSPATH 
 delimiter.
 I tracked down the root cause, and found that the MrApps.java, and the 
 YarnRunner.java classes create these entries, and is passed forward to the 
 ApplicationMaster, assuming that the OS that runs these classes will match 
 the one running the ApplicationMaster. But it's not the case, these are in 2 
 different jvm, and also the OS can be different, the strings are generated 
 based on the client/submitter side's OS.
 I made some workaround changes to these 2 files, so i could launch my job, 
 however there may be more problems ahead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-5655) Remote job submit from windows to a linux hadoop cluster fails due to wrong classpath

2013-12-04 Thread Attila Pados (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838908#comment-13838908
 ] 

Attila Pados commented on MAPREDUCE-5655:
-

I checked the patches of MAPREDUCE-4052; the patch for ContainerLaunch.java 
modifies code at line 136, while the same code in the 2.2.0 version begins at 
line 173. So I think the patch for 0.23.x is not compatible with 2.2.0, and 
vice versa. The two bugs originate from the same problem but are also 
different: the first problem I faced was that launch_container.sh tried to 
start java with %JAVA_HOME%, and I got a task that returned -1 or a similar 
error, because the shell script could not be executed.

My patch fixes this issue too.

There is a config entry in mapred-default.xml:

<property>
  <description>...</description>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,
    $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>

This probably needs to be altered as well, due to the $VAR vs. %VAR% 
difference between Windows and Linux.
If you set up your environment to run the job locally on Windows, you need to 
set it to the %VAR% form, but when the job will run on a Linux cluster, it 
must be set back to the $VAR form, which I think is the default.

The patch may cause a unit-test failure if mapred.remote.os is set and the 
test runs a job in local mode. I have not tested this, because I had several 
other issues with running the unit tests, so I skipped it.

I have to repeat that this is merely a workaround; deeper changes would 
probably be needed so that launch_container.sh is created by a Java component 
running on the cluster side instead of the client side, but I don't feel 
capable of doing that.

 Remote job submit from windows to a linux hadoop cluster fails due to wrong 
 classpath
 -

 Key: MAPREDUCE-5655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client, job submission
Affects Versions: 2.2.0
 Environment: Client machine is a Windows 7 box, with Eclipse
 Remote: there is a multi node hadoop cluster, installed on Ubuntu boxes (any 
 linux)
Reporter: Attila Pados
 Attachments: MRApps.patch, YARNRunner.patch


 I was trying to run a java class on my client, windows 7 developer 
 environment, which submits a job to the remote Hadoop cluster, initiates a 
 mapreduce there, and then downloads the results back to the local machine.
 General use case is to use hadoop services from a web application installed 
 on a non-cluster computer, or as part of a developer environment.
 The problem was, that the ApplicationMaster's startup shell script 
 (launch_container.sh) was generated with wrong CLASSPATH entry. Together with 
 the java process call on the bottom of the file, these entries were generated 
 in windows style, using % as shell variable marker and ; as the CLASSPATH 
 delimiter.
 I tracked down the root cause, and found that the MrApps.java, and the 
 YarnRunner.java classes create these entries, and is passed forward to the 
 ApplicationMaster, assuming that the OS that runs these classes will match 
 the one running the ApplicationMaster. But it's not the case, these are in 2 
 different jvm, and also the OS can be different, the strings are generated 
 based on the client/submitter side's OS.
 I made some workaround changes to these 2 files, so i could launch my job, 
 however there may be more problems ahead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-4052) Windows eclipse cannot submit job from Windows client to Linux/Unix Hadoop cluster.

2013-12-04 Thread Attila Pados (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838918#comment-13838918
 ] 

Attila Pados commented on MAPREDUCE-4052:
-

I checked the patches here; I was curious about alternative solutions to the 
similar (or same) problem in 2.2.0.
However, I could not tell from either the patches or the problem description 
whether the java call in the shell script included the right $JAVA_HOME for a 
Linux cluster, or whether it was %JAVA_HOME%, as I experienced on the 2.2.0 
version.

(I didn't investigate the 0.23.x case at all; I only downloaded the 2.2.0 
source and binary, so there are probably more differences between the 
processes.)

 Windows eclipse cannot submit job from Windows client to Linux/Unix Hadoop 
 cluster.
 ---

 Key: MAPREDUCE-4052
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4052
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 0.23.1, 2.2.0
 Environment: client on Windows, the cluster on SUSE
Reporter: xieguiming
Assignee: xieguiming
 Attachments: MAPREDUCE-4052-0.patch, MAPREDUCE-4052.patch


 when I use Eclipse on Windows to submit the job, the ApplicationMaster 
 throws the exception:
 Exception in thread "main" java.lang.NoClassDefFoundError: 
 org/apache/hadoop/mapreduce/v2/app/MRAppMaster
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
 Could not find the main class: 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster.  Program will exit.
 The reason is:
 the addToEnvironment function of class Apps uses
 private static final String SYSTEM_PATH_SEPARATOR =
   System.getProperty("path.separator");
 which results in the MRAppMaster classpath using the ';' separator.
 I suggest that the NodeManager do the replacement.
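The suggested NodeManager-side replacement could look roughly like this. This is a hypothetical sketch, not actual Hadoop code: the class and method names (ClasspathTranslator, toLinuxStyle) are invented for illustration. It rewrites a Windows-style CLASSPATH (';' delimiters, %VAR% markers) into the form a Linux shell expects.

```java
// Hypothetical sketch: let the receiving node rewrite a submitter-generated
// Windows-style classpath into its own OS's conventions.
public class ClasspathTranslator {

    static String toLinuxStyle(String windowsClasspath) {
        return windowsClasspath
                .replace(';', ':')                                  // classpath delimiter
                .replaceAll("%([A-Za-z_][A-Za-z0-9_]*)%", "\\$$1"); // %VAR% -> $VAR
    }

    public static void main(String[] args) {
        System.out.println(toLinuxStyle(
                "%HADOOP_COMMON_HOME%/share/hadoop/common/*;%JAVA_HOME%/lib/tools.jar"));
    }
}
```

Doing this on the node that actually launches the container, rather than on the client, would make the generated launch_container.sh independent of the submitter's OS.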



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-5668) Exception in thread main java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected

2013-12-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838917#comment-13838917
 ] 

Jason Lowe commented on MAPREDUCE-5668:
---

This is something more appropriately asked on the [user@ mailing 
list|http://hadoop.apache.org/mailing_lists.html#User].  JIRAs are for 
reporting bugs against Hadoop, and this does not appear to be a bug.  It looks 
like the code has been compiled against a 2.x release but then run against a 
1.x release, as org.apache.hadoop.mapreduce.JobContext changed from a class to 
an interface between the 1.x and 2.x releases.

The org.apache.hadoop.mapreduce.* API is only guaranteed to be source, not 
binary, compatible between the 1.x and 2.x releases.   See the [binary 
compatibility 
document|http://hadoop.apache.org/docs/r2.2.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html]
 for more details.  Also we cannot generally support compiling against a later 
release and then running on an earlier release, because new APIs could have 
been added and would not appear in the older release.

 Exception in thread main java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
 -

 Key: MAPREDUCE-5668
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5668
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 hi
  pl help
 I wrote this code; at runtime I got this issue:
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
   at 
 org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:170)
   at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
   at MultiFileWordCount.run(MultiFileWordCount.java:395)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
   at MultiFileWordCount.main(MultiFileWordCount.java:401)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 hduser@localhost:~$ 
 I have attached the code.
 import java.io.DataInput;  
 import java.io.DataOutput;  
 import java.io.IOException;  
 import java.util.StringTokenizer;  
 import org.apache.hadoop.conf.Configured;  
 import org.apache.hadoop.fs.FSDataInputStream;  
 import org.apache.hadoop.fs.FileSystem;  
 import org.apache.hadoop.fs.Path;  
 import org.apache.hadoop.io.IntWritable;  
 import org.apache.hadoop.io.Text;  
 import org.apache.hadoop.io.WritableComparable;  
 import org.apache.hadoop.mapreduce.InputSplit;  
 import org.apache.hadoop.mapreduce.Job;  
 import org.apache.hadoop.mapreduce.Mapper;  
 import org.apache.hadoop.mapreduce.RecordReader;  
 import org.apache.hadoop.mapreduce.TaskAttemptContext;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;  
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;  
 import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;  
 import org.apache.hadoop.util.LineReader;  
 import org.apache.hadoop.util.Tool;  
 import org.apache.hadoop.util.ToolRunner;  
  /**  
   * MultiFileWordCount is an example to demonstrate the usage of   
   * MultiFileInputFormat. This examples counts the occurrences of  
   * words in the text files under the given input directory.  
   */ 
 public class MultiFileWordCount extends Configured implements Tool {  
/**  
 * This record keeps filename,offset pairs.  
 */ 
 public static class WordOffset implements WritableComparable {  
private long offset;  
private String fileName;  
   
public void readFields(DataInput in) throws IOException {  
   this.offset = in.readLong();  
   this.fileName = Text.readString(in);  
  }  
  public void write(DataOutput out) throws IOException {  
out.writeLong(offset);  

[jira] [Commented] (MAPREDUCE-5655) Remote job submit from windows to a linux hadoop cluster fails due to wrong classpath

2013-12-04 Thread Attila Pados (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838923#comment-13838923
 ] 

Attila Pados commented on MAPREDUCE-5655:
-

The vice versa case (Linux client, Windows cluster) is not handled by this patch.

 Remote job submit from windows to a linux hadoop cluster fails due to wrong 
 classpath
 -

 Key: MAPREDUCE-5655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client, job submission
Affects Versions: 2.2.0
 Environment: Client machine is a Windows 7 box, with Eclipse
 Remote: there is a multi node hadoop cluster, installed on Ubuntu boxes (any 
 linux)
Reporter: Attila Pados
 Attachments: MRApps.patch, YARNRunner.patch


 I was trying to run a java class on my client, windows 7 developer 
 environment, which submits a job to the remote Hadoop cluster, initiates a 
 mapreduce there, and then downloads the results back to the local machine.
 General use case is to use hadoop services from a web application installed 
 on a non-cluster computer, or as part of a developer environment.
 The problem was, that the ApplicationMaster's startup shell script 
 (launch_container.sh) was generated with wrong CLASSPATH entry. Together with 
 the java process call on the bottom of the file, these entries were generated 
 in windows style, using % as shell variable marker and ; as the CLASSPATH 
 delimiter.
 I tracked down the root cause, and found that the MrApps.java, and the 
 YarnRunner.java classes create these entries, and is passed forward to the 
 ApplicationMaster, assuming that the OS that runs these classes will match 
 the one running the ApplicationMaster. But it's not the case, these are in 2 
 different jvm, and also the OS can be different, the strings are generated 
 based on the client/submitter side's OS.
 I made some workaround changes to these 2 files, so i could launch my job, 
 however there may be more problems ahead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (MAPREDUCE-5666) org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java(org/apache/hadoop/mapreduce/lib/input:FileInputFormat.java):cannot find symbol

2013-12-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved MAPREDUCE-5666.
---

Resolution: Invalid

As I mentioned in MAPREDUCE-5668, JIRA is not the avenue to use for questions 
like this.  Please use the [user@ mailing 
list|http://hadoop.apache.org/mailing_lists.html#User] for questions like this. 
 If after discussing on the mailing list it ends up being a bug in Hadoop then 
a JIRA can be filed at that time.

This is clearly a case of code being compiled against a release after 1.x but 
then run on a 0.20 or 1.x release.  The FileStatus.isDirectory() method was not 
present in the 0.20 or 1.x releases.  We cannot generally support compiling 
code against a later release and then running it on an earlier release because 
of new APIs that can be added.

 org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java(org/apache/hadoop/mapreduce/lib/input:FileInputFormat.java):cannot
  find symbol
 -

 Key: MAPREDUCE-5666
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5666
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 hi 
 I have written the below code and am facing the issue. I am using Hadoop 0.20 
 and Java 1.6. The issue is:
 org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java(org/apache/hadoop/mapreduce/lib/input:FileInputFormat.java):232:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
   if (globStat.isDirectory()) {
   ^
 org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java(org/apache/hadoop/mapreduce/lib/output:FileOutputCommitter.java):208:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
 } else if(fs.getFileStatus(taskOutput).isDirectory()) {
   ^
 org/apache/hadoop/mapred/JobConf.java(org/apache/hadoop/mapred:JobConf.java):433:
  cannot find symbol
 symbol  : method getPattern(java.lang.String,java.util.regex.Pattern)
 location: class org.apache.hadoop.mapred.JobConf
 return getPattern(JobContext.JAR_UNPACK_PATTERN, 
 UNPACK_JAR_PATTERN_DEFAULT);
^
 org/apache/hadoop/mapred/JobConf.java(org/apache/hadoop/mapred:JobConf.java):450:
  cannot find symbol
 symbol  : method getTrimmedStrings(java.lang.String)
 location: class org.apache.hadoop.mapred.JobConf
 return getTrimmedStrings(MRConfig.LOCAL_DIR);
^
 org/apache/hadoop/mapred/FileInputFormat.java(org/apache/hadoop/mapred:FileInputFormat.java):165:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
   if (stat.isDirectory()) {
   ^
 org/apache/hadoop/mapred/FileInputFormat.java(org/apache/hadoop/mapred:FileInputFormat.java):215:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
   if (globStat.isDirectory()) {
   ^
 org/apache/hadoop/mapred/FileInputFormat.java(org/apache/hadoop/mapred:FileInputFormat.java):218:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
   if (recursive && stat.isDirectory()) {
^
 org/apache/hadoop/mapred/FileInputFormat.java(org/apache/hadoop/mapred:FileInputFormat.java):258:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
   if (file.isDirectory()) {
   ^
 org/apache/hadoop/mapred/FileOutputCommitter.java(org/apache/hadoop/mapred:FileOutputCommitter.java):166:
  cannot find symbol
 symbol  : method isDirectory()
 location: class org.apache.hadoop.fs.FileStatus
 } else if(fs.getFileStatus(taskOutput).isDirectory()) {
   ^
 org/apache/hadoop/mapred/LineRecordReader.java(org/apache/hadoop/mapred:LineRecordReader.java):100:
  incompatible types
 found   : org.apache.hadoop.io.compress.SplitCompressionInputStream
 required: org.apache.hadoop.fs.Seekable
 filePosition = cIn; // take pos from compressed stream
^
 org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java(org/apache/hadoop/mapreduce/lib/input:LineRecordReader.java):98:
  incompatible types
 found   : org.apache.hadoop.io.compress.SplitCompressionInputStream
 required: org.apache.hadoop.fs.Seekable
 filePosition = cIn;
 I have attached the code 
 import java.io.DataInput;  
 import java.io.DataOutput;  
 import java.io.IOException;  
 import java.util.StringTokenizer;  
 import org.apache.hadoop.conf.Configured;  
 import org.apache.hadoop.fs.FSDataInputStream;  
 import 

[jira] [Resolved] (MAPREDUCE-5668) Exception in thread main java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected

2013-12-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved MAPREDUCE-5668.
---

Resolution: Invalid

Closing this as invalid based on the evidence from MAPREDUCE-5667.  The code is 
being compiled against a later release but then run on an earlier release.  The 
code should be compiled against the Hadoop release being used or an earlier 
release, keeping in mind the binary compatibility document guidelines.

 Exception in thread main java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
 -

 Key: MAPREDUCE-5668
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5668
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 hi
  pl help
 I wrote this code; at runtime I got this issue:
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
   at 
 org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:170)
   at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
   at MultiFileWordCount.run(MultiFileWordCount.java:395)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
   at MultiFileWordCount.main(MultiFileWordCount.java:401)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 hduser@localhost:~$ 
 I have attached the code.
 import java.io.DataInput;  
 import java.io.DataOutput;  
 import java.io.IOException;  
 import java.util.StringTokenizer;  
 import org.apache.hadoop.conf.Configured;  
 import org.apache.hadoop.fs.FSDataInputStream;  
 import org.apache.hadoop.fs.FileSystem;  
 import org.apache.hadoop.fs.Path;  
 import org.apache.hadoop.io.IntWritable;  
 import org.apache.hadoop.io.Text;  
 import org.apache.hadoop.io.WritableComparable;  
 import org.apache.hadoop.mapreduce.InputSplit;  
 import org.apache.hadoop.mapreduce.Job;  
 import org.apache.hadoop.mapreduce.Mapper;  
 import org.apache.hadoop.mapreduce.RecordReader;  
 import org.apache.hadoop.mapreduce.TaskAttemptContext;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;  
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;  
 import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;  
 import org.apache.hadoop.util.LineReader;  
 import org.apache.hadoop.util.Tool;  
 import org.apache.hadoop.util.ToolRunner;  
 /**
  * MultiFileWordCount is an example to demonstrate the usage of
  * MultiFileInputFormat. This example counts the occurrences of
  * words in the text files under the given input directory.
  */
public class MultiFileWordCount extends Configured implements Tool {

  /**
   * This record keeps filename,offset pairs.
   */
  public static class WordOffset implements WritableComparable {

    private long offset;
    private String fileName;

    public void readFields(DataInput in) throws IOException {
      this.offset = in.readLong();
      this.fileName = Text.readString(in);
    }

    public void write(DataOutput out) throws IOException {
      out.writeLong(offset);
      Text.writeString(out, fileName);
    }

    public int compareTo(Object o) {
      WordOffset that = (WordOffset) o;
      int f = this.fileName.compareTo(that.fileName);
      if (f == 0) {
        return (int) Math.signum((double) (this.offset - that.offset));
      }
      return f;
    }

    @Override
    public boolean equals(Object obj) {
      if (obj instanceof WordOffset)
        return this.compareTo(obj) == 0;
      return false;
    }

    @Override
    public int hashCode() {
      assert false : "hashCode not designed";
      return 42; // an arbitrary constant
    }
  }
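The WordOffset above deliberately punts on hashCode with an assert, and its compareTo takes the sign of a long subtraction, which can overflow for extreme offsets. For anyone adapting the example, a minimal sketch (plain Java, no Hadoop dependency; the class name is made up for illustration) of an equals/hashCode/compareTo trio that stays mutually consistent:

```java
import java.util.Objects;

// Illustrative sketch only: a (fileName, offset) key whose equals, hashCode
// and compareTo agree with each other. Long.compare avoids the overflow that
// (int) Math.signum((double)(a - b)) can hit for extreme offsets.
public class WordOffsetSketch implements Comparable<WordOffsetSketch> {

    private final String fileName;
    private final long offset;

    public WordOffsetSketch(String fileName, long offset) {
        this.fileName = fileName;
        this.offset = offset;
    }

    @Override
    public int compareTo(WordOffsetSketch that) {
        int f = this.fileName.compareTo(that.fileName);
        // Long.compare never overflows, unlike subtracting the offsets.
        return (f != 0) ? f : Long.compare(this.offset, that.offset);
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof WordOffsetSketch)) return false;
        WordOffsetSketch that = (WordOffsetSketch) obj;
        return offset == that.offset && fileName.equals(that.fileName);
    }

    @Override
    public int hashCode() {
        // Same fields, same order as equals, so hash-based collections work.
        return Objects.hash(fileName, offset);
    }

    public static void main(String[] args) {
        WordOffsetSketch a = new WordOffsetSketch("part-0", 10L);
        WordOffsetSketch b = new WordOffsetSketch("part-0", 10L);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
        System.out.println(a.compareTo(new WordOffsetSketch("part-0", 99L)) < 0); // true
    }
}
```

Returning a constant 42 from hashCode is legal but degrades every HashMap bucket to a linked list; deriving the hash from the compared fields costs nothing and keeps the equals/hashCode contract intact.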
   

[jira] [Updated] (MAPREDUCE-5655) Remote job submit from windows to a linux hadoop cluster fails due to wrong classpath

2013-12-04 Thread Attila Pados (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Pados updated MAPREDUCE-5655:


Description: 
I was trying to run a Java class on my client, a Windows 7 developer 
environment, which submits a job to the remote Hadoop cluster, runs a 
mapreduce there, and then downloads the results back to the local machine.

The general use case is using Hadoop services from a web application installed 
on a non-cluster computer, or as part of a developer environment.

The problem was that the ApplicationMaster's startup shell script 
(launch_container.sh) was generated with wrong CLASSPATH entries. Together with 
the java process call at the bottom of the file, these entries were generated 
in Windows style, using % as the shell variable marker and ; as the CLASSPATH 
delimiter.

I tracked down the root cause and found that the MRApps.java and 
YARNRunner.java classes create these entries, which are passed forward to the 
ApplicationMaster on the assumption that the OS running these classes matches 
the one running the ApplicationMaster. That is not the case: they run in two 
different JVMs, possibly on different OSes, and the strings are generated 
based on the client/submitter side's OS.

I made some workaround changes to these two files so I could launch my job; 
however, there may be more problems ahead.

update
 error message:
13/12/04 16:33:15 INFO mapreduce.Job: Job job_1386170530016_0001 failed with 
state FAILED due to: Application application_1386170530016_0001 failed 2 times 
due to AM Container for appattempt_1386170530016_0001_02 exited with  
exitCode: 1 due to: Exception from container-launch: 
org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job 
control

at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

  was:
I was trying to run a Java class on my client, a Windows 7 developer 
environment, which submits a job to the remote Hadoop cluster, runs a 
mapreduce there, and then downloads the results back to the local machine.

The general use case is using Hadoop services from a web application installed 
on a non-cluster computer, or as part of a developer environment.

The problem was that the ApplicationMaster's startup shell script 
(launch_container.sh) was generated with wrong CLASSPATH entries. Together with 
the java process call at the bottom of the file, these entries were generated 
in Windows style, using % as the shell variable marker and ; as the CLASSPATH 
delimiter.

I tracked down the root cause and found that the MRApps.java and 
YARNRunner.java classes create these entries, which are passed forward to the 
ApplicationMaster on the assumption that the OS running these classes matches 
the one running the ApplicationMaster. That is not the case: they run in two 
different JVMs, possibly on different OSes, and the strings are generated 
based on the client/submitter side's OS.

I made some workaround changes to these two files so I could launch my job; 
however, there may be more problems ahead.


 Remote job submit from windows to a linux hadoop cluster fails due to wrong 
 classpath
 -

 Key: MAPREDUCE-5655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client, job submission
Affects Versions: 2.2.0
 Environment: Client machine is a Windows 7 box, with Eclipse
 Remote: there is a multi node hadoop cluster, installed on Ubuntu boxes (any 
 linux)
Reporter: Attila Pados
 Attachments: MRApps.patch, YARNRunner.patch


 I was trying to run a java class on my client, windows 7 developer 
 environment, which submits a job to the remote Hadoop cluster, initiates a 
 mapreduce there, and then downloads the results back to the local machine.
 General use case is to use hadoop services from a web application 
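The root cause described above is that the client serializes its own OS conventions (; and %) into a script that a different OS executes. One way out, and roughly the approach later Hadoop releases adopted with a cross-platform classpath placeholder, is to defer the delimiter choice to the node that launches the container. A Hadoop-free sketch of the idea (all names here are illustrative, not YARN's actual API):

```java
// Sketch of the cross-platform fix (names are illustrative, not YARN's API):
// the client joins classpath entries with a placeholder token instead of its
// own File.pathSeparator, and the host that actually launches the container
// expands the token with ITS OWN separator at launch time.
public class ClasspathSketch {

    static final String CPS = "<CPS>"; // placeholder token, resolved remotely

    // Client side: build an OS-neutral classpath string.
    static String buildPortableClasspath(String... entries) {
        return String.join(CPS, entries);
    }

    // Container-launch side: expand with the target host's delimiter.
    static String expandOnTarget(String portable, boolean targetIsWindows) {
        return portable.replace(CPS, targetIsWindows ? ";" : ":");
    }

    public static void main(String[] args) {
        String portable = buildPortableClasspath("conf", "lib/*", "app.jar");
        // A Windows client no longer bakes ';' into the script...
        System.out.println(portable);                        // conf<CPS>lib/*<CPS>app.jar
        // ...because the Linux NodeManager expands it with ':' when it runs.
        System.out.println(expandOnTarget(portable, false)); // conf:lib/*:app.jar
    }
}
```

The client-side string stays OS-neutral; only the host that writes launch_container.sh picks ';' or ':'.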

[jira] [Commented] (MAPREDUCE-5655) Remote job submit from windows to a linux hadoop cluster fails due to wrong classpath

2013-12-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839082#comment-13839082
 ] 

Chris Nauroth commented on MAPREDUCE-5655:
--

Regarding the {{mapreduce.application.classpath}} configuration property, this 
part was already fixed in MAPREDUCE-5442.

 Remote job submit from windows to a linux hadoop cluster fails due to wrong 
 classpath
 -

 Key: MAPREDUCE-5655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client, job submission
Affects Versions: 2.2.0
 Environment: Client machine is a Windows 7 box, with Eclipse
 Remote: there is a multi node hadoop cluster, installed on Ubuntu boxes (any 
 linux)
Reporter: Attila Pados
 Attachments: MRApps.patch, YARNRunner.patch


 I was trying to run a Java class on my client, a Windows 7 developer 
 environment, which submits a job to the remote Hadoop cluster, runs a 
 mapreduce there, and then downloads the results back to the local machine.
 The general use case is using Hadoop services from a web application installed 
 on a non-cluster computer, or as part of a developer environment.
 The problem was that the ApplicationMaster's startup shell script 
 (launch_container.sh) was generated with wrong CLASSPATH entries. Together with 
 the java process call at the bottom of the file, these entries were generated 
 in Windows style, using % as the shell variable marker and ; as the CLASSPATH 
 delimiter.
 I tracked down the root cause and found that the MRApps.java and 
 YARNRunner.java classes create these entries, which are passed forward to the 
 ApplicationMaster on the assumption that the OS running these classes matches 
 the one running the ApplicationMaster. That is not the case: they run in two 
 different JVMs, possibly on different OSes, and the strings are generated 
 based on the client/submitter side's OS.
 I made some workaround changes to these two files so I could launch my job; 
 however, there may be more problems ahead.
 update
  error message:
 13/12/04 16:33:15 INFO mapreduce.Job: Job job_1386170530016_0001 failed with 
 state FAILED due to: Application application_1386170530016_0001 failed 2 
 times due to AM Container for appattempt_1386170530016_0001_02 exited 
 with  exitCode: 1 due to: Exception from container-launch: 
 org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job 
 control
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
   at org.apache.hadoop.util.Shell.run(Shell.java:379)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
   at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-5409) MRAppMaster throws InvalidStateTransitonException: Invalid event: TA_TOO_MANY_FETCH_FAILURE at KILLED for TaskAttemptImpl

2013-12-04 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839184#comment-13839184
 ] 

Gera Shegalov commented on MAPREDUCE-5409:
--

[~ozawa], [~devaraj.k], thanks for following up. 

 MRAppMaster throws InvalidStateTransitonException: Invalid event: 
 TA_TOO_MANY_FETCH_FAILURE at KILLED for TaskAttemptImpl
 -

 Key: MAPREDUCE-5409
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5409
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Affects Versions: 2.0.5-alpha
Reporter: Devaraj K
Assignee: Gera Shegalov
 Attachments: MAPREDUCE-5409.v02.patch


 {code:xml}
 2013-07-23 12:28:05,217 INFO [IPC Server handler 29 on 50796] 
 org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt 
 attempt_1374560536158_0003_m_40_0 is : 0.0
 2013-07-23 12:28:05,221 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures 
 for output of task attempt: attempt_1374560536158_0003_m_07_0 ... raising 
 fetch failure to map
 2013-07-23 12:28:05,222 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle 
 this event at current state for attempt_1374560536158_0003_m_07_0
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 TA_TOO_MANY_FETCH_FAILURE at KILLED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:445)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1032)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:143)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1123)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1115)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 2013-07-23 12:28:05,249 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: 
 job_1374560536158_0003Job Transitioned from RUNNING to ERROR
 2013-07-23 12:28:05,338 INFO [IPC Server handler 16 on 50796] 
 org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from 
 attempt_1374560536158_0003_m_40_0
 {code}





[jira] [Commented] (MAPREDUCE-5632) TestRMContainerAllocator#testUpdatedNodes fails

2013-12-04 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839226#comment-13839226
 ] 

Alejandro Abdelnur commented on MAPREDUCE-5632:
---

LGTM, +1

 TestRMContainerAllocator#testUpdatedNodes fails
 ---

 Key: MAPREDUCE-5632
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5632
 Project: Hadoop Map/Reduce
  Issue Type: Test
Reporter: Ted Yu
Assignee: Jonathan Eagles
 Attachments: YARN-1420.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/console :
 {code}
 Running org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.78 sec 
 <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 testUpdatedNodes(org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator) 
  Time elapsed: 3.125 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: null
   at junit.framework.Assert.fail(Assert.java:48)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertTrue(Assert.java:27)
   at 
 org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testUpdatedNodes(TestRMContainerAllocator.java:779)
 {code}
 This assertion fails:
 {code}
 Assert.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty());
 {code}
 The List returned by allocator.getJobUpdatedNodeEvents() is:
 [EventType: JOB_UPDATED_NODES]





[jira] [Commented] (MAPREDUCE-5668) Exception in thread main java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected

2013-12-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839249#comment-13839249
 ] 

Steve Loughran commented on MAPREDUCE-5668:
---

See also: http://wiki.apache.org/hadoop/InvalidJiraIssues

We aren't going to fix these problems, as they aren't bugs in Hadoop: they 
arise because you are running an out-of-date version and, every time something 
doesn't work, coming to JIRA and asking for help. That help is not going to 
happen; all that will happen is that your issues get ignored in future, which 
is dangerous if you ever come across a real bug *in an up to date version of 
Hadoop*

# upgrade to Hadoop 1.2 or 2.2
# ask the user group if you encounter problems on these versions
# if that isn't enough, look at who offers support for Hadoop and consider 
whether it is something you are prepared to pay for. If not, the source code is 
there for you to debug your problems
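Context for the error quoted below: org.apache.hadoop.mapreduce.JobContext was a concrete class in the 0.20/1.x line and became an interface in 2.x, so a job jar compiled against one line and run on the other fails at class-load time with exactly this IncompatibleClassChangeError. A quick reflection probe distinguishes the two shapes; it is demonstrated on JDK types so it runs without Hadoop, and probing the Hadoop class name the same way is the intended use:

```java
// Illustrative diagnostic: an IncompatibleClassChangeError usually means a
// name resolved to a class in one library version and an interface in another.
// This probe reports which shape the current classpath actually provides.
public class ShapeProbe {

    static String shapeOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            return c.isInterface() ? "interface" : "class";
        } catch (ClassNotFoundException e) {
            // Also the symptom behind the jdom ClassNotFoundException reports.
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // On a real cluster, probe "org.apache.hadoop.mapreduce.JobContext":
        // "class" suggests the 0.20/1.x line, "interface" the 2.x line.
        System.out.println(shapeOf("java.util.List"));   // interface
        System.out.println(shapeOf("java.lang.String")); // class
    }
}
```

Running the probe inside the job jar's own classpath (e.g. from a tiny main launched with the same hadoop command) shows what the JVM will actually load, which is what matters when client and cluster jars disagree.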

 Exception in thread main java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
 -

 Key: MAPREDUCE-5668
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5668
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: ranjini

 Hi, please help.
 I wrote this code, and at runtime I got this issue:
 Exception in thread main java.lang.IncompatibleClassChangeError: Found 
 class org.apache.hadoop.mapreduce.JobContext, but interface was expected
   at 
 org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:170)
   at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
   at MultiFileWordCount.run(MultiFileWordCount.java:395)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
   at MultiFileWordCount.main(MultiFileWordCount.java:401)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 hduser@localhost:~$ 
 I have attached the code.
 import java.io.DataInput;  
 import java.io.DataOutput;  
 import java.io.IOException;  
 import java.util.StringTokenizer;  
 import org.apache.hadoop.conf.Configured;  
 import org.apache.hadoop.fs.FSDataInputStream;  
 import org.apache.hadoop.fs.FileSystem;  
 import org.apache.hadoop.fs.Path;  
 import org.apache.hadoop.io.IntWritable;  
 import org.apache.hadoop.io.Text;  
 import org.apache.hadoop.io.WritableComparable;  
 import org.apache.hadoop.mapreduce.InputSplit;  
 import org.apache.hadoop.mapreduce.Job;  
 import org.apache.hadoop.mapreduce.Mapper;  
 import org.apache.hadoop.mapreduce.RecordReader;  
 import org.apache.hadoop.mapreduce.TaskAttemptContext;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;  
 import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;  
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;  
 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;  
 import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;  
 import org.apache.hadoop.util.LineReader;  
 import org.apache.hadoop.util.Tool;  
 import org.apache.hadoop.util.ToolRunner;  
  /**
   * MultiFileWordCount is an example to demonstrate the usage of
   * MultiFileInputFormat. This example counts the occurrences of
   * words in the text files under the given input directory.
   */
 public class MultiFileWordCount extends Configured implements Tool {  
/**  
 * This record keeps filename,offset pairs.  
 */ 
 public static class WordOffset implements WritableComparable {  
private long offset;  
private String fileName;  
   
public void readFields(DataInput in) throws IOException {  
   this.offset = in.readLong();  
   this.fileName = Text.readString(in);  
  }  
  public void write(DataOutput out) throws IOException {  
out.writeLong(offset);  
Text.writeString(out, fileName);  
  }  
   public int compareTo(Object o) {  
WordOffset that = (WordOffset)o;  
int f = this.fileName.compareTo(that.fileName);  

[jira] [Commented] (MAPREDUCE-5427) TestRMContainerAllocator.testUpdatedNodes fails on jdk7

2013-12-04 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839404#comment-13839404
 ] 

Jonathan Eagles commented on MAPREDUCE-5427:


Now that MAPREDUCE-5632 is in, the tests pass in jdk7. It looks like this jira 
is more than just fixing this test for jdk7, though. Can this jira either be 
retitled or marked as a duplicate of MAPREDUCE-5632?

 TestRMContainerAllocator.testUpdatedNodes fails on jdk7
 ---

 Key: MAPREDUCE-5427
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5427
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.1.0-beta, 2.0.5-alpha
Reporter: Nemon Lou
Assignee: Sandy Ryza
  Labels: java7
 Attachments: MAPREDUCE-5427.patch, MAPREDUCE-5427.patch


 {code}
 ---
 Test set: org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 ---
 Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 42.777 sec 
 <<< FAILURE!
 testUpdatedNodes(org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator) 
  Time elapsed: 0.14 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: null
   at junit.framework.Assert.fail(Assert.java:47)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertTrue(Assert.java:27)
   at 
 org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testUpdatedNodes(TestRMContainerAllocator.java:747)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
 {code}





[jira] [Updated] (MAPREDUCE-5632) TestRMContainerAllocator#testUpdatedNodes fails

2013-12-04 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-5632:
---

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks everybody for helping to get this in.

 TestRMContainerAllocator#testUpdatedNodes fails
 ---

 Key: MAPREDUCE-5632
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5632
 Project: Hadoop Map/Reduce
  Issue Type: Test
Reporter: Ted Yu
Assignee: Jonathan Eagles
 Fix For: 3.0.0, 2.4.0

 Attachments: YARN-1420.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/console :
 {code}
 Running org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.78 sec 
 <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 testUpdatedNodes(org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator) 
  Time elapsed: 3.125 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: null
   at junit.framework.Assert.fail(Assert.java:48)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertTrue(Assert.java:27)
   at 
 org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testUpdatedNodes(TestRMContainerAllocator.java:779)
 {code}
 This assertion fails:
 {code}
 Assert.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty());
 {code}
 The List returned by allocator.getJobUpdatedNodeEvents() is:
 [EventType: JOB_UPDATED_NODES]





[jira] [Commented] (MAPREDUCE-5632) TestRMContainerAllocator#testUpdatedNodes fails

2013-12-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839408#comment-13839408
 ] 

Hudson commented on MAPREDUCE-5632:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #4827 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4827/])
MAPREDUCE-5632. TestRMContainerAllocator#testUpdatedNodes fails (jeagles) 
(jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547929)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java


 TestRMContainerAllocator#testUpdatedNodes fails
 ---

 Key: MAPREDUCE-5632
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5632
 Project: Hadoop Map/Reduce
  Issue Type: Test
Reporter: Ted Yu
Assignee: Jonathan Eagles
 Fix For: 3.0.0, 2.4.0

 Attachments: YARN-1420.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/console :
 {code}
 Running org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.78 sec 
 <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator
 testUpdatedNodes(org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator) 
  Time elapsed: 3.125 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: null
   at junit.framework.Assert.fail(Assert.java:48)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertTrue(Assert.java:27)
   at 
 org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testUpdatedNodes(TestRMContainerAllocator.java:779)
 {code}
 This assertion fails:
 {code}
 Assert.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty());
 {code}
 The List returned by allocator.getJobUpdatedNodeEvents() is:
 [EventType: JOB_UPDATED_NODES]


