[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14573008#comment-14573008 ]

Hudson commented on MAPREDUCE-5965:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #216 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/216/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java
* hadoop-mapreduce-project/CHANGES.txt


 Hadoop streaming throws error if list of input files is high. Error is: 
 error=7, Argument list too long at if number of input file is high
 

 Key: MAPREDUCE-5965
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5965
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Arup Malakar
Assignee: Wilfred Spiegelenburg
 Fix For: 2.8.0

 Attachments: MAPREDUCE-5965.1.patch, MAPREDUCE-5965.2.patch, 
 MAPREDUCE-5965.3.patch, MAPREDUCE-5965.patch


 Hadoop streaming exposes all of the key-value pairs in the job conf as
 environment variables when it forks the process that runs the streaming code.
 Unfortunately the variable mapreduce_input_fileinputformat_inputdir contains
 the list of input files, and Linux limits the combined size of environment
 variables and arguments.
 Depending on how many input files there are and how long their full paths are,
 this value can be huge. Since most of these variables are never used, this
 prevents users from running a hadoop job over a large number of files even
 though the job itself could run.
 Linux fails the exec with E2BIG (error code 7) when that limit is exceeded,
 and Java reports it as error=7, Argument list too long. More:
 http://man7.org/linux/man-pages/man2/execve.2.html
 I suggest skipping a variable if it is longer than a certain length. The
 trade-off is that user code which actually needs the skipped variable would
 fail. The change should also introduce a config variable that enables skipping
 long variables, set to false by default, so a user has to explicitly set it to
 true to opt in to this feature.
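 As a rough illustration of that proposal (this is only a sketch, not the
 committed patch, and every name in it is invented for the example), skipping
 over-long values before they are exported could look like this:
 {code}
// Sketch only: drop a job conf value from the child environment when it is
// longer than a fixed cut-off, guarded by an opt-in boolean flag.
import java.util.Map;
import java.util.Properties;

public class SkipLongEnvVarsSketch {
  // Arbitrary cut-off chosen for the sketch, not a value from the patch.
  static final int MAX_ENV_VALUE_LENGTH = 20 * 1024;

  static void exportJobConf(Properties jobConf, Map<String, String> env,
                            boolean skipLongValues) {
    for (String name : jobConf.stringPropertyNames()) {
      String value = jobConf.getProperty(name);
      if (skipLongValues && value.length() > MAX_ENV_VALUE_LENGTH) {
        continue;  // leave the variable out so exec() stays under the E2BIG limit
      }
      // Streaming exports conf keys with '.' replaced by '_',
      // e.g. mapreduce_input_fileinputformat_inputdir.
      env.put(name.replace('.', '_'), value);
    }
  }
}
 {code}
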
 Here is the exception:
 {code}
 Error: java.lang.RuntimeException: Error in configuring object
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
     ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
     at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
     ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
     ... 17 more
 Caused by: java.lang.RuntimeException: configuration exception
     at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222)
     at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
     ... 22 more
 Caused by: java.io.IOException: Cannot run program /data/hadoop/hadoop-yarn/cache/yarn/nm-local-dir/usercache/oo-analytics/appcache/application_1403599726264_13177/container_1403599726264_13177_01_06/./rbenv_runner.sh: error=7, Argument list too long
     at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
     at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209)
     ... 23 more
 Caused by: java.io.IOException: error=7, Argument list too long
     at java.lang.UNIXProcess.forkAndExec(Native Method)
     at java.lang.UNIXProcess.init(UNIXProcess.java:135)
     at 
 {code}

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14573036#comment-14573036 ]

Hudson commented on MAPREDUCE-5965:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2164 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2164/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572891#comment-14572891 ]

Hudson commented on MAPREDUCE-5965:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #207 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/207/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572863#comment-14572863 ]

Hudson commented on MAPREDUCE-5965:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2146 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2146/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-mapreduce-project/CHANGES.txt


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572593#comment-14572593 ]

Hudson commented on MAPREDUCE-5965:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #218 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/218/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572614#comment-14572614 ]

Hudson commented on MAPREDUCE-5965:
---

FAILURE: Integrated in Hadoop-Yarn-trunk #948 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/948/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-03 Thread Robert Kanter (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571683#comment-14571683 ]

Robert Kanter commented on MAPREDUCE-5965:
--

+1 LGTM.

Will commit this later today if nobody has any other comments.

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-06-03 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571969#comment-14571969 ]

Hudson commented on MAPREDUCE-5965:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7959/])
MAPREDUCE-5965. Hadoop streaming throws error if list of input files is high. 
Error is: error=7, Argument list too long at if number of input file is high 
(wilfreds via rkanter) (rkanter: rev cc70df98e74142331043a611a3bd8a53ff6a2242)
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/PipeMapRed.java
* hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-28 Thread Wilfred Spiegelenburg (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563932#comment-14563932 ]

Wilfred Spiegelenburg commented on MAPREDUCE-5965:
--

Can someone please review the latest patch and let me know if it is OK?

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-25 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558148#comment-14558148 ]

Hadoop QA commented on MAPREDUCE-5965:
--

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 17m 44s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac | 7m 30s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 37s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | site | 2m 57s | Site still builds. |
| {color:green}+1{color} | checkstyle | 0m 25s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 0m 38s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests | 6m 7s | Tests passed in hadoop-streaming. |
| | | 47m 30s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12735146/MAPREDUCE-5965.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / ada233b |
| hadoop-streaming test log | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5753/artifact/patchprocess/testrun_hadoop-streaming.txt |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5753/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5753/console |


This message was automatically generated.

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-22 Thread Junping Du (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555957#comment-14555957 ]

Junping Du commented on MAPREDUCE-5965:
---

Thanks guys for the good discussion here. +1 on the overall solution. Agreed 
that we don't need to put the new streaming configuration into a *-default.xml 
file, consistent with previous practice.

bq. If you really want to make it configurable the easiest way would be to roll 
the two settings into one. We could make stream.truncate.long.jobconf.values an 
integer: -1 means do not truncate, otherwise truncate at the length given.
That sounds better. Maybe we should rename stream.truncate.long.jobconf.values 
to something like stream.jobconf.truncate.limit and document somewhere that -1 
is the default value (no truncation) and that 20K is a reasonable value for 
most cases?
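
As a minimal sketch of those semantics (assuming the renamed property 
stream.jobconf.truncate.limit is adopted with -1 as the default, and with the 
class and method names invented for the example), the check could look like:
{code}
import org.apache.hadoop.mapred.JobConf;

public class TruncateLimitSketch {
  // -1 (the default) means "do not truncate"; any other value is the maximum
  // number of characters kept for a job conf value before it is exported.
  static String maybeTruncate(JobConf conf, String value) {
    int limit = conf.getInt("stream.jobconf.truncate.limit", -1);
    if (limit > -1 && value != null && value.length() > limit) {
      return value.substring(0, limit);
    }
    return value;
  }
}
{code}
A job that wanted the behaviour could then pass something like 
-D stream.jobconf.truncate.limit=20000 on the streaming command line, in line 
with the 20K suggestion above.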

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-22 Thread Ray Chiang (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556415#comment-14556415 ]

Ray Chiang commented on MAPREDUCE-5965:
---

Thanks for the clarification. I'm still getting used to the non-core Hadoop 
parts and which conventions they do or don't need to conform to.

+1, the suggested property name and usage read better.

[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-21 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14554197#comment-14554197
 ] 

Wilfred Spiegelenburg commented on MAPREDUCE-5965:
--

[~amalakar] thank you for the assignment. The comment should be added back; 
I'll do that in an updated patch. Keeping the check in the same method was done 
to make the change as simple as possible.

[~rchiang] 
The streaming configuration does not really have a *-default.xml file. There is 
markdown documentation that shows some of the settings and options; adding it 
to the FAQ would probably be the correct place. There is also help text printed 
by the main StreamJob code that shows most of the options. I will update both 
files and explain the available setting. I can upload a new patch with that 
added, but before I do, let's get the other points finalised.

A whitelist or blacklist is possible, but what would we exclude or include? Any 
value in the job configuration could be too long, since a user can set whatever 
they want. It would be really difficult to filter that consistently and still 
be sure that we have a fix with limited impact.

Making the lenLimit configurable is possible. However, I do not see what we 
would gain by making the length configurable: the data is not used anywhere, 
and lowering or raising the size at which we cut it off does not give us 
anything extra. If you really want to make it configurable, the easiest way 
would be to roll the two settings into one. We could make 
stream.truncate.long.jobconf.values an integer, where -1 means do not truncate 
and any other value is the length to truncate at.


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-20 Thread Arup Malakar (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553180#comment-14553180
 ] 

Arup Malakar commented on MAPREDUCE-5965:
-

[~wilfreds] sure. Just one comment: in the patch I had submitted, the check was 
inside a separate function, with a comment explaining why we want to do it. As 
can be seen in: 
https://issues.apache.org/jira/secure/attachment/12696883/MAPREDUCE-5965.1.patch
Is there a reason to remove those?


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14547929#comment-14547929
 ] 

Hadoop QA commented on MAPREDUCE-5965:
--

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 10s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 42s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   6m 14s | Tests passed in 
hadoop-streaming. |
| | |  42m 37s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733519/MAPREDUCE-5965.2.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 363c355 |
| hadoop-streaming test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5741/artifact/patchprocess/testrun_hadoop-streaming.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5741/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5741/console |


This message was automatically generated.


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-18 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14549114#comment-14549114
 ] 

Ray Chiang commented on MAPREDUCE-5965:
---

Thanks Wilfred. I guess I'll comment on the meta issue first. In general, I'm 
not sure whether it's a good idea to filter based purely on size. Would it be 
better to have a firmer whitelist and/or blacklist capability for Hadoop 
streaming?
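Purely as an illustration of the question above, here is a tiny sketch of what 
a blacklist-style filter could look like. This is not an existing Hadoop 
streaming feature; the property name stream.jobconf.env.blacklist and the 
helper are hypothetical.

{code}
// Hypothetical sketch only: Hadoop streaming has no such blacklist property.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;

public class EnvBlacklistSketch {

  // Assumed property listing conf keys that should never be exported into the
  // environment of the forked streaming process.
  static final String BLACKLIST_PROP = "stream.jobconf.env.blacklist";

  static boolean shouldExport(Configuration conf, String key) {
    Set<String> blacklist = new HashSet<>(
        Arrays.asList(conf.getStrings(BLACKLIST_PROP, new String[0])));
    return !blacklist.contains(key);
  }
}
{code}

The hard part, as noted elsewhere in this thread, is deciding what belongs on 
such a list, since any configuration value can grow too long.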


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-18 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14549128#comment-14549128
 ] 

Ray Chiang commented on MAPREDUCE-5965:
---

Assuming the current patch is an acceptable design approach, I have the 
following nitpicks:

1) Can stream.truncate.long.jobconf.values be put in the appropriate 
*-default.xml file for documentation purposes?

2) Can the lenLimit correspond to a Configuration variable?
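
As a rough illustration of nitpick 2, here is a sketch of what keeping the 
boolean flag and making the limit a separate Configuration value might look 
like. The limit property name and default below are assumptions, not taken 
from any attached patch; a reply elsewhere in this thread instead proposes 
folding both settings into a single integer property.

{code}
// Illustrative only: the existing boolean flag plus a configurable length
// limit. The limit property name and default value are assumptions.
import org.apache.hadoop.conf.Configuration;

public class TruncateSettingsSketch {

  static final String TRUNCATE_FLAG = "stream.truncate.long.jobconf.values";
  // Hypothetical companion property for nitpick 2.
  static final String LEN_LIMIT_PROP = "stream.jobconf.value.length.limit";
  static final int DEFAULT_LEN_LIMIT = 20 * 1024; // assumed default, in chars

  static String maybeTruncate(Configuration conf, String value) {
    boolean truncate = conf.getBoolean(TRUNCATE_FLAG, false);
    int limit = conf.getInt(LEN_LIMIT_PROP, DEFAULT_LEN_LIMIT);
    if (truncate && value != null && value.length() > limit) {
      return value.substring(0, limit);
    }
    return value;
  }
}
{code}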



[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-05-18 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14549661#comment-14549661
 ] 

Wilfred Spiegelenburg commented on MAPREDUCE-5965:
--

Arup: Do you mind if I assign the jira to myself? I would like to get this 
fixed in an upcoming release.


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308092#comment-14308092
 ] 

Hadoop QA commented on MAPREDUCE-5965:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12696883/MAPREDUCE-5965.1.patch
  against trunk revision e1990ab.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5168//console

This message is automatically generated.


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307851#comment-14307851
 ] 

Hadoop QA commented on MAPREDUCE-5965:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655256/MAPREDUCE-5965.patch
  against trunk revision 276485e.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5165//console

This message is automatically generated.


[jira] [Commented] (MAPREDUCE-5965) Hadoop streaming throws error if list of input files is high. Error is: error=7, Argument list too long at if number of input file is high

2014-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14059187#comment-14059187
 ] 

Hadoop QA commented on MAPREDUCE-5965:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655256/MAPREDUCE-5965.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-tools/hadoop-streaming:

org.apache.hadoop.streaming.TestFileArgs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4728//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4728//console

This message is automatically generated.

 Hadoop streaming throws error if list of input files is high. Error is: 
 error=7, Argument list too long at if number of input file is high
 

 Key: MAPREDUCE-5965
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5965
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Arup Malakar
Assignee: Arup Malakar
 Attachments: MAPREDUCE-5965.patch


 Hadoop streaming exposes all the key values in job conf as environment 
 variables when it forks a process for streaming code to run. Unfortunately 
 the variable mapreduce_input_fileinputformat_inputdir contains the list of 
 input files, and Linux has a limit on size of environment variables + 
 arguments.
 Based on how long the list of files and their full path is this could be 
 pretty huge. And given all of these variables are not even used it stops user 
 from running hadoop job with large number of files, even though it could be 
 run.
 Linux throws E2BIG if the size is greater than certain size which is error 
 code 7. And java translates that to error=7, Argument list too long. More: 
 http://man7.org/linux/man-pages/man2/execve.2.html I suggest skipping 
 variables if it is greater than certain length. That way if user code 
 requires the environment variable it would fail. It should also introduce a 
 config variable to skip long variables, and set it to false by default. That 
 way user has to specifically set it to true to invoke this feature.
 Here is the exception:
 {code}
 Error: java.lang.RuntimeException: Error in configuring object at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) 
 at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75) at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) 
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426) at 
 org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) Caused by: 
 java.lang.reflect.InvocationTargetException at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606) at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106) 
 ... 9 more Caused by: java.lang.RuntimeException: Error in configuring object 
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) 
 at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75) at