[ https://issues.apache.org/jira/browse/MAPREDUCE-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908596#comment-16908596 ]

Hadoop QA commented on MAPREDUCE-7082:
--------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m  8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m  6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 44s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  9s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 39s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | MAPREDUCE-7082 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919605/MAPREDUCE_7082.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 971e87e2fcbd 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5882cf9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/7657/testReport/ |
| Max. process+thread count | 1601 (vs. ulimit of 10000) |
| modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/7657/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix FileInputFormat throw java.lang.ArrayIndexOutOfBoundsException(0)
> ---------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7082
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7082
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv1
>    Affects Versions: 2.7.1
>         Environment: CentOS 7
> Hive 1.2.1
> Hadoop 2.7.1
>            Reporter: tartarus
>            Assignee: tartarus
>            Priority: Major
>         Attachments: MAPREDUCE_7082.001.patch, MAPREDUCE_7082.patch
>
>
> When HDFS has a missing block and MapReduce then builds splits with FileInputFormat, it throws an ArrayIndexOutOfBoundsException like this:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at org.apache.hadoop.mapred.FileInputFormat.identifyHosts(FileInputFormat.java:708)
> at org.apache.hadoop.mapred.FileInputFormat.getSplitHostsAndCachedHosts(FileInputFormat.java:675)
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:365)
> at com.hadoop.mapred.DeprecatedLzoTextInputFormat.getSplits(DeprecatedLzoTextInputFormat.java:129)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:305)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:407)
> at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:408)
> at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:363)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:355)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:231)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
> {code}
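> For illustration only (this is not code from the attached patch): a block with no live replicas shows up in getSplits as a BlockLocation whose host list and topology paths are both empty, which is exactly the state the excerpts below run into.
> {code:java}
> // Hypothetical illustration of the input condition, not code from the patch:
> // an org.apache.hadoop.fs.BlockLocation for a block with no live replicas
> // carries neither hosts nor topology paths.
> long blockSize = 128L * 1024 * 1024;
> BlockLocation missing = new BlockLocation(new String[0], new String[0], 0L, blockSize);
> assert missing.getHosts().length == 0;          // no datanode holds the block
> assert missing.getTopologyPaths().length == 0;  // so no rack paths either
> {code}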
> The relevant part of method {color:#d04437}getSplits(JobConf job, int numSplits){color}:
> {code:java}
> if (isSplitable(fs, path)) {
>   long blockSize = file.getBlockSize();
>   long splitSize = computeSplitSize(goalSize, minSize, blockSize);
>   long bytesRemaining = length;
>   while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
>     String[][] splitHosts = getSplitHostsAndCachedHosts(blkLocations,
>         length-bytesRemaining, splitSize, clusterMap);
>     splits.add(makeSplit(path, length-bytesRemaining, splitSize,
>         splitHosts[0], splitHosts[1]));
>     bytesRemaining -= splitSize;
>   }
>   if (bytesRemaining != 0) {
>     String[][] splitHosts = getSplitHostsAndCachedHosts(blkLocations, length
>         - bytesRemaining, bytesRemaining, clusterMap);
>     splits.add(makeSplit(path, length - bytesRemaining, bytesRemaining,
>         splitHosts[0], splitHosts[1]));
>   }
> } else {
>   if (LOG.isDebugEnabled()) {
>     // Log only if the file is big enough to be splitted
>     if (length > Math.min(file.getBlockSize(), minSize)) {
>       LOG.debug("File is not splittable so no parallelization "
>           + "is possible: " + file.getPath());
>     }
>   }
>   String[][] splitHosts = getSplitHostsAndCachedHosts(blkLocations,0,length,clusterMap);
>   splits.add(makeSplit(path, 0, length, splitHosts[0], splitHosts[1]));
> }
> {code}
> The relevant part of method {color:#d04437}getSplitHostsAndCachedHosts(BlockLocation[] blkLocations, long offset, long splitSize, NetworkTopology clusterMap){color}:
> {code:java}
> allTopos = blkLocations[index].getTopologyPaths();
> // If no topology information is available, just
> // prefix a fakeRack
> if (allTopos.length == 0) {
>   allTopos = fakeRacks(blkLocations, index);
> }
> ...
> return new String[][] { identifyHosts(allTopos.length, racksMap),
>     new String[0]};
> {code}
> The relevant part of method {color:#d04437}identifyHosts(int replicationFactor, Map<Node,NodeInfo> racksMap){color}:
> {code:java}
> String [] retVal = new String[replicationFactor];
> ...
> retVal[index++] = host.node.getName().split(":")[0];{code}
> Because {color:#d04437}blkLocations[index].getTopologyPaths(){color} is empty and {color:#d04437}blkLocations[index].getHosts(){color} is empty as well, {color:#d04437}replicationFactor is 0{color}, so executing
> {code:java}
> retVal[index++] = host.node.getName().split(":")[0];{code}
> throws ArrayIndexOutOfBoundsException(0).
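> One way to avoid the exception (a sketch only; the attached MAPREDUCE_7082.001.patch may take a different approach) is to return empty locality hints when a block has neither hosts nor topology information, since split locations are only hints:
> {code:java}
> // Hypothetical guard inside getSplitHostsAndCachedHosts(); illustration only,
> // not the actual patch attached to this issue.
> String[] allTopos = blkLocations[index].getTopologyPaths();
> if (allTopos.length == 0) {
>   allTopos = fakeRacks(blkLocations, index);  // falls back to getHosts()
> }
> if (allTopos.length == 0) {
>   // Missing block: there is no replica to rank, so return empty locality
>   // hints instead of letting identifyHosts() write into a zero-length array.
>   return new String[][] { new String[0], new String[0] };
> }
> {code}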



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
