Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/

[May 22, 2017 6:16:25 PM] (brahma) HDFS-11863. Document missing metrics for 
blocks count in pending IBR.
[May 22, 2017 6:39:19 PM] (brahma) HDFS-11849. JournalNode startup failure 
exception should be logged in
[May 22, 2017 9:26:13 PM] (wangda) YARN-2113. Add cross-user preemption within 
CapacityScheduler's
[May 22, 2017 9:28:55 PM] (wangda) YARN-6493. Print requested node partition in 
assignContainer logs.
[May 23, 2017 12:53:47 AM] (arp) HDFS-11866. JournalNode Sync should be off by 
default in
[May 23, 2017 3:25:34 AM] (arp) HDFS-11419. Performance analysis of new 
DFSNetworkTopology#chooseRandom.
[May 23, 2017 11:33:28 AM] (rakeshr) HDFS-11794. Add ec sub command -listCodec 
to show currently supported ec




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage 
   hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
  

   mvninstall:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-mvninstall-root.txt
      [496K]

   compile:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt
      [20K]

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt
      [20K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-compile-root.txt
      [20K]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-assemblies.txt
      [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
      [144K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
      [740K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
      [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/323/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
      [52K]

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/412/

[May 22, 2017 8:40:06 AM] (sunilg) YARN-6584. Correct license headers in 
hadoop-common, hdfs, yarn and
[May 22, 2017 6:16:25 PM] (brahma) HDFS-11863. Document missing metrics for 
blocks count in pending IBR.
[May 22, 2017 6:39:19 PM] (brahma) HDFS-11849. JournalNode startup failure 
exception should be logged in
[May 22, 2017 9:26:13 PM] (wangda) YARN-2113. Add cross-user preemption within 
CapacityScheduler's
[May 22, 2017 9:28:55 PM] (wangda) YARN-6493. Print requested node partition in 
assignContainer logs.
[May 23, 2017 12:53:47 AM] (arp) HDFS-11866. JournalNode Sync should be off by 
default in
[May 23, 2017 3:25:34 AM] (arp) HDFS-11419. Performance analysis of new 
DFSNetworkTopology#chooseRandom.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in
   org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of
   called method. Dereferenced at MiniKdc.java:[line 368]
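
For context, this is the FindBugs NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE
pattern: the return value of a called method can be null and is dereferenced
without a check. A minimal illustrative sketch of the kind of recursive
delete that trips it (hypothetical code, not the actual MiniKdc source):

{code:java}
import java.io.File;

class RecursiveDelete {
  // File.listFiles() returns null, not an empty array, when the path is not
  // a directory or an I/O error occurs. Iterating the result without a null
  // check is exactly what the warning above flags.
  static void delete(File f) {
    if (f.isDirectory()) {
      File[] children = f.listFiles(); // may be null
      if (children != null) {          // this guard silences the warning
        for (File child : children) {
          delete(child);
        }
      }
    }
    f.delete();
  }
}
{code}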

FindBugs :

   module:hadoop-common-project/hadoop-auth
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
   HttpServletResponse) makes inefficient use of keySet iterator instead of
   entrySet iterator. At MultiSchemeAuthenticationHandler.java:[line 192]
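
This is the FindBugs WMI_WRONG_MAP_ITERATOR pattern: iterating a map's
keySet() and calling get() for each key performs an extra lookup per entry.
A minimal sketch of the flagged form and the preferred form (hypothetical
methods, not the hadoop-auth source):

{code:java}
import java.util.Map;

class MapIteration {
  // Flagged form: every get(key) is a second hash lookup into the map.
  static void withKeySet(Map<String, String> m) {
    for (String key : m.keySet()) {
      System.out.println(key + "=" + m.get(key));
    }
  }

  // Preferred form: one traversal, key and value come from the entry.
  static void withEntrySet(Map<String, String> m) {
    for (Map.Entry<String, String> e : m.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}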

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally
   sets the field unknownValue. At CipherSuite.java:[line 44]
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int)
   unconditionally sets the field unknownValue. At
   CryptoProtocolVersion.java:[line 67]
   Possible null pointer dereference in
   org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value
   of called method. Dereferenced at FileUtil.java:[line 118]
   Possible null pointer dereference in
   org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
   File, Path, File) due to return value of called method. Dereferenced at
   RawLocalFileSystem.java:[line 387]
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction)
   ignored, but method has no side effect. At FTPFileSystem.java:[line 421]
   Useless condition: lazyPersist == true at this point. At
   CommandWithDestination.java:[line 502]
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly
   handles double value. At DoubleWritable.java:[line 78]
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int,
   byte[], int, int) incorrectly handles double value. At
   DoubleWritable.java:[line 97]
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly
   handles float value. At FloatWritable.java:[line 71]
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int,
   byte[], int, int) incorrectly handles float value. At
   FloatWritable.java:[line 89]
   Possible null pointer dereference in
   org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to
   return value of called method. Dereferenced at IOUtils.java:[line 350]
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use
   of keySet iterator instead of entrySet iterator. At ECSchema.java:[line 193]
   Possible bad parsing of shift operation in
   org.apache.hadoop.io.file.tfile.Utils$Version.hashCode(). At
   Utils.java:[line 398]
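
Two recurring patterns in the hadoop-common list above deserve a sketch. The
FloatWritable/DoubleWritable comparator warnings
(CO_COMPARETO_INCORRECT_FLOATING) fire when compareTo() orders floating-point
values with < and ==, which misorders NaN; the "bad parsing of shift
operation" warning (BSHIFT_WRONG_ADD_PRIORITY) fires because '+' binds
tighter than '<<'. A minimal sketch of both, with hypothetical method names
rather than the Hadoop sources:

{code:java}
class FindBugsSketches {
  // Flagged comparator shape: NaN is neither <, ==, nor > any value, so a
  // NaN operand silently falls through to the "greater" branch.
  static int badCompare(double a, double b) {
    return a < b ? -1 : (a == b ? 0 : 1);
  }

  // Fix: Double.compare() defines a total order, including NaN and -0.0.
  static int goodCompare(double a, double b) {
    return Double.compare(a, b);
  }

  // Flagged shift shape: parsed as major << (16 + minor), since '+' has
  // higher precedence than '<<'.
  static int badHash(int major, int minor) {
    return major << 16 + minor;
  }

  // Fix: parenthesize to get the intended combination of both fields.
  static int goodHash(int major, int minor) {
    return (major << 16) + minor;
  }
}
{code}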
   

[jira] [Resolved] (MAPREDUCE-6891) TextInputFormat: duplicate records with custom delimiter

2017-05-23 Thread JIRA

 [ https://issues.apache.org/jira/browse/MAPREDUCE-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Schäfer resolved MAPREDUCE-6891.
-------------------------------------
Resolution: Duplicate

> TextInputFormat: duplicate records with custom delimiter
> ---------------------------------------------------------
>
> Key: MAPREDUCE-6891
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6891
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 2.2.0
> Reporter: Till Schäfer
>
> When using a custom delimiter for TextInputFormat, the resulting record
> splits are not correct under some circumstances: the total number of
> records is wrong and some entries are duplicated.
> I have created a reproducible test case.
> Generate a file:
> {code:bash}
> # NOTE: "DELIMITER" is a stand-in; the original long multi-character
> # delimiter string did not survive archiving of this message.
> for i in $(seq 1 10000000); do
>   echo -n $i >> long_delimiter-1to10000000-with_newline.txt
>   echo "DELIMITER" >> long_delimiter-1to10000000-with_newline.txt
> done
> {code}
> Java test to reproduce the error:
> {code:java}
> public static void longDelimiterBug(JavaSparkContext sc) {
>   Configuration hadoopConf = new Configuration();
>   String delimitedFile = "long_delimiter-1to10000000-with_newline.txt";
>   // "DELIMITER" is a stand-in for the original long delimiter string.
>   hadoopConf.set("textinputformat.record.delimiter", "DELIMITER\n");
>   JavaPairRDD<LongWritable, Text> input =
>       sc.newAPIHadoopFile(delimitedFile, TextInputFormat.class,
>           LongWritable.class, Text.class, hadoopConf);
>   List<String> values = input.map(t -> t._2().toString()).collect();
>   Assert.assertEquals(10000000, values.size());
>   for (int i = 0; i < 10000000; i++) {
>     boolean correct = values.get(i).equals(Integer.toString(i + 1));
>     if (!correct) {
>       logger.error("Wrong value for index {}: expected {} -> got {}",
>           i, i + 1, values.get(i));
>     } else {
>       logger.info("Correct value for index {}: expected {} -> got {}",
>           i, i + 1, values.get(i));
>     }
>     Assert.assertTrue(correct);
>   }
> }
> {code}
> This example fails with the error
> {quote}
> java.lang.AssertionError: expected:<10000000> but was:<10042616>
> {quote}
> When commenting out the Assert about the size of the collection, my log
> output ends like this:
> {quote}
> [main] INFO  edu.udo.cs.schaefer.testspark.Main  - Correct value for index
> 663244: expected 663245 -> got 663245
> [main] ERROR edu.udo.cs.schaefer.testspark.Main  - Wrong value for index
> 663245: expected 663246 -> got 660111
> {quote}
> After the wrong value for index 663245, the values are in order again,
> continuing with 660112, 660113, ...
> The error is not reproducible with _\n_ as delimiter, i.e. when not using
> a custom delimiter.


