Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/

[Jun 12, 2017 4:42:16 PM] (templedf) HADOOP-14310. RolloverSignerSecretProvider.LOG should be
[Jun 12, 2017 10:07:53 PM] (jeagles) HADOOP-14501. Switch from aalto-xml to woodstox to handle odd XML
[Jun 12, 2017 10:18:38 PM] (arp) HDFS-11907. Add metric for time taken by NameNode resource check.
[Jun 12, 2017 11:03:47 PM] (arp) HDFS-11967. TestJMXGet fails occasionally. Contributed by Arpit Agarwal.
[Jun 13, 2017 1:45:10 AM] (szetszwo) HDFS-11947. When constructing a thread name, BPOfferService may print a
[Jun 13, 2017 3:43:43 AM] (arp) HADOOP-14503. Make RollingAverages a mutable metric. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestReplication 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 

   mvninstall:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-mvninstall-root.txt  [496K]

   compile:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-compile-root.txt  [20K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-compile-root.txt  [20K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-compile-root.txt  [20K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-assemblies.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [144K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [480K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [52K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [76K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/344/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-serv

[jira] [Created] (HADOOP-14525) org.apache.hadoop.io.Text Truncate

2017-06-13 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-14525:


 Summary: org.apache.hadoop.io.Text Truncate
 Key: HADOOP-14525
 URL: https://issues.apache.org/jira/browse/HADOOP-14525
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.8.1
Reporter: BELUGA BEHR


For Apache Hive, VARCHAR fields are much slower than STRING fields when a 
precision (string length cap) is included.  Keep in mind that this precision is 
the number of UTF-8 characters in the string, not the number of bytes.

The general procedure is:

# Load an entire byte buffer into a {{Text}} object
# Convert it to a {{String}}
# Count N number of character code points
# Substring the {{String}} at the correct place
# Convert the String back into a byte array and populate the {{Text}} object

It would be great if the {{Text}} object could offer a truncate/substring 
method, based on character count, that did not require copying data around.
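
A minimal sketch of the kind of helper this asks for, assuming a hypothetical {{truncate(Text, int)}} method that is *not* part of the current {{Text}} API: walk the UTF-8 bytes in place to find the byte offset of the N-th code point, then shrink the {{Text}} to that prefix, with no {{String}} round-trip and no intermediate copies.

{code:title=TextTruncate.java (hypothetical sketch)|borderStyle=solid}
import org.apache.hadoop.io.Text;

public final class TextTruncate {

  /**
   * Truncates text in place to at most maxCodePoints characters.
   * Every UTF-8 byte that is not a continuation byte (10xxxxxx)
   * starts a new code point, so counting lead bytes counts characters.
   */
  public static void truncate(Text text, int maxCodePoints) {
    final byte[] bytes = text.getBytes();  // backing buffer, valid up to getLength()
    final int len = text.getLength();
    int seen = 0;  // code points passed so far
    int cut = 0;   // byte offset where the truncated string ends
    while (cut < len) {
      if ((bytes[cut] & 0xC0) != 0x80) {   // lead byte of a new code point
        if (seen == maxCodePoints) {
          break;                           // stop before the (N+1)-th character
        }
        seen++;
      }
      cut++;
    }
    text.set(bytes, 0, cut);               // keep only the leading maxCodePoints chars
  }
}
{code}

Counting code points directly over the raw bytes would sidestep steps 2-5 above entirely, since the count is over code points rather than Java chars.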






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/433/

[Jun 12, 2017 12:37:43 PM] (bibinchundatt) YARN-6703. RM startup failure with old state store due to version
[Jun 12, 2017 4:42:16 PM] (templedf) HADOOP-14310. RolloverSignerSecretProvider.LOG should be
[Jun 12, 2017 10:07:53 PM] (jeagles) HADOOP-14501. Switch from aalto-xml to woodstox to handle odd XML
[Jun 12, 2017 10:18:38 PM] (arp) HDFS-11907. Add metric for time taken by NameNode resource check.
[Jun 12, 2017 11:03:47 PM] (arp) HDFS-11967. TestJMXGet fails occasionally. Contributed by Arpit Agarwal.
[Jun 13, 2017 1:45:10 AM] (szetszwo) HDFS-11947. When constructing a thread name, BPOfferService may print a
[Jun 13, 2017 3:43:43 AM] (arp) HADOOP-14503. Make RollingAverages a mutable metric. Contributed by




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method; dereferenced at MiniKdc.java:[line 368] 
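
For context, a generic sketch of the shape this warning usually takes (not necessarily the actual MiniKdc code): File.listFiles() returns null when the file is not a directory or an I/O error occurs, so dereferencing its return value without a guard can throw a NullPointerException.

{code}
import java.io.File;

final class RecursiveDeleteSketch {
  // The flagged pattern is dereferencing listFiles() without a null check.
  static void delete(File f) {
    File[] children = f.listFiles();  // null for non-directories and on I/O error
    if (children != null) {           // the guard FindBugs wants to see
      for (File child : children) {
        delete(child);
      }
    }
    f.delete();
  }
}
{code}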

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192] 
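
A generic illustration of the keySet-versus-entrySet inefficiency this warning describes (hypothetical code, not the handler itself): iterating keySet() and calling get() per key repeats a hash lookup that entrySet() iteration avoids.

{code}
import java.util.Map;

final class MapIterationSketch {
  // Flagged pattern: one extra map lookup per iteration.
  static void slow(Map<String, String> map) {
    for (String key : map.keySet()) {
      System.out.println(key + "=" + map.get(key));
    }
  }

  // Suggested form: each entry already carries its value.
  static void fast(Map<String, String> map) {
    for (Map.Entry<String, String> e : map.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}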

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method; dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method; dereferenced at RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:[line 421] 
   Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78] (see the sketch after this list) 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method; dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398] 
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.
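
As referenced in the DoubleWritable item above, a sketch of what "incorrectly handles double value" typically means for a compareTo (illustrative, not the exact Hadoop source): ordering doubles with raw relational operators breaks the compareTo contract for NaN, whereas Double.compare defines a total order.

{code}
final class FloatingCompareSketch {
  // Flagged pattern: for NaN both tests are false, so everything
  // compares "equal" to NaN and sorting becomes inconsistent.
  static int broken(double a, double b) {
    return a < b ? -1 : (a == b ? 0 : 1);
  }

  // Safe form: total order, handles NaN and distinguishes -0.0 from 0.0.
  static int fixed(double a, double b) {
    return Double.compare(a, b);
  }
}
{code}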

[jira] [Resolved] (HADOOP-14513) A little performance improvement of HarFileSystem

2017-06-13 Thread Ravi Prakash (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravi Prakash resolved HADOOP-14513.
---
Resolution: Not A Problem

> A little performance improvement of HarFileSystem
> -
>
> Key: HADOOP-14513
> URL: https://issues.apache.org/jira/browse/HADOOP-14513
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HADOOP-14513.001.patch
>
>
> In the Java source of HarFileSystem.java:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // p.depth() need not be recomputed on every iteration;
> // depth() is an expensive calculation
> for (int i = 0; i < p.depth(); i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}
>  
> I think the following is more suitable:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // compute depth() once, outside the loop condition
> for (int i = 0, depth = p.depth(); i < depth; i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}


