[jira] [Created] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-13601:
--

 Summary: Optimize ByteString conversions in PBHelper
 Key: HDFS-13601
 URL: https://issues.apache.org/jira/browse/HDFS-13601
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.9.1, 3.1.0
Reporter: Andrew Wang
Assignee: Andrew Wang


While doing some profiling of the NN with JMC, I saw a lot of time being spent 
on String->ByteString conversions. These are often the same strings being 
converted over and over again, meaning there's room for optimization.
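For illustration only (this is not the actual patch), one option is to memoize the 
conversions so a hot string is encoded once; the class name and cache policy below 
are assumptions for the sketch:
{code:java}
import com.google.protobuf.ByteString;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache String -> ByteString conversions so that strings
// converted over and over again are encoded only once.
public final class ByteStringCache {
  // Unbounded map for illustration; a real change would bound or scope the cache.
  private static final ConcurrentHashMap<String, ByteString> CACHE =
      new ConcurrentHashMap<>();

  public static ByteString getByteString(String s) {
    // ByteString.copyFromUtf8 allocates and re-encodes on every call;
    // reuse the previously computed value for repeated strings.
    return CACHE.computeIfAbsent(s, ByteString::copyFromUtf8);
  }

  private ByteStringCache() { }
}
{code}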






[jira] [Resolved] (HDFS-13600) Add toString() for RemoteMethod

2018-05-21 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-13600.
-
Resolution: Duplicate

> Add toString() for RemoteMethod
> ---
>
> Key: HDFS-13600
> URL: https://issues.apache.org/jira/browse/HDFS-13600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
>
> Saw messages like:
> {code}
> 2018-05-21 18:23:19,011 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: Invocation 
> to "XXX" for 
> "org.apache.hadoop.hdfs.server.federation.router.RemoteMethod@390c38d2" timed 
> out
> {code}
> I think {{RemoteMethod}} needs a {{toString}} method.
>  






[jira] [Created] (HDFS-13600) Add toString() for RemoteMethod

2018-05-21 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13600:
---

 Summary: Add toString() for RemoteMethod
 Key: HDFS-13600
 URL: https://issues.apache.org/jira/browse/HDFS-13600
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


Saw messages like:
{code}
2018-05-21 18:23:19,011 ERROR 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: Invocation to 
"XXX" for 
"org.apache.hadoop.hdfs.server.federation.router.RemoteMethod@390c38d2" timed 
out
{code}

I think {{RemoteMethod}} needs a {{toString}} method.
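
For illustration, a toString() along the lines below would make the log line identify 
the call; the field names are assumptions, not the actual {{RemoteMethod}} internals:
{code:java}
import java.util.Arrays;

// Hypothetical sketch of a RemoteMethod-style toString(); the fields are assumed.
class RemoteMethodSketch {
  private final String protocolName;  // e.g. "ClientProtocol"
  private final String methodName;    // e.g. "getFileInfo"
  private final Object[] params;      // remote call arguments

  RemoteMethodSketch(String protocolName, String methodName, Object... params) {
    this.protocolName = protocolName;
    this.methodName = methodName;
    this.params = params;
  }

  @Override
  public String toString() {
    // Logs e.g. "ClientProtocol#getFileInfo [/user/foo]" instead of
    // "RemoteMethod@390c38d2".
    return protocolName + "#" + methodName + " " + Arrays.toString(params);
  }
}
{code}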

 






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-21 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/474/

[May 20, 2018 2:42:42 PM] (sekikn) YETUS-634 maven plugin dropping 
'--batch-mode' maven argument
[May 21, 2018 10:12:34 AM] (stevel) HADOOP-15478. WASB: hflush() and hsync() 
regression. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestAclCLI 
   hadoop.cli.TestAclCLIWithPosixAclInheritance 
   hadoop.cli.TestCacheAdminCLI 
   hadoop.cli.TestCryptoAdminCLI 
   hadoop.cli.TestDeleteCLI 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.cli.TestHDFSCLI 
   hadoop.cli.TestXAttrCLI 
   hadoop.fs.contract.hdfs.TestHDFSContractAppend 
   hadoop.fs.contract.hdfs.TestHDFSContractConcat 
   hadoop.fs.contract.hdfs.TestHDFSContractCreate 
   hadoop.fs.contract.hdfs.TestHDFSContractDelete 
   hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus 
   hadoop.fs.contract.hdfs.TestHDFSContractMkdir 
   hadoop.fs.contract.hdfs.TestHDFSContractOpen 
   hadoop.fs.contract.hdfs.TestHDFSContractPathHandle 
   hadoop.fs.contract.hdfs.TestHDFSContractRename 
   hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory 
   hadoop.fs.contract.hdfs.TestHDFSContractSeek 
   hadoop.fs.contract.hdfs.TestHDFSContractSetTimes 
   hadoop.fs.loadGenerator.TestLoadGenerator 
   hadoop.fs.permission.TestStickyBit 
   hadoop.fs.shell.TestHdfsTextCommand 
   hadoop.fs.TestEnhancedByteBufferAccess 
   hadoop.fs.TestFcHdfsCreateMkdir 
   hadoop.fs.TestFcHdfsPermission 
   hadoop.fs.TestFcHdfsSetUMask 
   hadoop.fs.TestGlobPaths 
   hadoop.fs.TestHDFSFileContextMainOperations 
   hadoop.fs.TestHdfsNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.fs.TestSWebHdfsFileContextMainOperations 
   hadoop.fs.TestSymlinkHdfsDisable 
   hadoop.fs.TestSymlinkHdfsFileContext 
   hadoop.fs.TestSymlinkHdfsFileSystem 
   hadoop.fs.TestUnbuffer 
   hadoop.fs.TestUrlStreamHandler 
   hadoop.fs.TestWebHdfsFileContextMainOperations 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.fs.viewfs.TestViewFileSystemLinkFallback 
   hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash 
   hadoop.fs.viewfs.TestViewFileSystemWithAcls 
   hadoop.fs.viewfs.TestViewFileSystemWithTruncate 
   hadoop.fs.viewfs.TestViewFileSystemWithXAttrs 
   hadoop.fs.viewfs.TestViewFsAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFsDefaultValue 
   hadoop.fs.viewfs.TestViewFsFileStatusHdfs 
   hadoop.fs.viewfs.TestViewFsHdfs 
   hadoop.fs.viewfs.TestViewFsWithAcls 
   hadoop.fs.viewfs.TestViewFsWithXAttrs 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.client.impl.TestBlockReaderRemote 
   hadoop.hdfs.client.impl.TestClientBlockVerification 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer 
   hadoop.hdfs.qjournal.client.TestEpochsAreUnique 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeMXBean 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.qjournal.TestMiniJournalCluster 
   hadoop.hdfs.qjournal.TestNNWithQJM 
   hadoop.hdfs.qjournal.TestSecureNNWithQJM 
   hadoop.hdfs.security.TestDelegationToken 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.security.token.block.TestBlockToken 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer 
   
hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
   hadoop.hdfs.server.blockmanagement.TestBlockManager 
   hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   

[jira] [Created] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-92:
--

 Summary: Use containerDBType during parsing .container files
 Key: HDDS-92
 URL: https://issues.apache.org/jira/browse/HDDS-92
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Bharat Viswanadham









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/787/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18039] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18601] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 35184] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 36053] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CreateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13089] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DatanodeBlockID$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1126] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 30491] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15748] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16224] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23421] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$KeyValue$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1767] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16726] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListKeyRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23958] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 21216] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutSmallFileResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 33434] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ReadContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13529] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$UpdateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15261] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$WriteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 27550] 
   Found reliance on default encoding in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]):in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]): String.getBytes() At MetadataKeyFilters.java:[line 97] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   

[jira] [Created] (HDFS-13599) 2 Datanode and 1 Region Server down

2018-05-21 Thread SH Kim (JIRA)
SH Kim created HDFS-13599:
-

 Summary: 2 Datanode and 1 Region Server down
 Key: HDFS-13599
 URL: https://issues.apache.org/jira/browse/HDFS-13599
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.7.3
 Environment: Hadoop 2.7.3.2.6.4.0-91
Subversion g...@github.com:hortonworks/hadoop.git -r 
a4b6e1c0e98b4488d38bcc0a241dcbe3538b1c4d
Compiled by jenkins on 2018-01-04T10:27Z
Compiled with protoc 2.5.0
From source with checksum 9c28f884302610b59b221c7fbaeac3e
This command was run using 
/usr/hdp/2.6.4.0-91/hadoop/hadoop-common-2.7.3.2.6.4.0-91.jar
Reporter: SH Kim


I tried to test HDFS HA by shutting down 2 of the 3 datanodes and 1 of the 2 region 
servers. After shutting them down, I could not access or write data at all using the 
remaining datanode and region server. I would like to know whether this is expected 
behavior or whether I have misconfigured something.






[jira] [Created] (HDFS-13598) Reduce unnecessary byte-to-string transform operation in INodesInPath#toString

2018-05-21 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13598:


 Summary: Reduce unnecessary byte-to-string transform operation in 
INodesInPath#toString
 Key: HDFS-13598
 URL: https://issues.apache.org/jira/browse/HDFS-13598
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Yiqun Lin


Every time we invoke {{INodesInPath#toString()}}, we trigger a byte-to-string 
transform operation:
{code:java}
private String toString(boolean vaildateObject) {
  if (vaildateObject) {
    validate();
  }

  final StringBuilder b = new StringBuilder(getClass().getSimpleName())
      .append(": path = ").append(DFSUtil.byteArray2PathString(path))
      .append("\n  inodes = ");
  ...
}
{code}
But this conversion actually needs to happen at most once. {{INodesInPath}} already 
defines a String field named {{pathname}} that stores the path string value, so we 
can use {{getPath()}} to replace {{DFSUtil.byteArray2PathString(path)}}.
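
A minimal sketch of the proposed change, assuming {{getPath()}} returns the cached 
{{pathname}} value:
{code:java}
final StringBuilder b = new StringBuilder(getClass().getSimpleName())
    .append(": path = ").append(getPath())  // reuse the cached pathname string
    .append("\n  inodes = ");
{code}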


