[jira] [Resolved] (HDFS-10357) Ozone: Replace Jersey container with Netty Container

2018-05-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-10357.
-
Resolution: Invalid

The architecture changed completely when we pulled the code out into HDDS, and 
this part was entirely re-written, so this JIRA is now invalid.

> Ozone: Replace Jersey container with Netty Container
> 
>
> Key: HDFS-10357
> URL: https://issues.apache.org/jira/browse/HDFS-10357
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
>
> In the ozone branch, we have implemented the web interface calls using JAX-RS. 
> This was very useful while the REST interfaces were in flux. This JIRA 
> proposes to replace the Jersey-based code with pure Netty and remove any 
> dependency that Ozone has on Jersey. This will make the Ozone web interface 
> code both faster and simpler.
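Purely as an illustration of what dropping JAX-RS entails, the sketch below hand-rolls the routing table that Jersey's annotations would otherwise provide. The paths and handler names are invented, and Netty's HTTP types are elided so the sketch stays self-contained:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch: explicit request routing that a pure-Netty
 *  handler implies once the JAX-RS annotations are gone. */
class OzoneRouter {
    // Maps "METHOD /path-prefix" to a handler name (placeholder for a real handler).
    private static final Map<String, String> ROUTES = new LinkedHashMap<>();
    static {
        ROUTES.put("GET /volumes", "listVolumes");
        ROUTES.put("PUT /volumes", "createVolume");
        ROUTES.put("GET /keys", "getKey");
    }

    /** Returns the handler name for a request, or "notFound". */
    static String route(String method, String path) {
        for (Map.Entry<String, String> e : ROUTES.entrySet()) {
            String[] parts = e.getKey().split(" ", 2);
            if (parts[0].equals(method) && path.startsWith(parts[1])) {
                return e.getValue();
            }
        }
        return "notFound";
    }
}
```

The real change would wire such dispatch into a Netty channel handler; the point here is only that routing becomes explicit code rather than annotation scanning.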



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11139) Ozone: SCM: Handle duplicate Datanode ID

2018-05-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-11139.
-
Resolution: Won't Fix

> Ozone: SCM: Handle duplicate Datanode ID 
> -
>
> Key: HDFS-11139
> URL: https://issues.apache.org/jira/browse/HDFS-11139
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: OzonePostMerge, tocheck
> Fix For: HDFS-7240
>
>
> The datanode ID is used when a datanode registers. It is assumed that 
> datanode IDs are unique across the cluster. 
> However, due to operator error or other causes we might encounter a duplicate 
> datanode ID. SCM should be able to recognize this and handle it correctly. 
> Here is a subset of datanode scenarios it needs to handle:
> 1. Normal datanode
> 2. A datanode's metadata copied by an operator to another node
> 3. A datanode being renamed (hostname change)
> 4. Container reports -- two machines with the same datanode ID; SCM thinks 
> they are the same node.
> 5. Decommission -- we decommission both nodes if the IDs are the same.
> 6. Commands will be sent to both nodes.
> So it is necessary that SCM identify when a datanode is reusing a datanode ID 
> that is already in use by another node.
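As a hedged sketch of the identification step the last line asks for (class and method names are invented, and a real SCM would consult its node registry), SCM could remember the address each datanode ID last registered from and reject a re-registration from a different address:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of a duplicate-ID check at registration time:
 *  the same datanode ID arriving from a different address is treated
 *  as a duplicate rather than a normal re-registration. */
class DatanodeIdRegistry {
    private final Map<String, String> idToAddress = new HashMap<>();

    /** Returns true if registration is accepted, false if the ID is
     *  already in use by a node at a different address. */
    boolean register(String datanodeId, String address) {
        String known = idToAddress.get(datanodeId);
        if (known != null && !known.equals(address)) {
            return false; // duplicate ID from another machine: reject
        }
        idToAddress.put(datanodeId, address);
        return true; // new ID, or the same node re-registering
    }
}
```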






[jira] [Resolved] (HDFS-12761) Ozone: Merge Ozone to trunk

2018-05-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12761.
-
Resolution: Done

> Ozone: Merge Ozone to trunk
> ---
>
> Key: HDFS-12761
> URL: https://issues.apache.org/jira/browse/HDFS-12761
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
>
> Based on the discussion in HDFS-7240, this JIRA is a place where we can 
> discuss low level code/design/architecture details of Ozone. I expect 
> comments here to spawn work items for ozone.
> cc:[~ste...@apache.org], [~cheersyang], [~linyiqun], [~yuanbo], [~xyao], 
> [~vagarychen],[~jnp], [~arpitagarwal], [~msingh], [~elek], [~nandakumar131], 
> [~szetszwo], [~ljain], [~shashikant]






[jira] [Created] (HDDS-105) SCM CA: Handle CRL

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-105:
---

 Summary: SCM CA: Handle CRL
 Key: HDDS-105
 URL: https://issues.apache.org/jira/browse/HDDS-105
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao









[jira] [Created] (HDDS-106) SCM CA: Handle CRL

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-106:
---

 Summary: SCM CA: Handle CRL
 Key: HDDS-106
 URL: https://issues.apache.org/jira/browse/HDDS-106
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao









[jira] [Created] (HDDS-104) SCM CA: web portal to manually approve CSRs

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-104:
---

 Summary: SCM CA: web portal to manually approve CSRs
 Key: HDDS-104
 URL: https://issues.apache.org/jira/browse/HDDS-104
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar









[jira] [Created] (HDDS-103) StorageContainerDatanodeProtocol for CSR and Certificate

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-103:
---

 Summary: StorageContainerDatanodeProtocol for CSR and Certificate
 Key: HDDS-103
 URL: https://issues.apache.org/jira/browse/HDDS-103
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar









[jira] [Created] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-102:
---

 Summary: SCM CA: SCM CA server signs certificate for approved CSR
 Key: HDDS-102
 URL: https://issues.apache.org/jira/browse/HDDS-102
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Anu Engineer









[jira] [Created] (HDDS-101) SCM CA: generate CSR for SCM CA clients

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-101:
---

 Summary: SCM CA: generate CSR for SCM CA clients
 Key: HDDS-101
 URL: https://issues.apache.org/jira/browse/HDDS-101
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao









[jira] [Created] (HDDS-100) SCM CA: generate public/private key pair for SCM/OM/DNs

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-100:
---

 Summary: SCM CA: generate public/private key pair for SCM/OM/DNs
 Key: HDDS-100
 URL: https://issues.apache.org/jira/browse/HDDS-100
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao









[jira] [Created] (HDDS-99) Add SCM Audit log

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-99:
--

 Summary: Add SCM Audit log
 Key: HDDS-99
 URL: https://issues.apache.org/jira/browse/HDDS-99
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


This ticket is opened to add SCM audit log.






[jira] [Created] (HDDS-98) Adding Ozone Manager Audit Log

2018-05-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-98:
--

 Summary: Adding Ozone Manager Audit Log
 Key: HDDS-98
 URL: https://issues.apache.org/jira/browse/HDDS-98
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to add ozone manager's audit log. 






[jira] [Created] (HDDS-97) Create Version File in Datanode

2018-05-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-97:
--

 Summary: Create Version File in Datanode
 Key: HDDS-97
 URL: https://issues.apache.org/jira/browse/HDDS-97
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


Create a version file under the dfs.datanode.dir/hdds/ path.

The content of the version file:
 # scmUuid
 # cTime
 # layoutVersion

When a datanode requests the SCM version, the response includes the scmUuid.

With this response, the datanode should be able to create its version file.
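A minimal sketch of what writing those three fields could look like, assuming the java.util.Properties format that HDFS VERSION files traditionally use; the class name and header comment are invented:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.util.Properties;

/** Illustrative sketch only: serializing the three proposed version-file
 *  fields in java.util.Properties key=value format. */
class VersionFileSketch {
    static String format(String scmUuid, long cTime, int layoutVersion) {
        Properties props = new Properties();
        props.setProperty("scmUuid", scmUuid);
        props.setProperty("cTime", Long.toString(cTime));
        props.setProperty("layoutVersion", Integer.toString(layoutVersion));
        StringWriter out = new StringWriter();
        try {
            props.store(out, "HDDS datanode version file");
        } catch (IOException e) {
            throw new RuntimeException(e); // StringWriter never actually throws
        }
        return out.toString();
    }
}
```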

 






[jira] [Created] (HDFS-13603) Warmup NameNode EDEK caches retries continuously if there's an invalid key

2018-05-22 Thread Antony Jay (JIRA)
Antony Jay created HDFS-13603:
-

 Summary: Warmup NameNode EDEK caches retries continuously if 
there's an invalid key 
 Key: HDFS-13603
 URL: https://issues.apache.org/jira/browse/HDFS-13603
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, namenode
Affects Versions: 2.8.0
Reporter: Antony Jay


https://issues.apache.org/jira/browse/HDFS-9405 adds a background thread to 
pre-warm the EDEK cache. 

However, this thread retries continuously if key retrieval fails for even one 
encryption zone. In our use case, we have temporarily removed the keys for 
certain encryption zones, and the NameNode and KMS logs are currently filled 
with errors from the background thread retrying forever.

The pre-warm thread should:
 * Continue to refresh the other encryption zones even if it fails for one
 * Retry only if it fails for all encryption zones, which will be the case 
when the KMS is down.
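The proposed behavior can be sketched as follows (names are invented, and the real thread would call into the KMS provider rather than a passed-in function): warm each zone independently and signal a retry pass only when every zone failed, i.e. the KMS-down case.

```java
import java.util.List;
import java.util.function.Function;

/** Hypothetical sketch: per-zone warmup that tolerates individual
 *  failures and retries only when all zones fail. */
class EdekWarmup {
    /** Returns true if a retry pass is needed (every zone failed). */
    static boolean warmAll(List<String> zones,
                           Function<String, Boolean> warmOne) {
        int failures = 0;
        for (String zone : zones) {
            try {
                if (!warmOne.apply(zone)) {
                    failures++; // one bad key must not abort the others
                }
            } catch (RuntimeException e) {
                failures++; // treat a thrown error the same as a failed warm
            }
        }
        return !zones.isEmpty() && failures == zones.size();
    }
}
```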

 






[jira] [Resolved] (HDFS-13125) Improve efficiency of JN -> Standby Pipeline Under Frequent Edit Tailing

2018-05-22 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen resolved HDFS-13125.

Resolution: Duplicate

This was subsumed by HDFS-13150.

> Improve efficiency of JN -> Standby Pipeline Under Frequent Edit Tailing
> 
>
> Key: HDFS-13125
> URL: https://issues.apache.org/jira/browse/HDFS-13125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The current edit tailing pipeline is designed for
> * High resiliency
> * High throughput
> and was _not_ designed for low latency.
> It was designed under the assumption that each edit log segment would 
> typically be read all at once, e.g. on startup or the SbNN tailing the entire 
> thing after it is finalized. The ObserverNode should be reading constantly 
> from the JournalNodes' in-progress edit logs with low latency, to reduce the 
> lag time from when a transaction is committed on the ANN and when it is 
> visible on the ObserverNode.
> Due to the critical nature of this pipeline to the health of HDFS, it would 
> be better not to redesign it altogether. Based on some experiments, it seems 
> that if we mitigate the following issues, lag times drop to low levels (low 
> hundreds of milliseconds even under very high write load):
> * The overhead of creating a new HTTP connection for each time new edits are 
> fetched. This makes sense when you're expecting to tail an entire segment; it 
> does not when you may only be fetching a small number of edits. We can 
> mitigate this by allowing edits to be tailed via an RPC call, or by adding a 
> connection pool for the existing connections to the journal.
> * The overhead of transmitting a whole file at once. Right now, when an edit 
> segment is requested, the JN sends the entire segment, and the SbNN ignores 
> edits up to the ones it wants. This one may be trickier to solve, but one 
> suggestion is to keep recently logged edits in memory, avoiding the need to 
> serve them from file at all and allowing the JN to quickly serve only the 
> required edits.
> We can implement these as optimizations on top of the existing logic, with 
> fallbacks to the current slow-but-resilient pipeline.
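The second suggestion, keeping recently logged edits in memory, could be sketched as a small bounded cache (all names invented; real edits would carry serialized payloads rather than sizes, and eviction forces the caller back to the on-disk segment):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Illustrative sketch: bounded in-memory cache of recent transactions
 *  so small tail reads can be served without touching the segment file. */
class RecentEditsCache {
    private final int capacity;
    private final Deque<long[]> edits = new ArrayDeque<>(); // {txid, size}

    RecentEditsCache(int capacity) { this.capacity = capacity; }

    synchronized void log(long txid, long payloadSize) {
        edits.addLast(new long[]{txid, payloadSize});
        while (edits.size() > capacity) {
            edits.removeFirst(); // evict oldest; readers fall back to the file
        }
    }

    /** Edits with txid >= fromTxId, or empty if they aged out of the cache. */
    synchronized List<long[]> tail(long fromTxId) {
        List<long[]> out = new ArrayList<>();
        if (!edits.isEmpty() && edits.peekFirst()[0] > fromTxId) {
            return out; // requested txid already evicted: use the file instead
        }
        for (long[] e : edits) {
            if (e[0] >= fromTxId) {
                out.add(e);
            }
        }
        return out;
    }
}
```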






[jira] [Created] (HDDS-96) Add an option in ozone script to generate a site file with minimally required ozone configs

2018-05-22 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-96:
-

 Summary: Add an option in ozone script to generate a site file 
with minimally required ozone configs
 Key: HDDS-96
 URL: https://issues.apache.org/jira/browse/HDDS-96
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Users must be able to execute a command like 'ozone genconf -output /path'.

Such an option must generate an ozone-site.xml file with the minimally 
required ozone configs in the specified path.
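A hedged sketch of what the generation step could look like; the property names below are examples of commonly required ozone settings, not the authoritative minimal set, and the class name is invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch: emitting a Hadoop-style *-site.xml with a
 *  minimal set of example ozone properties. */
class GenConfSketch {
    static String generate() {
        Map<String, String> minimal = new LinkedHashMap<>();
        minimal.put("ozone.enabled", "true");               // example key
        minimal.put("ozone.metadata.dirs", "/tmp/ozone");   // example key
        minimal.put("ozone.scm.names", "localhost");        // example key
        StringBuilder sb = new StringBuilder("<configuration>\n");
        for (Map.Entry<String, String> e : minimal.entrySet()) {
            sb.append("  <property>\n")
              .append("    <name>").append(e.getKey()).append("</name>\n")
              .append("    <value>").append(e.getValue()).append("</value>\n")
              .append("  </property>\n");
        }
        return sb.append("</configuration>\n").toString();
    }
}
```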

 






[jira] [Created] (HDFS-13602) Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations

2018-05-22 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13602:
--

 Summary: Optimize checkOperation(WRITE) check in FSNamesystem 
getBlockLocations
 Key: HDFS-13602
 URL: https://issues.apache.org/jira/browse/HDFS-13602
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Erik Krogen
Assignee: Chao Sun


Similar to the work done in HDFS-4591 to avoid having to take a write lock 
before checking if an operation category is allowed, we can do the same for the 
write lock that is taken sometimes (when updating access time) within 
getBlockLocations.

This is particularly useful when using the standby read feature (HDFS-12943), 
as it will be the case on an observer node that the operationCategory(READ) 
check succeeds but the operationCategory(WRITE) check fails. It would be ideal 
to fail this check _before_ acquiring the write lock.
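The pattern being proposed, verifying the operation category before paying for the write lock, can be sketched as follows (names invented; FSNamesystem's real locking and HA checks are considerably more involved):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.BooleanSupplier;

/** Hypothetical sketch: fail the WRITE-category check before taking the
 *  write lock, so an observer node rejects fast instead of contending. */
class AccessTimeUpdate {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Returns true if the access-time update was attempted. */
    boolean maybeUpdateAccessTime(BooleanSupplier writeAllowed,
                                  Runnable update) {
        if (!writeAllowed.getAsBoolean()) {
            return false; // observer: reject before acquiring the write lock
        }
        lock.writeLock().lock();
        try {
            update.run();
            return true;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```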






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-22 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/475/

[May 21, 2018 1:36:26 PM] (msingh) HDDS-57. 
TestContainerCloser#testRepeatedClose and
[May 21, 2018 3:01:51 PM] (Bharat) HDDS-87:Fix test failures with uninitialized 
storageLocation field in
[May 21, 2018 3:10:41 PM] (haibochen) YARN-8248. Job hangs when a job requests 
a resource that its queue does
[May 21, 2018 5:33:00 PM] (shahrs87) Skip the proxy user check if the ugi has 
not been initialized.
[May 21, 2018 5:38:20 PM] (msingh) HDDS-71. Send ContainerType to Datanode 
during container creation.
[May 21, 2018 8:14:58 PM] (ericp) YARN-8179: Preemption does not happen due to 
natural_termination_factor
[May 21, 2018 11:09:24 PM] (xyao) HDDS-82. Merge ContainerData and 
ContainerStatus classes. Contributed by
[May 22, 2018 8:03:31 AM] (msingh) HADOOP-15474. Rename properties introduced 
for . Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestAclCLI 
   hadoop.cli.TestAclCLIWithPosixAclInheritance 
   hadoop.cli.TestCacheAdminCLI 
   hadoop.cli.TestCryptoAdminCLI 
   hadoop.cli.TestDeleteCLI 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.cli.TestHDFSCLI 
   hadoop.cli.TestXAttrCLI 
   hadoop.fs.contract.hdfs.TestHDFSContractAppend 
   hadoop.fs.contract.hdfs.TestHDFSContractConcat 
   hadoop.fs.contract.hdfs.TestHDFSContractCreate 
   hadoop.fs.contract.hdfs.TestHDFSContractDelete 
   hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus 
   hadoop.fs.contract.hdfs.TestHDFSContractMkdir 
   hadoop.fs.contract.hdfs.TestHDFSContractOpen 
   hadoop.fs.contract.hdfs.TestHDFSContractPathHandle 
   hadoop.fs.contract.hdfs.TestHDFSContractRename 
   hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory 
   hadoop.fs.contract.hdfs.TestHDFSContractSeek 
   hadoop.fs.contract.hdfs.TestHDFSContractSetTimes 
   hadoop.fs.loadGenerator.TestLoadGenerator 
   hadoop.fs.permission.TestStickyBit 
   hadoop.fs.shell.TestHdfsTextCommand 
   hadoop.fs.TestEnhancedByteBufferAccess 
   hadoop.fs.TestFcHdfsCreateMkdir 
   hadoop.fs.TestFcHdfsPermission 
   hadoop.fs.TestFcHdfsSetUMask 
   hadoop.fs.TestGlobPaths 
   hadoop.fs.TestHDFSFileContextMainOperations 
   hadoop.fs.TestHdfsNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.fs.TestSWebHdfsFileContextMainOperations 
   hadoop.fs.TestSymlinkHdfsDisable 
   hadoop.fs.TestSymlinkHdfsFileContext 
   hadoop.fs.TestSymlinkHdfsFileSystem 
   hadoop.fs.TestUnbuffer 
   hadoop.fs.TestUrlStreamHandler 
   hadoop.fs.TestWebHdfsFileContextMainOperations 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.fs.viewfs.TestViewFileSystemLinkFallback 
   hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash 
   hadoop.fs.viewfs.TestViewFileSystemWithAcls 
   hadoop.fs.viewfs.TestViewFileSystemWithTruncate 
   hadoop.fs.viewfs.TestViewFileSystemWithXAttrs 
   hadoop.fs.viewfs.TestViewFsAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFsDefaultValue 
   hadoop.fs.viewfs.TestViewFsFileStatusHdfs 
   hadoop.fs.viewfs.TestViewFsHdfs 
   hadoop.fs.viewfs.TestViewFsWithAcls 
   hadoop.fs.viewfs.TestViewFsWithXAttrs 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.client.impl.TestBlockReaderRemote 
   hadoop.hdfs.client.impl.TestClientBlockVerification 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer 
   hadoop.hdfs.qjournal.client.TestEpochsAreUnique 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeMXBean 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.qjournal.TestMiniJournalCluster 
   hadoop.hdfs.qjournal.TestNNWithQJM 
   hadoop.hdfs.qjournal.TestSecureNNWithQJM 
   hadoop.hdfs.security.TestDelegationToken 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.security.token.block.TestBlockToken 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer 
   hadoop.hdfs.server.balancer.TestBalance

[jira] [Created] (HDDS-95) Shade the hadoop-ozone/objectstore-service project

2018-05-22 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-95:


 Summary: Shade the hadoop-ozone/objectstore-service project
 Key: HDDS-95
 URL: https://issues.apache.org/jira/browse/HDDS-95
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Elek, Marton


Ozone has a datanode plugin (hadoop-hdds/container-service) which is activated 
as a datanode service plugin 
(dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService).

The HddsDatanodeService plugin in turn uses the hadoop-ozone/objectstore-service 
component (configured by 
hdds.datanode.plugins=org.apache.hadoop.ozone.web.OzoneHddsDatanodeService).

The goal is to shade all the required classes into one jar file 
(objectstore-service plus all its dependencies). If that jar is added to the 
classpath of any Hadoop cluster (say 3.0 or 3.1), it should start without any 
class conflicts (Ozone uses Hadoop trunk, where hadoop-common may be newer). 






[jira] [Created] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin

2018-05-22 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-94:


 Summary: Change ozone datanode command to start the standalone 
datanode plugin
 Key: HDDS-94
 URL: https://issues.apache.org/jira/browse/HDDS-94
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
 Fix For: 0.2.1


The current ozone datanode command starts the regular HDFS datanode with 
HddsDatanodeService enabled as a datanode plugin.

The goal is to start only HddsDatanodeService.java (the main function is 
already there, but GenericOptionsParser should be adopted). 






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/788/

[May 21, 2018 10:12:34 AM] (stevel) HADOOP-15478. WASB: hflush() and hsync() 
regression. Contributed by
[May 21, 2018 1:36:26 PM] (msingh) HDDS-57. 
TestContainerCloser#testRepeatedClose and
[May 21, 2018 3:01:51 PM] (Bharat) HDDS-87:Fix test failures with uninitialized 
storageLocation field in
[May 21, 2018 3:10:41 PM] (haibochen) YARN-8248. Job hangs when a job requests 
a resource that its queue does
[May 21, 2018 5:33:00 PM] (shahrs87) Skip the proxy user check if the ugi has 
not been initialized.
[May 21, 2018 5:38:20 PM] (msingh) HDDS-71. Send ContainerType to Datanode 
during container creation.
[May 21, 2018 8:14:58 PM] (ericp) YARN-8179: Preemption does not happen due to 
natural_termination_factor
[May 21, 2018 11:09:24 PM] (xyao) HDDS-82. Merge ContainerData and 
ContainerStatus classes. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18391] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18953] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 35536] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 36405] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CreateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13441] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DatanodeBlockID$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1216] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 30843] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16100] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16576] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23773] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$KeyValue$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1857] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 17078] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListKeyRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 24310] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 21568] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutSmallFileResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 33786] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ReadContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13881] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datan

[jira] [Created] (HDDS-93) Refactor Standalone pipeline protocols for better manageability.

2018-05-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-93:
-

 Summary: Refactor Standalone pipeline protocols for better 
manageability.
 Key: HDDS-93
 URL: https://issues.apache.org/jira/browse/HDDS-93
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, Ozone Datanode
Reporter: Mukul Kumar Singh


Currently, the standalone protocol uses XceiverClient to create a connection 
with the datanode. This class needs to be renamed to something like 
XceiverClientStandAlone, and likewise the server needs to be renamed to 
XceiverServerStandAlone. 

Also, with HDDS-49 the standalone protocol will add gRPC as a transport 
layer. With this, the client needs to be modified to allow a seamless switch 
between the Netty and gRPC transports.
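The seamless-switch requirement can be sketched as a common client interface with one implementation per transport (all names are illustrative, not the actual HDDS classes):

```java
/** Illustrative sketch: select a transport implementation behind a
 *  shared interface, so callers are unaware of Netty vs. gRPC. */
class TransportSwitch {
    interface XceiverTransport {
        String send(String request);
    }

    static class NettyTransport implements XceiverTransport {
        public String send(String request) { return "netty:" + request; }
    }

    static class GrpcTransport implements XceiverTransport {
        public String send(String request) { return "grpc:" + request; }
    }

    /** Pick a transport from configuration, defaulting to Netty. */
    static XceiverTransport forName(String name) {
        return "grpc".equalsIgnoreCase(name) ? new GrpcTransport()
                                             : new NettyTransport();
    }
}
```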


