[jira] [Updated] (HDFS-14229) Nonblocking HDFS create|write

2019-01-24 Thread Zheng Shao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Shao updated HDFS-14229:
--
Description: 
Right now, the create call on HDFS is blocking.  The write call can also be 
blocking if the write buffer has reached its limit.

However, for most applications, the only requirement is that when "close" on a 
file is called, the file is persisted and visible in HDFS.  There is no need 
for the file to be visible right after the "create" call returns.

A particular use case for this is using HDFS as a place to store shuffle data 
(in Spark, MapReduce, or other loosely coupled applications).

 

This Jira proposes that we add a new "async-hdfs://" protocol that maps to a 
new AsyncDistributedFileSystem class, whose create call is nonblocking and 
which returns an FSOutputStream that never blocks on write (even when the 
file has not been physically created on HDFS yet).  The close call on the 
FSOutputStream will block until the creation and all previous writes have 
completed and the file is closed.

 

Note that this Jira is related to 
https://issues.apache.org/jira/browse/HDFS-9924 but is not the same.  HDFS-9924 
covers async rename etc., while this Jira covers async create|write.
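For illustration, a minimal sketch of how the proposed API might be used. 
Nothing below exists yet; the scheme, class, and blocking semantics come from 
the proposal above, while the host, port, path, and data are placeholders:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AsyncHdfsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "async-hdfs://" would map to the proposed AsyncDistributedFileSystem.
    FileSystem fs = FileSystem.get(URI.create("async-hdfs://nn1:8020"), conf);

    // create() returns immediately; the namenode RPC proceeds in the
    // background, so the file may not physically exist yet.
    FSDataOutputStream out = fs.create(new Path("/shuffle/part-00000"));

    // write() buffers and never blocks, even before the create completes.
    out.write(new byte[]{1, 2, 3});

    // close() is the one blocking call: it waits until the create and all
    // previous writes have completed and the file is visible in HDFS.
    out.close();
  }
}
{code}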

  was:
Right now, the create call on HDFS is blocking.  The write call can also be 
blocking if the write buffer reached its limit.

However, for most applications, the only requirement is that when "close" on a 
file is called, the file is persisted and visible in HDFS.  There is no need to 
make "create" visible right after the "create" call returns.

A particular use case of this is to use HDFS as a place to store shuffle data 
(in Spark, Map-Reduce, or other loose-coupled applications).

 

This Jira proposes that we add a new "async-hdfs://" protocol that maps to a 
new AsyncDistributedFileSystem class, whose create call is nonblocking but 
still returns a FSOutputStream that is never blocked on write (even when the 
file has not been physically created on HDFS yet).  The close call on the 
FSOutputStream will block until the creation and all previous writes are 
completed and the file is closed.

 


> Nonblocking HDFS create|write
> -
>
> Key: HDFS-14229
> URL: https://issues.apache.org/jira/browse/HDFS-14229
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Zheng Shao
>Priority: Major
>
> Right now, the create call on HDFS is blocking.  The write call can also be 
> blocking if the write buffer has reached its limit.
> However, for most applications, the only requirement is that when "close" on 
> a file is called, the file is persisted and visible in HDFS.  There is no 
> need for the file to be visible right after the "create" call returns.
> A particular use case for this is using HDFS as a place to store shuffle data 
> (in Spark, MapReduce, or other loosely coupled applications).
>  
> This Jira proposes that we add a new "async-hdfs://" protocol that maps to a 
> new AsyncDistributedFileSystem class, whose create call is nonblocking and 
> which returns an FSOutputStream that never blocks on write (even when the 
> file has not been physically created on HDFS yet).  The close call on the 
> FSOutputStream will block until the creation and all previous writes have 
> completed and the file is closed.
>  
> Note that this Jira is related to 
> https://issues.apache.org/jira/browse/HDFS-9924 but is not the same.  
> HDFS-9924 covers async rename etc., while this Jira covers async create|write.






[jira] [Created] (HDFS-14229) Nonblocking HDFS create|write

2019-01-24 Thread Zheng Shao (JIRA)
Zheng Shao created HDFS-14229:
-

 Summary: Nonblocking HDFS create|write
 Key: HDFS-14229
 URL: https://issues.apache.org/jira/browse/HDFS-14229
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zheng Shao


Right now, the create call on HDFS is blocking.  The write call can also be 
blocking if the write buffer has reached its limit.

However, for most applications, the only requirement is that when "close" on a 
file is called, the file is persisted and visible in HDFS.  There is no need 
for the file to be visible right after the "create" call returns.

A particular use case for this is using HDFS as a place to store shuffle data 
(in Spark, MapReduce, or other loosely coupled applications).

 

This Jira proposes that we add a new "async-hdfs://" protocol that maps to a 
new AsyncDistributedFileSystem class, whose create call is nonblocking and 
which returns an FSOutputStream that never blocks on write (even when the 
file has not been physically created on HDFS yet).  The close call on the 
FSOutputStream will block until the creation and all previous writes have 
completed and the file is closed.

 






[jira] [Comment Edited] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-24 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752002#comment-16752002
 ] 

Fengnan Li edited comment on HDFS-14118 at 1/25/19 7:45 AM:


[~csun] Thanks for the review.
For 1, the resolving logic is put into a helper that is called inside 
getProxyAddress, so if ObserverReadProxyProvider (or any other proxy provider) 
wants to inherit this functionality, it picks up the resolving logic 
automatically. I did it this way to avoid code duplication.
For 2, this is a good idea. I can drop the factory this way and load the class 
from conf.


was (Author: fengnanli):
[~csun] Thanks for the review
for 1, the current resolving logic is put into the helper that is called inside 
the getProxyAddress, so if ObserverReadProxyProvider(or other proxy provider) 
wants to inherit this function, it can take the resolving logic automatically. 
I did the current way to avoid code duplicate. 
for 2, this is a good idea. I can drop the factory this way and loads the class 
from conf.

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14118.001.patch, HDFS-14118.patch
>
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and any update to the routers (adding/removing) would require a 
> change to every client, which is a painful process.
> DNS can be used here to resolve the single domain name that clients know into 
> the list of routers in the current config. However, DNS cannot restrict 
> resolution to only the healthy routers based on certain health thresholds.
> There are several ways this can be solved. One way is to have a separate 
> script regularly check the status of each router and update the DNS records 
> if a router fails the health thresholds; security would have to be carefully 
> considered for this approach. Another way is to have the clients do the 
> normal connecting/failover after they get the list of routers, which requires 
> changing the current failover proxy provider.
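For illustration, a minimal sketch of the DNS resolution step being described, 
using plain JDK lookups. The class and method names here are assumptions, not 
taken from the patch:

{code:java}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public final class RouterDnsResolverSketch {
  /** Resolve one configured domain name into the current list of routers. */
  public static List<InetSocketAddress> resolve(String domain, int port)
      throws UnknownHostException {
    List<InetSocketAddress> routers = new ArrayList<>();
    // A single DNS name may map to many A records, one per router.
    for (InetAddress addr : InetAddress.getAllByName(domain)) {
      routers.add(new InetSocketAddress(addr, port));
    }
    // DNS cannot filter on health; client-side failover (or an external
    // health-check script updating the DNS records) must handle dead routers.
    return routers;
  }
}
{code}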






[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-24 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752002#comment-16752002
 ] 

Fengnan Li commented on HDFS-14118:
---

[~csun] Thanks for the review.
For 1, the resolving logic is put into a helper that is called inside 
getProxyAddress, so if ObserverReadProxyProvider (or any other proxy provider) 
wants to inherit this functionality, it picks up the resolving logic 
automatically. I did it this way to avoid code duplication.
For 2, this is a good idea. I can drop the factory this way and load the class 
from conf.
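A rough sketch of the inheritance pattern being described: the resolving logic 
lives in a shared helper invoked from getProxyAddress, so subclasses inherit 
it for free. Apart from the class and method names mentioned above, all 
details are assumptions:

{code:java}
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;

public class ConfiguredFailoverProxyProviderSketch {

  // Called by the proxy provider machinery; any subclass that reuses this
  // method picks up the resolution logic automatically.
  protected InetSocketAddress getProxyAddress(Configuration conf, String key) {
    return resolveAddress(conf.get(key));
  }

  // Shared helper holding the resolving logic, so it is written only once.
  protected InetSocketAddress resolveAddress(String hostPort) {
    String[] parts = hostPort.split(":");
    return new InetSocketAddress(parts[0], Integer.parseInt(parts[1]));
  }
}

// e.g. an observer-read provider extends the class above and inherits
// getProxyAddress and resolveAddress without duplicating any code.
class ObserverReadProxyProviderSketch
    extends ConfiguredFailoverProxyProviderSketch {
}
{code}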

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14118.001.patch, HDFS-14118.patch
>
>
> Clients will need to know about routers to talk to the HDFS cluster 
> (obviously), and any update to the routers (adding/removing) would require a 
> change to every client, which is a painful process.
> DNS can be used here to resolve the single domain name that clients know into 
> the list of routers in the current config. However, DNS cannot restrict 
> resolution to only the healthy routers based on certain health thresholds.
> There are several ways this can be solved. One way is to have a separate 
> script regularly check the status of each router and update the DNS records 
> if a router fails the health thresholds; security would have to be carefully 
> considered for this approach. Another way is to have the clients do the 
> normal connecting/failover after they get the list of routers, which requires 
> changing the current failover proxy provider.






[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751996#comment-16751996
 ] 

Xiaoyu Yao commented on HDDS-991:
-

[~ajayydv], can you rebase the patch? It no longer applies after HDDS-793. 
Thanks!

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster






[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-24 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751991#comment-16751991
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~anu], please review it when you get a chance.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch
>
>







[jira] [Comment Edited] (HDDS-793) Support custom key/value annotations on volume/bucket/key

2019-01-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751990#comment-16751990
 ] 

Xiaoyu Yao edited comment on HDDS-793 at 1/25/19 7:27 AM:
--

Thanks [~elek] for the update. Patch v4 LGTM, +1. I've committed patch v4 to 
trunk.


was (Author: xyao):
Thanks [~elek] for the contribution. I've commit the patch v4 to trunk. 

> Support custom key/value annotations on volume/bucket/key
> -
>
> Key: HDDS-793
> URL: https://issues.apache.org/jira/browse/HDDS-793
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-793.001.patch, HDDS-793.002.patch, 
> HDDS-793.003.patch, HDDS-793.004.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I propose to add a custom Map<String, String> annotation field to 
> objects/buckets and keys in Ozone Manager.
> It would enable building any extended functionality on top of the OM's 
> generic interface. For example:
>  * Support tags in Ozone S3 gateway 
> (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtagging.html)
>  * Support md5-based ETags in s3g
>  * Store s3-related authorization data (ACLs, policies) together with the 
> parent objects
> As an optional feature (could be implemented later), the client can define 
> the exposed annotations. For example, s3g can define which annotations should 
> be read from RocksDB on the OM side and sent to the client (s3g).
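A minimal sketch of the idea. The committed patch adds a WithMetadata helper 
(see the Hudson commit file list below); the field and method names in this 
sketch are assumptions, not the committed API:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Base helper carrying custom key/value annotations; the OM helper objects
// for volumes, buckets, and keys could extend it.
public class WithMetadataSketch {
  protected Map<String, String> metadata = new HashMap<>();

  public Map<String, String> getMetadata() {
    return metadata;
  }

  public void setMetadata(Map<String, String> metadata) {
    this.metadata = metadata;
  }
}
{code}

With such a map in place, s3g could store object tags, md5-based ETags, or ACL 
data as ordinary entries without any new OM interface.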






[jira] [Commented] (HDDS-793) Support custom key/value annotations on volume/bucket/key

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751992#comment-16751992
 ] 

Hudson commented on HDDS-793:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15827 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15827/])
HDDS-793. Support custom key/value annotations on volume/bucket/key. (xyao: rev 
9fc7df8afbc54db36b526310268a23627607af37)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBCSID.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/WithMetadata.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneKeyDetails.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/package-info.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/TestOzoneRestClient.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmBucketInfo.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/KeyValueUtil.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectHead.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmKeyInfo.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestSecureOzoneRpcClient.java


> Support custom key/value annotations on volume/bucket/key
> 

[jira] [Updated] (HDDS-793) Support custom key/value annotations on volume/bucket/key

2019-01-24 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-793:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~elek] for the contribution. I've committed patch v4 to trunk.

> Support custom key/value annotations on volume/bucket/key
> -
>
> Key: HDDS-793
> URL: https://issues.apache.org/jira/browse/HDDS-793
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-793.001.patch, HDDS-793.002.patch, 
> HDDS-793.003.patch, HDDS-793.004.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I propose to add a custom Map<String, String> annotation field to 
> objects/buckets and keys in Ozone Manager.
> It would enable building any extended functionality on top of the OM's 
> generic interface. For example:
>  * Support tags in Ozone S3 gateway 
> (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtagging.html)
>  * Support md5-based ETags in s3g
>  * Store s3-related authorization data (ACLs, policies) together with the 
> parent objects
> As an optional feature (could be implemented later), the client can define 
> the exposed annotations. For example, s3g can define which annotations should 
> be read from RocksDB on the OM side and sent to the client (s3g).






[jira] [Updated] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-24 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14202:
-
Attachment: HDFS-14202.003.patch

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch
>
>







[jira] [Updated] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1009:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~dineshchitlangia] for the contribution and [~bharatviswa] for the 
review. Committed it to trunk.

> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007






[jira] [Updated] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-973:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

+1. Thanks [~xyao] for the contribution and [~Sammi] for reporting it. Thanks 
to [~linyiqun] and [~elek] for the review. Committed it to trunk.

> HDDS/Ozone fail to build on Windows
> ---
>
> Key: HDDS-973
> URL: https://issues.apache.org/jira/browse/HDDS-973
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-973.001.patch
>
>
> Thanks [~Sammi] for reporting the issue on building hdds/ozone with Windows 
> OS. I can repro it locally and will post a fix shortly. 






[jira] [Updated] (HDDS-990) Typos in Ozone doc

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-990:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~adoroszlai] for the contribution and [~anu] for the review. Committed 
it to trunk.

> Typos in Ozone doc
> --
>
> Key: HDDS-990
> URL: https://issues.apache.org/jira/browse/HDDS-990
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: HDDS-990.001.patch, HDDS-990.002.patch, 
> HDDS-990.003.patch
>
>
> Fix the following issues in {{hadoop-hdds/docs/content}}:
>  # {{bucket delete}} description and example reference the volume instead
>  # {{compose/ozone}} doesn't launch Namenode, only {{compose/ozone-hdfs}} does
>  # Java API example doesn't compile:
>  #* use regular quotes instead of "word-processor" ones
>  #* typos in variable and class names
>  # {{delete key}} -> {{key delete}}
>  # various other typos






[jira] [Updated] (HDDS-906) Display the ozone version on SCM/OM web ui instead of Hadoop version

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-906:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

+1. Thanks [~adoroszlai] for the contribution, [~elek] for reporting the issue 
and [~anu] for the review. Committed it to trunk.

> Display the ozone version on SCM/OM web ui instead of Hadoop version
> 
>
> Key: HDDS-906
> URL: https://issues.apache.org/jira/browse/HDDS-906
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM, SCM
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: newbie
> Fix For: 0.4.0
>
> Attachments: HDDS-906.001.patch, HDDS-906.002.patch, 
> HDDS-906.003.patch, HDDS-906.004.patch
>
>
> SCM and OM web UIs (http://localhost:9876 and http://localhost:9874) display 
> a version, but the displayed version is the version of the Hadoop 
> dependencies.
> This is provided by org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl, 
> which is the default implementation of ServiceRuntimeInfo. (Both OzoneManager 
> and StorageContainerManager extend this class.)
> We need to use the OzoneVersionInfo and HddsVersionInfo classes to display 
> the actual version instead of org.apache.hadoop.util.VersionInfo.
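A plausible shape of the fix, sketched under the assumption that the version 
string exposed through ServiceRuntimeInfoImpl can simply be overridden; the 
method and constant names are assumptions, not the committed change:

{code:java}
import org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl;
import org.apache.hadoop.ozone.util.OzoneVersionInfo;

// Sketch: report the Ozone build version instead of the version of the
// Hadoop dependencies.
public class OzoneVersionSketch extends ServiceRuntimeInfoImpl {
  public String getSoftwareVersion() {
    // org.apache.hadoop.util.VersionInfo would report the Hadoop version;
    // OzoneVersionInfo carries the actual Ozone build information.
    return OzoneVersionInfo.OZONE_VERSION_INFO.getVersion();
  }
}
{code}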






[jira] [Updated] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1007:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

+1. Thanks [~dineshchitlangia] for the contribution and [~anu] for the review. 
Committed this to trunk.

> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add a Robot test for the AuditParser tool.
> The robot test must run freon in order to generate an audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing OM is sufficient since the logs are generated using a common 
> mechanism.






[jira] [Commented] (HDDS-1006) AuditParser assumes incorrect log format

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751958#comment-16751958
 ] 

Hudson commented on HDDS-1006:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15826/])
HDDS-1006. AuditParser assumes incorrect log format. Contributed by 
(nandakumar131: rev c6d901af77efd7d0bdea7a0258932eac627a4b09)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/common/DatabaseHelper.java
* (edit) hadoop-ozone/tools/src/test/resources/testaudit.log


> AuditParser assumes incorrect log format
> 
>
> Key: HDDS-1006
> URL: https://issues.apache.org/jira/browse/HDDS-1006
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1006.00.patch
>
>
> While creating AuditParser, I had mistakenly used an incorrect test sample to 
> verify.
> Thus, due to the improper column positions, the auditparser would yield 
> incorrect query results for the columns Result, Exception and Params.
> I encountered this issue while trying to write a robot test for 
> auditparser (patch to follow soon).
> This jira aims to fix this issue and the sample test data.






[jira] [Commented] (HDDS-906) Display the ozone version on SCM/OM web ui instead of Hadoop version

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751960#comment-16751960
 ] 

Hudson commented on HDDS-906:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15826/])
HDDS-906. Display the ozone version on SCM/OM web ui instead of Hadoop 
(nandakumar131: rev 45c4cfe790bd6d7962698555c634a42b38c6cad1)
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServiceRuntimeInfoImpl.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/OzoneVersionInfo.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/VersionInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/HddsVersionInfo.java


> Display the ozone version on SCM/OM web ui instead of Hadoop version
> 
>
> Key: HDDS-906
> URL: https://issues.apache.org/jira/browse/HDDS-906
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM, SCM
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-906.001.patch, HDDS-906.002.patch, 
> HDDS-906.003.patch, HDDS-906.004.patch
>
>
> SCM and OM web UIs (http://localhost:9876 and http://localhost:9874) display 
> a version, but the displayed version is the version of the Hadoop 
> dependencies.
> This is provided by org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl, 
> which is the default implementation of ServiceRuntimeInfo. (Both OzoneManager 
> and StorageContainerManager extend this class.)
> We need to use the OzoneVersionInfo and HddsVersionInfo classes to display 
> the actual version instead of org.apache.hadoop.util.VersionInfo.






[jira] [Commented] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751961#comment-16751961
 ] 

Hudson commented on HDDS-973:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15826/])
HDDS-973. HDDS/Ozone fail to build on Windows. Contributed by Xiaoyu 
(nandakumar131: rev 5dae1a0c663cf9ab0e1e0463e5121afb0fa4a83e)
* (edit) hadoop-hdds/docs/pom.xml


> HDDS/Ozone fail to build on Windows
> ---
>
> Key: HDDS-973
> URL: https://issues.apache.org/jira/browse/HDDS-973
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-973.001.patch
>
>
> Thanks [~Sammi] for reporting the issue on building hdds/ozone with Windows 
> OS. I can repro it locally and will post a fix shortly. 






[jira] [Updated] (HDDS-1006) AuditParser assumes incorrect log format

2019-01-24 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1006:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

> AuditParser assumes incorrect log format
> 
>
> Key: HDDS-1006
> URL: https://issues.apache.org/jira/browse/HDDS-1006
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1006.00.patch
>
>
> While creating AuditParser, I had mistakenly used an incorrect test sample to 
> verify.
> Thus, due to the improper column positions, the auditparser would yield 
> incorrect query results for the columns Result, Exception and Params.
> I encountered this issue while trying to write a robot test for 
> auditparser (patch to follow soon).
> This jira aims to fix this issue and the sample test data.






[jira] [Commented] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751959#comment-16751959
 ] 

Hudson commented on HDDS-1007:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15826/])
HDDS-1007. Add robot test for AuditParser. Contributed by Dinesh 
(nandakumar131: rev 8ff9578126cb97bb4958fde78a7f08c6e2b30f4b)
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (add) hadoop-ozone/dist/src/main/smoketest/auditparser/parser.robot


> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add a Robot test for the AuditParser tool.
> The robot test must run freon in order to generate an audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing OM is sufficient since the logs are generated using a common 
> mechanism.






[jira] [Commented] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751962#comment-16751962
 ] 

Hudson commented on HDDS-1009:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15826/])
HDDS-1009. TestAbortMultipartUpload is missing the apache license text. 
(nandakumar131: rev a448b05287452d5063610fea3c10634040e205ad)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestAbortMultipartUpload.java


> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007






[jira] [Commented] (HDDS-1006) AuditParser assumes incorrect log format

2019-01-24 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751957#comment-16751957
 ] 

Nanda kumar commented on HDDS-1006:
---

Thanks [~dineshchitlangia] for the contribution and [~anu] for the review. 
Committed this to trunk.

> AuditParser assumes incorrect log format
> 
>
> Key: HDDS-1006
> URL: https://issues.apache.org/jira/browse/HDDS-1006
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1006.00.patch
>
>
> While creating AuditParser, I had mistakenly used an incorrect test sample to 
> verify.
> Thus, due to the improper column positions, the auditparser would yield 
> incorrect query results for the columns Result, Exception and Params.
> I encountered this issue while trying to write a robot test for 
> auditparser (patch to follow soon).
> This jira aims to fix this issue and the sample test data.






[jira] [Commented] (HDFS-14037) Fix SSLFactory truststore reloader thread leak in URLConnectionFactory

2019-01-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751953#comment-16751953
 ] 

Takanobu Asanuma commented on HDFS-14037:
-

I implemented the SSLConnectionConfigurator class instead of 
{{URLConnectionFactory#newSslConnConfigurator}}. Both override 
{{ConnectionConfigurator#configure}}, and both {{configure()}} methods are only 
called in {{URLConnectionFactory#openConnection}}. Aren't they the same?

> Fix SSLFactory truststore reloader thread leak in URLConnectionFactory
> --
>
> Key: HDFS-14037
> URL: https://issues.apache.org/jira/browse/HDFS-14037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, webhdfs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14037.1.patch, HDFS-14037.2.patch
>
>
> This is reported by [~yoshiata]. It is a similar issue as HADOOP-11368 and 
> YARN-5309 in URLConnectionFactory.
> {quote}An SSLFactory is created in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance, which in turn starts a trust 
> store reloader thread.
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> {quote}
> We observed many leaked threads when we used swebhdfs via NiFi cluster.
> {noformat}
> "Truststore reloader thread" Id=221 TIMED_WAITING  on null
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
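For context, the fixes for the analogous HADOOP-11368 and YARN-5309 leaks 
destroyed the SSLFactory when its owner was closed. A hedged sketch of that 
cleanup pattern applied here; retaining the factory in a field and the 
destroy() hook are assumptions, not the actual patch:

{code:java}
import org.apache.hadoop.security.ssl.SSLFactory;

class SslCleanupSketch {
  // Keep a reference instead of letting the factory leak after
  // newSslConnConfigurator returns.
  private SSLFactory sslFactory;

  void destroy() {
    if (sslFactory != null) {
      // SSLFactory#destroy stops the ReloadingX509TrustManager, which is
      // the "Truststore reloader thread" seen in the thread dump above.
      sslFactory.destroy();
      sslFactory = null;
    }
  }
}
{code}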






[jira] [Updated] (HDDS-1004) SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1004:
---
Target Version/s: 0.4.0

> SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and 
> FORCE_CLOSE events
> -
>
> Key: HDDS-1004
> URL: https://issues.apache.org/jira/browse/HDDS-1004
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1004.001.patch
>
>
> SCMContainerManager#updateContainerStateInternal currently fails for 
> QUASI_CLOSE and FORCE_CLOSE events.






[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-997:
--
Target Version/s: 0.4.0

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-991:
--
Target Version/s: 0.4.0

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster






[jira] [Updated] (HDDS-1004) SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1004:
---
Affects Version/s: 0.4.0

> SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and 
> FORCE_CLOSE events
> -
>
> Key: HDDS-1004
> URL: https://issues.apache.org/jira/browse/HDDS-1004
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1004.001.patch
>
>
> SCMContainerManager#updateContainerStateInternal currently fails for 
> QUASI_CLOSE and FORCE_CLOSE events.






[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-997:
--
Affects Version/s: 0.4.0

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-991:
--
Affects Version/s: 0.4.0

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster






[jira] [Commented] (HDDS-990) Typos in Ozone doc

2019-01-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751932#comment-16751932
 ] 

Anu Engineer commented on HDDS-990:
---

+1, thanks for making the Ozone documentation better. I really appreciate you 
taking the time to fix this issue.

> Typos in Ozone doc
> --
>
> Key: HDDS-990
> URL: https://issues.apache.org/jira/browse/HDDS-990
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Attachments: HDDS-990.001.patch, HDDS-990.002.patch, 
> HDDS-990.003.patch
>
>
> Fix the following issues in {{hadoop-hdds/docs/content}}:
>  # {{bucket delete}} description and example reference the volume instead
>  # {{compose/ozone}} doesn't launch Namenode, only {{compose/ozone-hdfs}} does
>  # Java API example doesn't compile:
>  #* use regular quotes instead of "word-processor" ones
>  #* typos in variable and class names
>  # {{delete key}} -> {{key delete}}
>  # various other typos






[jira] [Commented] (HDDS-906) Display the ozone version on SCM/OM web ui instead of Hadoop version

2019-01-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751931#comment-16751931
 ] 

Anu Engineer commented on HDDS-906:
---

+1, Thanks for the patch. Welcome to Ozone.

> Display the ozone version on SCM/OM web ui instead of Hadoop version
> 
>
> Key: HDDS-906
> URL: https://issues.apache.org/jira/browse/HDDS-906
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM, SCM
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-906.001.patch, HDDS-906.002.patch, 
> HDDS-906.003.patch, HDDS-906.004.patch
>
>
> SCM and OM web UIs (http://localhost:9876 and http://localhost:9874) display 
> a version, but the displayed version is the version of the Hadoop 
> dependencies.
> This is provided by org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl, 
> which is the default implementation of ServiceRuntimeInfo. (Both OzoneManager 
> and StorageContainerManager extend this class.)
> We need to use the OzoneVersionInfo and HddsVersionInfo classes to display 
> the actual version instead of org.apache.hadoop.util.VersionInfo.






[jira] [Commented] (HDFS-14037) Fix SSLFactory truststore reloader thread leak in URLConnectionFactory

2019-01-24 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751930#comment-16751930
 ] 

Brahma Reddy Battula commented on HDFS-14037:
-

bq.{{NamenodeHeartbeatService}} seems not to be related to 
{{URLConnectionFactory}}. Am I misunderstanding something?

Oh, yes, I was thinking of HDFS-13955, which introduced URLConnectionFactory. 
Ignore it.

bq.{{DFSck}} is a stand alone command line tool and {{Util}} uses 
{{URLConnectionFactory}} as a static field. I'm not sure whether we should call 
{{URLConnectionFactory#destroy}} for them. I will reconsider it.

ok.

bq.I didn't catch it. Could you elaborate on that?

Please see how it was before.

> Fix SSLFactory truststore reloader thread leak in URLConnectionFactory
> --
>
> Key: HDFS-14037
> URL: https://issues.apache.org/jira/browse/HDFS-14037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, webhdfs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14037.1.patch, HDFS-14037.2.patch
>
>
> This is reported by [~yoshiata]. It is a similar issue as HADOOP-11368 and 
> YARN-5309 in URLConnectionFactory.
> {quote}An SSLFactory is created in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance, which in turn starts a trust 
> store reloader thread.
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> {quote}
> We observed many leaked threads when we used swebhdfs via NiFi cluster.
> {noformat}
> "Truststore reloader thread" Id=221 TIMED_WAITING  on null
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751929#comment-16751929
 ] 

Anu Engineer commented on HDDS-1007:


+1, Looks good to me.

 

> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add a Robot test for the AuditParser tool.
> The robot test must run freon in order to generate an audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing OM is sufficient since the logs are generated using a common 
> mechanism.






[jira] [Commented] (HDDS-1006) AuditParser assumes incorrect log format

2019-01-24 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751926#comment-16751926
 ] 

Anu Engineer commented on HDDS-1006:


+1, I will commit this shortly.

> AuditParser assumes incorrect log format
> 
>
> Key: HDDS-1006
> URL: https://issues.apache.org/jira/browse/HDDS-1006
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1006.00.patch
>
>
> While creating AuditParser, I had mistakenly used an incorrect test sample to 
> verify.
> Thus, due to the improper column positions, the auditparser would yield 
> incorrect query results for the columns Result, Exception and Params.
> I encountered this issue while trying to write a robot test for 
> auditparser (patch to follow soon).
> This jira aims to fix this issue and the sample test data.






[jira] [Commented] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751923#comment-16751923
 ] 

Takanobu Asanuma commented on HDFS-14223:
-

Thanks for reviewing and committing it, [~brahmareddy]!

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: HDFS-13891
>
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{dfs.federation.router.file.resolver.client.class}} to 
> {{MultipleDestinationMountTableResolver}}. The current documents lack this 
> explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.
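For reference, the setting being documented would look roughly like this; the 
fully qualified class name is an assumption based on the RBF resolver package:

{code:xml}
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver</value>
  <description>Mount table resolver that supports multiple destinations
    (sub-clusters) for a single mount point.</description>
</property>
{code}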






[jira] [Comment Edited] (HDDS-699) Detect Ozone Network topology

2019-01-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751901#comment-16751901
 ] 

Xiaoyu Yao edited comment on HDDS-699 at 1/25/19 6:07 AM:
--

Thanks [~Sammi] for working on this. Patch v2 LGTM overall. Here are some 
comments. I will add more comments on unit test later. 

 

DatanodeDetails.java

Line 220: is there a reason to remove the compareTo @Override annotation?

 

Node.java

Line 45: NIT: typo "contamination" -> "concatenation"

 

Line 72: do we really want to allow setLevel() explicitly, or is it for testing 
only? This may mess up the actual level that is set via setParent().

 

InnerNode.java

Line 89: does the leafIndex count the excludedNodes?

 

InnerNodeImpl.java

Line 295-301: can we document this in the getLeaf() API?

 

network-topology-nodegroup.xml

Lines 56 and 68: the nodegroup in 68 does not seem to match the prefix ng.

NodeSchema.java

Line 80-90: maybe we can add a builder class to help reduce the number of 
constructors.

 

Line 102: do we assume case sensitive for the network path and prefix here?

 

NodeSchemaLoader.java

Line 126/128/130/186/187: NIT: can we get these predefined tag names defined 
as static constants?

 

NodeSchemaManager.java

Line 98: can we add javadoc for completePath()?


was (Author: xyao):
Thanks [~Sammi] for working on this. Patch v2 LGTM overall. Here are some 
comments. I will add more comments on unit test later. 

 

DatanodeDetails.java

Line 220: is there a reason to remove the compareTo @Override annotation?

 

Node.java

Line 45: NIT: typo "contamination" -> "concatenation"

 

Line 72: do we really want to allow setLevel() explicitly, or is it for 
testing only? This may mess up the actual level that is set up via 
setParent().

 

InnerNode.java

Line 89: does the leafIndex count the excludedNodes?

 

InnerNodeImpl.java

Line 295-301: can we document this in the getLeaf() API?

 

 

NodeSchema.java

Line 80-90: maybe we can add a builder class to help reduce the number of 
constructors.

 

Line 102: do we assume case sensitivity for the network path and prefix here?

 

NodeSchemaLoader.java

Line 126/128/130/186/187: NIT: can we get these predefined tag names defined 
as static constants?

 

NodeSchemaManager.java

Line 98: can we add javadoc for completePath()?

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch
>
>
> Traditionally this has been implemented in Hadoop via a script or a 
> customizable Java class. One thing we want to add here is flexible 
> multi-level support instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14223:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

Committed to branch. [~tasanuma0829] thanks for reporting and contributing.

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: HDFS-13891
>
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{dfs.federation.router.file.resolver.client.class}} to 
> {{MultipleDestinationMountTableResolver}}. The current documents lack this 
> explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14172) Return a default SectionName to avoid NPE

2019-01-24 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751910#comment-16751910
 ] 

Weiwei Yang commented on HDFS-14172:


Cc [~xyao], it would be nice if you could help take a look at this one. Thanks.

> Return a default SectionName to avoid NPE
> -
>
> Key: HDFS-14172
> URL: https://issues.apache.org/jira/browse/HDFS-14172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HADOOP-14172.000.patch
>
>
> In FSImageFormatProtobuf.SectionName#fromString(), as follows:
> {code:java}
> public static SectionName fromString(String name) {
>   for (SectionName n : values) {
> if (n.name.equals(name))
>   return n;
>   }
>   return null;
> }
> {code}
> When the code meets an unknown section from the fsimage, the function will 
> return null. Callers always operate on the return value with a "switch" 
> clause, like FSImageFormatProtobuf.Loader#loadInternal(), as:
> {code:java}
> switch (SectionName.fromString(n))
> {code}
> An NPE will be thrown here.
> For self-protection, shall we add a default section name to the SectionName 
> enum, like "UNKNOWN", to steer clear of the NPE?
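A minimal sketch of the proposed self-protection, assuming an UNKNOWN member 
is added to the enum; the member list and field shapes below are illustrative 
stand-ins, not the actual FSImageFormatProtobuf.SectionName code:

{code:java}
// Hedged sketch of the proposal: return a default member instead of null
// so callers' switch statements never dereference a null enum value.
enum SectionName {
  NS_INFO("NS_INFO"),
  INODE("INODE"),
  UNKNOWN("UNKNOWN");   // proposed default for unrecognized sections

  private final String name;
  private static final SectionName[] values = SectionName.values();

  SectionName(String name) {
    this.name = name;
  }

  public static SectionName fromString(String name) {
    for (SectionName n : values) {
      if (n.name.equals(name)) {
        return n;
      }
    }
    return UNKNOWN;   // was: return null, which caused the NPE in switch
  }
}
{code}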



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-24 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751914#comment-16751914
 ] 

Surendra Singh Lilhore commented on HDFS-14225:
---

[~elgoiri] I am verifying the HDFS-13532 (Router security) patches. For 
webhdfs I faced a problem where {{WebHdfsHandler.java}} in the datanode is not 
able to resolve the nameservice ID. After adding this configuration it works 
fine.

This configuration is required when we write a UT for NN webhdfs with 
{{MiniRouterDFSCluster}}.
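As an illustration, a hedged sketch of the kind of per-nameservice setting 
involved. The key pattern and provider class follow the standard HDFS HA 
conventions, and "ns0" is taken from the exception in the description below; 
how MiniRouterDFSCluster actually wires this is what the patch decides:

{code:java}
// Hedged sketch: configuring the failover proxy provider for a federated
// nameservice so clients can resolve it. Key pattern and provider class
// are the standard HDFS HA ones; the patch may differ in the details.
import org.apache.hadoop.conf.Configuration;

public class FailoverProviderConfigExample {
  public static Configuration withFailoverProvider(Configuration conf) {
    conf.set("dfs.client.failover.proxy.provider.ns0",
        "org.apache.hadoop.hdfs.server.namenode.ha."
            + "ConfiguredFailoverProxyProvider");
    return conf;
  }
}
{code}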

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting UnknownHostException in UT.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14037) Fix SSLFactory truststore reloader thread leak in URLConnectionFactory

2019-01-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751911#comment-16751911
 ] 

Takanobu Asanuma commented on HDFS-14037:
-

Thanks for your review, [~brahmareddy].

{{NamenodeHeartbeatService}} seems not to be related to 
{{URLConnectionFactory}}. Am I misunderstanding something?

{{DFSck}} is a standalone command-line tool and {{Util}} uses 
{{URLConnectionFactory}} as a static field. I'm not sure whether we should call 
{{URLConnectionFactory#destroy}} for them. I will reconsider it.

bq. SSLConnectionConfigurator#configure needs to be called, as 
URLConnectionFactory#getSSLConnectionConfiguration will give an object without 
configuring it (some methods weren't called on the connection).

I didn't catch it. Could you elaborate on that?

> Fix SSLFactory truststore reloader thread leak in URLConnectionFactory
> --
>
> Key: HDFS-14037
> URL: https://issues.apache.org/jira/browse/HDFS-14037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, webhdfs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14037.1.patch, HDFS-14037.2.patch
>
>
> This is reported by [~yoshiata]. It is a similar issue as HADOOP-11368 and 
> YARN-5309 in URLConnectionFactory.
> {quote}An SSLFactory is created in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance, which in turn starts a trust 
> store reloader thread.
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> {quote}
> We observed many leaked threads when we used swebhdfs via NiFi cluster.
> {noformat}
> "Truststore reloader thread" Id=221 TIMED_WAITING  on null
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
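To make the leak pattern concrete, a hedged sketch of the lifecycle involved. 
The SSLFactory calls are the real org.apache.hadoop.security.ssl API; the 
try/finally shape is illustrative, and how URLConnectionFactory should own and 
destroy the factory is exactly what the patch works out:

{code:java}
// Hedged sketch of the leak and the fix direction. SSLFactory.init()
// starts the "Truststore reloader thread" via ReloadingX509TrustManager;
// unless destroy() is called, that thread keeps running after the
// factory's owner is done with it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class TruststoreReloaderLifecycle {
  public static void useAndRelease(Configuration conf) throws Exception {
    SSLFactory sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init();        // starts the trust store reloader thread
    try {
      // ... build SSL connections using the factory ...
    } finally {
      sslFactory.destroy();   // stops the reloader thread; the missing step
    }
  }
}
{code}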



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751913#comment-16751913
 ] 

Brahma Reddy Battula commented on HDFS-14223:
-

+1, will commit shortly.

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{dfs.federation.router.file.resolver.client.class}} to 
> {{MultipleDestinationMountTableResolver}}. The current documents lack this 
> explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14210) RBF: ModifyACL should work over all the destinations

2019-01-24 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751905#comment-16751905
 ] 

Shubham Dewan commented on HDFS-14210:
--

[~elgoiri], I checked; under the relative path 
+_org/apache/hadoop/hdfs/server/federation/router/_+, only 
*TestRouterAllResolver* is using MultipleDestinationMountTableResolver.

> RBF: ModifyACL should work over all the destinations
> 
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891.002.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) The command works for only one destination.
> The ACL should be set on both of the destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-01-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751901#comment-16751901
 ] 

Xiaoyu Yao commented on HDDS-699:
-

Thanks [~Sammi] for working on this. Patch v2 LGTM overall. Here are some 
comments. I will add more comments on unit test later. 

 

DatanodeDetails.java

Line 220: is there a reason to remove the compareTo @Override annotation?

 

Node.java

Line 45: NIT: typo "contamination" -> "concatenation"

 

Line 72: do we really want to allow setLevel() explicitly, or is it for 
testing only? This may mess up the actual level that is set up via 
setParent().

 

InnerNode.java

Line 89: does the leafIndex count the excludedNodes?

 

InnerNodeImpl.java

Line 295-301: can we document this in the getLeaf() API?

 

 

NodeSchema.java

Line 80-90: maybe we can add a builder class to help reduce the number of 
constructors.

 

Line 102: do we assume case sensitivity for the network path and prefix here?

 

NodeSchemaLoader.java

Line 126/128/130/186/187: NIT: can we get these predefined tag names defined 
as static constants?

 

NodeSchemaManager.java

Line 98: can we add javadoc for completePath()?
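On the builder suggestion for NodeSchema.java (Line 80-90), a hedged sketch of 
the pattern being proposed; the field names (type, prefix, cost) are 
hypothetical stand-ins, not the actual NodeSchema members:

{code:java}
// Hedged sketch of the suggested builder, replacing a pile of telescoping
// constructors with one chainable entry point. Field names are hypothetical.
public class NodeSchema {
  private final String type;
  private final String prefix;
  private final int cost;

  private NodeSchema(Builder b) {
    this.type = b.type;
    this.prefix = b.prefix;
    this.cost = b.cost;
  }

  public static class Builder {
    private String type;
    private String prefix = "";   // optional fields get defaults here
    private int cost = 1;

    public Builder setType(String type) { this.type = type; return this; }
    public Builder setPrefix(String prefix) { this.prefix = prefix; return this; }
    public Builder setCost(int cost) { this.cost = cost; return this; }
    public NodeSchema build() { return new NodeSchema(this); }
  }
}
{code}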

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch
>
>
> Traditionally this has been implemented in Hadoop via a script or a 
> customizable Java class. One thing we want to add here is flexible 
> multi-level support instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14227) RBF:HDFS "dfsadmin -printTopology" not displaying the rack details properly

2019-01-24 Thread venkata ramkumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ramkumar updated HDFS-14227:

Affects Version/s: 3.1.1

> RBF:HDFS "dfsadmin -printTopology" not displaying the rack details properly
> ---
>
> Key: HDFS-14227
> URL: https://issues.apache.org/jira/browse/HDFS-14227
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Minor
>  Labels: RBF
>
> namespaces: hacluster1, hacluster2
> under hacluster1: (IP1, IP2)
> under hacluster2: (IP3, IP4)
> command:
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}
> expected output:
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
> Rack: /hacluster2/default-rack
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14227) RBF:HDFS "dfsadmin -printTopology" not displaying the rack details properly

2019-01-24 Thread venkata ramkumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ramkumar updated HDFS-14227:

Labels: RBF  (was: )

> RBF:HDFS "dfsadmin -printTopology" not displaying the rack details properly
> ---
>
> Key: HDFS-14227
> URL: https://issues.apache.org/jira/browse/HDFS-14227
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Minor
>  Labels: RBF
>
> namespaces: hacluster1, hacluster2
> under hacluster1: (IP1, IP2)
> under hacluster2: (IP3, IP4)
> command:
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}
> expected output:
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
> Rack: /hacluster2/default-rack
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751876#comment-16751876
 ] 

Hadoop QA commented on HDDS-991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-ozone: The patch generated 14 new + 4 
unchanged - 8 fixed = 18 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 34s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
37s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956264/HDDS-991.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 154528faffa9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 3c60303 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2108/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2108/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2108/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2108/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1081 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2108/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>

[jira] [Commented] (HDDS-993) Update hadoop version to 3.2.0

2019-01-24 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751875#comment-16751875
 ] 

Dinesh Chitlangia commented on HDDS-993:


[~arpitagarwal] +1, it fails for me with the same error.

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch
>
>
> This Jira is to update the Hadoop version to 3.2.0 and to do cleanup related 
> to the snapshot repository in the ozone module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751870#comment-16751870
 ] 

Hadoop QA commented on HDDS-991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-ozone: The patch generated 14 new + 4 
unchanged - 8 fixed = 18 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 43s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 48s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.hdds.scm.container.TestContainerActionsHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956263/HDDS-991.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux c04fc23adeb3 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 3c60303 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1116 (vs. ulimit of 1) |
| modules | C: hadoop-ozone hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2107/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix failures 

[jira] [Comment Edited] (HDDS-993) Update hadoop version to 3.2.0

2019-01-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751861#comment-16751861
 ] 

Arpit Agarwal edited comment on HDDS-993 at 1/25/19 3:46 AM:
-

[~ajayydv], your error looks like some kind of JDK issue. Probably unrelated to 
the patch. The last time I saw this exact error was with a bad JDK11 install on 
my dev laptop. It went away after wiping and reinstalling the JDK.

The test fails for me with a different error:
{code}
[INFO] Running org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.641 
s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] testDelegationToken(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
Time elapsed: 4.822 s  <<< ERROR!
java.io.IOException: Renew Delegation Token failed, error : INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.renewDelegationToken(OzoneManagerProtocolClientSideTranslatorPB.java:1210)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.lambda$testDelegationToken$4(TestSecureOzoneCluster.java:419)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testDelegationToken(TestSecureOzoneCluster.java:417)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


was (Author: arpitagarwal):
[~ajayydv], this looks like some kind of JDK issue. Probably unrelated to the 
patch.

The test fails for me with a different error:
{code}
[INFO] Running org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.641 
s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] testDelegationToken(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
Time elapsed: 4.822 s  <<< ERROR!
java.io.IOException: Renew Delegation Token failed, error : INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.renewDelegationToken(OzoneManagerProtocolClientSideTranslatorPB.java:1210)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.lambda$testDelegationToken$4(TestSecureOzoneCluster.java:419)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testDelegationToken(TestSecureOzoneCluster.java:417)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: 

[jira] [Commented] (HDDS-993) Update hadoop version to 3.2.0

2019-01-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751861#comment-16751861
 ] 

Arpit Agarwal commented on HDDS-993:


[~ajayydv], this looks like some kind of JDK issue. Probably unrelated to the 
patch.

The test fails for me with a different error:
{code}
[INFO] Running org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.641 
s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] testDelegationToken(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
Time elapsed: 4.822 s  <<< ERROR!
java.io.IOException: Renew Delegation Token failed, error : INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.renewDelegationToken(OzoneManagerProtocolClientSideTranslatorPB.java:1210)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.lambda$testDelegationToken$4(TestSecureOzoneCluster.java:419)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testDelegationToken(TestSecureOzoneCluster.java:417)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch
>
>
> This Jira is to update the Hadoop version to 3.2.0 and to do cleanup related 
> to the snapshot repository in the ozone module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1010) ContainerSet#getContainerMap should be renamed

2019-01-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1010:
---

Assignee: Supratim Deka

> ContainerSet#getContainerMap should be renamed
> --
>
> Key: HDDS-1010
> URL: https://issues.apache.org/jira/browse/HDDS-1010
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> ContainerSet#getContainerMap should be renamed to something like 
> getContainerMapCopy to make it explicit that it creates a copy of the entire 
> container map! It should also be tagged with {{@VisibleForTesting}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1010) ContainerSet#getContainerMap should be renamed

2019-01-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1010:
---

 Summary: ContainerSet#getContainerMap should be renamed
 Key: HDDS-1010
 URL: https://issues.apache.org/jira/browse/HDDS-1010
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


ContainerSet#getContainerMap should be renamed to something like 
getContainerMapCopy to make it explicit that it creates a copy of the entire 
container map! It should also be tagged with {{@VisibleForTesting}}.
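A hedged sketch of the requested change; the containerMap field and its 
key/value types are assumptions, while the copy-on-read behavior comes from 
the description above:

{code:java}
// Hedged sketch of the rename plus annotation. The field name and map
// types are assumptions; the point is the explicit "Copy" suffix and the
// @VisibleForTesting tag on a copy-on-read accessor.
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class ContainerSet {
  private final ConcurrentSkipListMap<Long, Object> containerMap =
      new ConcurrentSkipListMap<>();

  @VisibleForTesting
  public Map<Long, Object> getContainerMapCopy() {
    return ImmutableMap.copyOf(containerMap);  // copies the entire map
  }
}
{code}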




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751837#comment-16751837
 ] 

Hadoop QA commented on HDFS-14223:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
40m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956247/HDFS-14223-HDFS-13891.2.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 8bf6cd2eaf23 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26051/testReport/ |
| Max. process+thread count | 975 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26051/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu 

[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-991:

Attachment: HDDS-991.02.patch

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-991:

Attachment: (was: HDDS-991.02.patch)

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch
>
>
> Fix failures in TestSecureOzoneCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751834#comment-16751834
 ] 

Hadoop QA commented on HDFS-14215:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
30s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14215 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956244/HDFS-14215-HDFS-13891-07.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux da96b1e71e5c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26050/testReport/ |
| Max. process+thread count | 1463 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26050/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> 

[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751831#comment-16751831
 ] 

Ajay Kumar commented on HDDS-991:
-

[~xyao] thanks for the review. Addressed them in patch v2.

bq. TOKEN_ERROR_OTHER? In which case will we use this? What's the difference 
between this and UNKNOWN?

TOKEN_ERROR_OTHER is used when we get some unexpected IOException. UNKNOWN is 
pretty generic (more like a placeholder) and is intended to be used when we 
can't conclude anything about the error.

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-991:

Attachment: HDDS-991.02.patch

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch, HDDS-991.02.patch
>
>
> Fix failures in TestSecureOzoneCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751819#comment-16751819
 ] 

Bharat Viswanadham commented on HDDS-1009:
--

+1.
Thank You [~dineshchitlangia] for taking care of this.
Will commit this shortly.

> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751816#comment-16751816
 ] 

Dinesh Chitlangia commented on HDDS-1009:
-

Failure unrelated to the patch.

> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751818#comment-16751818
 ] 

Dinesh Chitlangia commented on HDDS-1009:
-

cc: [~bharatviswa]

> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14224) RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751813#comment-16751813
 ] 

Hadoop QA commented on HDFS-14224:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
35s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956229/HDFS-14224-HDFS-13891-05.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 688a2d283d2b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26049/testReport/ |
| Max. process+thread count | 974 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26049/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple 
> destinations
> 

[jira] [Commented] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751798#comment-16751798
 ] 

Hadoop QA commented on HDDS-1009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m  6s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
19s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestSecureOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1009 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956235/HDDS-1009.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 79816021ab36 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/ozone.sh |
| git revision | trunk / a33ef4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2106/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2106/testReport/ |
| Max. process+thread count | 1101 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2106/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> 

[jira] [Updated] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-24 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14215:

Attachment: HDFS-14215-HDFS-13891-07.patch

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch, HDFS-14215-HDFS-13891-05.patch, 
> HDFS-14215-HDFS-13891-05.patch, HDFS-14215-HDFS-13891-06.patch, 
> HDFS-14215-HDFS-13891-07.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the Default NS and are 
> thus dependent on the availability of the Default NS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14223:

Description: When using multiple sub-clusters for a mount point, we need to 
set {{dfs.federation.router.file.resolver.client.class}} to 
{{MultipleDestinationMountTableResolver}}. The current documents lack this 
explanation. We should add it to HDFSRouterFederation.md and 
hdfs-rbf-default.xml.  (was: When using multiple sub-clusters for a mount 
point, we need to set {{MultipleDestinationMountTableResolver}} to 
{{dfs.federation.router.file.resolver.client.class}}. The current documents 
lack this explanation. We should add it to HDFSRouterFederation.md and 
hdfs-rbf-default.xml.)
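
For illustration, a minimal sketch of the setting described above. The property 
name and resolver class are taken from the description; the fully qualified 
package name and the programmatic {{Configuration.set}} call are assumptions 
for the sketch (a deployment would normally set this key in hdfs-rbf-site.xml):

{code}
import org.apache.hadoop.conf.Configuration;

// Minimal sketch; the package below is assumed, and real deployments would
// put this key in hdfs-rbf-site.xml rather than set it in code.
Configuration conf = new Configuration();
conf.set("dfs.federation.router.file.resolver.client.class",
    "org.apache.hadoop.hdfs.server.federation.resolver."
        + "MultipleDestinationMountTableResolver");
{code}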

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{dfs.federation.router.file.resolver.client.class}} to 
> {{MultipleDestinationMountTableResolver}}. The current documents lack this 
> explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751790#comment-16751790
 ] 

Takanobu Asanuma commented on HDFS-14223:
-

Thanks for the review, [~brahmareddy]. Uploaded 2nd patch.

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{MultipleDestinationMountTableResolver}} to 
> {{dfs.federation.router.file.resolver.client.class}}. The current documents 
> lack this explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-24 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14223:

Attachment: HDFS-14223-HDFS-13891.2.patch

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{MultipleDestinationMountTableResolver}} to 
> {{dfs.federation.router.file.resolver.client.class}}. The current documents 
> lack this explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-24 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751789#comment-16751789
 ] 

Ayush Saxena commented on HDFS-14215:
-

Uploaded v7 with said changes.

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch, HDFS-14215-HDFS-13891-05.patch, 
> HDFS-14215-HDFS-13891-05.patch, HDFS-14215-HDFS-13891-06.patch, 
> HDFS-14215-HDFS-13891-07.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the Default NS and are 
> thus dependent on the availability of the Default NS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-993) Update hadoop version to 3.2.0

2019-01-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751765#comment-16751765
 ] 

Ajay Kumar commented on HDDS-993:
-

[~elek] [~sdeka] After this change I see the below error while running 
{{TestSecureOzoneCluster}} from IntelliJ. It goes away when I switch the 
dependency back to 3.2.1-SNAPSHOT.
{code}
2019-01-24 11:53:01,669 ERROR ipc.Server (Server.java:doRunLoop(1123)) - Bug in 
read selector!
java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
{code}
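
For context: this error typically means the classes were compiled with a JDK 9+ 
compiler, where {{ByteBuffer.flip()}} gained a covariant {{ByteBuffer}} return 
type, and are then run on Java 8, whose {{flip()}} still returns 
{{java.nio.Buffer}}. A minimal sketch of the usual source-level workaround 
(illustrative only, not a claim about where the call sits in Hadoop):

{code}
import java.nio.Buffer;
import java.nio.ByteBuffer;

ByteBuffer buf = ByteBuffer.allocate(64);
buf.put((byte) 1);
// Casting to Buffer makes javac emit Buffer.flip()Ljava/nio/Buffer;,
// which exists on both Java 8 and Java 9+.
((Buffer) buf).flip();
{code}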

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch
>
>
> This Jira is to update Hadoop version to 3.2.0 and cleanup related to 
> snapshot repository in ozone module



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751768#comment-16751768
 ] 

Hadoop QA commented on HDDS-936:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m  
6s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
25s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 58s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
3s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-936 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956231/HDDS-936.05.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 6e062ee7386b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / a33ef4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 197 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2105/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun 

[jira] [Comment Edited] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751753#comment-16751753
 ] 

Dinesh Chitlangia edited comment on HDDS-1007 at 1/25/19 12:55 AM:
---

Failure & license violation are unrelated to the patch.
Filed HDDS-1009 to fix the license violation issue.


was (Author: dineshchitlangia):
Failure & license violation are unrelated to the patch.


> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add Robot test for AuditParser tool.
> The robot test must run freon in order to generate audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing for OM is sufficient since the logs are generated using a common 
> mechanism.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-1009:
---

 Summary: TestAbortMultipartUpload is missing the apache license 
text
 Key: HDDS-1009
 URL: https://issues.apache.org/jira/browse/HDDS-1009
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3, test
Affects Versions: 0.4.0
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia
 Attachments: HDDS-1009.00.patch

This was flagged by [Jenkins 
run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
 in HDDS-1007




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-1009:

Attachment: HDDS-1009.00.patch
Status: Patch Available  (was: Open)

> TestAbortMultipartUpload is missing the apache license text
> ---
>
> Key: HDDS-1009
> URL: https://issues.apache.org/jira/browse/HDDS-1009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3, test
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1009.00.patch
>
>
> This was flagged by [Jenkins 
> run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
>  in HDDS-1007



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751753#comment-16751753
 ] 

Dinesh Chitlangia commented on HDDS-1007:
-

Failure & license violation are unrelated to the patch.


> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add Robot test for AuditParser tool.
> The robot test must run freon in order to generate audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing for OM is sufficient since the logs are generated using a common 
> mechanism.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1008) Invalidate closed container replicas on a failed volume

2019-01-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1008:
---

 Summary: Invalidate closed container replicas on a failed volume
 Key: HDDS-1008
 URL: https://issues.apache.org/jira/browse/HDDS-1008
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


When a volume is detected as failed, all closed containers on the volume should 
be marked as invalid.

Open containers will be handled separately.
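
As a rough, self-contained illustration of the proposal (all names here are 
hypothetical; this is not existing datanode code):

{code}
import java.util.List;

// Hypothetical sketch only; names do not refer to actual datanode classes.
enum ReplicaState { OPEN, CLOSED, INVALID }

class ReplicaSketch {
  String volume;
  ReplicaState state;
}

class VolumeFailureSketch {
  static void onVolumeFailure(String failedVolume, List<ReplicaSketch> replicas) {
    for (ReplicaSketch replica : replicas) {
      if (replica.volume.equals(failedVolume)
          && replica.state == ReplicaState.CLOSED) {
        replica.state = ReplicaState.INVALID; // open replicas handled separately
      }
    }
  }
}
{code}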



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: HDDS-936.05.patch

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14224) RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations

2019-01-24 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751748#comment-16751748
 ] 

Ayush Saxena commented on HDFS-14224:
-

[~elgoiri] I have put it in TestRouterRpc alone and used a common path / that 
is available in both tests. When the test runs as part of TestRouterRpc it 
verifies the single-destination scenario, and it covers the multi-destination 
part when it is extended in TestRouterRpcMultiDestination.

I guess that covers both scenarios and should be fair enough for us.
Please review :)
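
For readers following along, a schematic sketch of the inheritance pattern 
described above (class and method names are illustrative, not the actual patch):

{code}
import org.apache.hadoop.fs.ContentSummary;
import org.junit.Assert;
import org.junit.Test;

// Base suite: the test uses "/", which exists in both configurations.
public class TestRouterRpcSketch {
  protected ContentSummary getContentSummary(String path) throws Exception {
    return new ContentSummary.Builder().build(); // stand-in for the router RPC
  }

  @Test
  public void testGetContentSummaryEc() throws Exception {
    Assert.assertNotNull(getContentSummary("/"));
  }
}

// Extending the base suite re-runs the inherited test; in the real code this
// runs it again against a multi-destination mount table.
class TestRouterRpcMultiDestinationSketch extends TestRouterRpcSketch {
}
{code}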

> RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple 
> destinations
> --
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch, 
> HDFS-14224-HDFS-13891-04.patch, HDFS-14224-HDFS-13891-05.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14224) RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations

2019-01-24 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14224:

Attachment: HDFS-14224-HDFS-13891-05.patch

> RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple 
> destinations
> --
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch, 
> HDFS-14224-HDFS-13891-04.patch, HDFS-14224-HDFS-13891-05.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: (was: HDDS-936.05.patch)

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: HDDS-936.05.patch

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751724#comment-16751724
 ] 

Xiaoyu Yao commented on HDDS-991:
-

Thanks [~ajayydv] for the patch. It looks great to me. I just have two minor 
comments:

OMException.java

Line 129: NIT: suggested renaming to be consistent:
TOKEN_ERROR_INVALID_AUTH_METHOD -> INVALID_AUTH_METHOD
TOKEN_ERROR_INVALID_TOKEN -> INVALID_TOKEN
TOKEN_ERROR_EXPIRED -> TOKEN_EXPIRED

TOKEN_ERROR_OTHER? In which case will we use this? What's the difference 
between this and UNKNOWN?

TestSecureOzoneCluster.java

Line 417/449/515: can we validate the specific OMException codes newly added 
here?
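
For instance, a hypothetical sketch of what such a validation could look like 
(assuming OMException exposes the code via a {{getResult()}}-style accessor; 
the operation and variable names are illustrative, not the actual test code):

{code}
// Hypothetical sketch; not the actual TestSecureOzoneCluster code.
try {
  omClient.renewDelegationToken(expiredToken); // illustrative operation
  Assert.fail("Expected OMException for an expired token");
} catch (OMException ex) {
  Assert.assertEquals(OMException.ResultCodes.TOKEN_EXPIRED, ex.getResult());
}
{code}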

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch
>
>
> Fix failures in TestSecureOzoneCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751716#comment-16751716
 ] 

Hadoop QA commented on HDDS-936:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m  
6s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
23s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 54s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
21s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-936 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956218/HDDS-936.04.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 099d23eedfea 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 198 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2104/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun 

[jira] [Commented] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751692#comment-16751692
 ] 

Hadoop QA commented on HDDS-1007:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 40s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
11s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.TestSecureOzoneCluster |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956211/HDDS-1007.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  |
| uname | Linux 8bf94aa7aae4 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2103/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2103/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2103/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1079 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2103/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add Robot test for AuditParser tool.
> The robot test must run freon in order to generate audit log and then test 
> the auditparser commands.
> We have separate audit logs for OM, SCM, DN. However, for the robot test, 
> just testing for OM is sufficient since the logs are generated using a common 
> mechanism.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org

[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: HDDS-936.04.patch

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751660#comment-16751660
 ] 

Hadoop QA commented on HDDS-936:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
20s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
29s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
21s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-936 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956209/HDDS-936.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 0f93873b171c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 187 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2102/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: 

[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751658#comment-16751658
 ] 

Hadoop QA commented on HDFS-14202:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 12 unchanged - 0 fixed = 16 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14202 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956194/HDFS-14202.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d4a2ab35b55c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26047/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26047/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26047/testReport/ |
| Max. process+thread count | 2984 (vs. ulimit of 1) |
| modules | 

[jira] [Commented] (HDFS-14188) Make hdfs ec -verifyClusterSetup command accept an erasure coding policy as a parameter

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751657#comment-16751657
 ] 

Hadoop QA commented on HDFS-14188:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14188 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956195/HDFS-14188.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0337416d6a95 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26048/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26048/testReport/ |
| Max. process+thread count | 3285 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26048/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-1007:

Attachment: HDDS-1007.00.patch
Status: Patch Available  (was: Open)

This must be committed after HDDS-1006.
cc: [~anu]

> Add robot test for AuditParser
> --
>
> Key: HDDS-1007
> URL: https://issues.apache.org/jira/browse/HDDS-1007
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test, Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-1007.00.patch
>
>
> This jira aims to add a Robot test for the AuditParser tool.
> The robot test must run freon in order to generate an audit log and then 
> test the auditparser commands.
> We have separate audit logs for OM, SCM, and DN. However, for the robot 
> test, testing OM alone is sufficient since the logs are generated through a 
> common mechanism.
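
As a rough, hypothetical illustration of the kind of assertion such a test 
automates (scanning an OM audit log and tallying entries per operation), here 
is a minimal Java sketch; the pipe-separated "op=" field layout is an 
assumption, not necessarily the real Ozone audit format:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: tally operations in an OM audit log so a test can
// assert that freon actually produced audit entries. Field layout is assumed.
public class AuditLogTally {
  public static void main(String[] args) throws IOException {
    Map<String, Integer> opCounts = new HashMap<>();
    for (String line : Files.readAllLines(Paths.get(args[0]))) {
      // Assumed format: "... | op=ALLOCATE_KEY | ..."; adjust to the real layout.
      for (String field : line.split("\\|")) {
        String f = field.trim();
        if (f.startsWith("op=")) {
          opCounts.merge(f.substring(3), 1, Integer::sum);
        }
      }
    }
    opCounts.forEach((op, n) -> System.out.println(op + " -> " + n));
  }
}
{code}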



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751638#comment-16751638
 ] 

sarun singla edited comment on HDDS-936 at 1/24/19 10:31 PM:
-

[~swagle]  [~elek] The main intention of the Jira, as discussed with [~jnp], 
was to make a standalone tool that parses the DB and gives a detailed 
breakdown of the container-to-block-and-object mapping.

We can discuss the improvements and upgrades on HDDS-1005.

[~elek] I have moved the code under 'hadoop-ozone/tools'


was (Author: saruntek):
[~swagle] [~jnp] The main intention of the Jira was to make a standalone tool 
which parses the DB and gives a detailed analysis of the container to block and 
object mapping. We can discuss the improvements and upgrades on HDDS-1005

[~elek] I have moved the code under 'hadoop-ozone/tools'

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch
>
>
> Ozone should have a tool to get a list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-752) Functionality to handle key rotation in SCM

2019-01-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-752:
---
Target Version/s:   (was: 0.4.0)

> Functionality to handle key rotation in SCM
> ---
>
> Key: HDDS-752
> URL: https://issues.apache.org/jira/browse/HDDS-752
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Functionality to handle key rotation in SCM, OM and DN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751633#comment-16751633
 ] 

Hadoop QA commented on HDDS-989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
38s{color} | {color:red} root generated 2 new + 18 unchanged - 0 fixed = 20 
total (was 18) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 32s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
11s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.TestSecureOzoneCluster |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956199/HDDS-989.06.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux ba245ce27a28 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4e0aa2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2101/artifact/out/diff-javadoc-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2101/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2101/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2101/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1100 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2101/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch, 
> HDDS-989.04.patch, HDDS-989.05.patch, HDDS-989.06.patch
>
>
> HDDS volumes should be checked for errors periodically.

[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: HDDS-936.03.patch

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch
>
>
> Ozone should have a tool to get a list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-01-24 Thread sarun singla (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751638#comment-16751638
 ] 

sarun singla commented on HDDS-936:
---

[~swagle] [~jnp] The main intention of the Jira was to make a standalone tool 
which parses the DB and gives a detailed analysis of the container to block and 
object mapping. We can discuss the improvements and upgrades on HDDS-1005

[~elek] I have moved the code under 'hadoop-ozone/tools'
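
To make the idea concrete, here is a minimal, hypothetical Java sketch of 
such a standalone pass over an OM RocksDB instance; the real tool under 
'hadoop-ozone/tools' decodes the actual OM key-table protobufs, which this 
sketch only gestures at in comments:

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

// Hypothetical sketch: walk an OM RocksDB read-only and print raw key entries.
// A real mapping tool would decode each value (an OmKeyInfo protobuf)
// and group its block IDs by container ID.
public class ContainerMappingSketch {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    try (Options opts = new Options();
         RocksDB db = RocksDB.openReadOnly(opts, args[0]);
         RocksIterator it = db.newIterator()) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        // Decode it.value() into key info here, then emit
        // containerId -> (blockId, volume/bucket/key) tuples.
        System.out.println(new String(it.key()));
      }
    }
  }
}
{code}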

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch
>
>
> Ozone should have a tool to get a list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751501#comment-16751501
 ] 

Arpit Agarwal commented on HDDS-989:


v06:
- Fix one more Javadoc issue. The remaining issues are in a Guava class and 
will be ignored.

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch, 
> HDDS-989.04.patch, HDDS-989.05.patch, HDDS-989.06.patch
>
>
> HDDS volumes should be checked for errors periodically.
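
As a rough illustration of what such a periodic check involves, here is a 
generic Java sketch (not the actual datanode checker; the write/read/delete 
probe and the 10-minute interval are assumptions):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Generic sketch of a periodic volume health probe: every interval, write a
// small marker file to each volume, read it back, and delete it. Failures
// flag the volume as bad. The real datanode checker is more elaborate.
public class VolumeCheckSketch {
  public static void main(String[] args) {
    ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleWithFixedDelay(() -> Arrays.stream(args).forEach(v -> {
      Path probe = Paths.get(v, ".volume-check");
      try {
        Files.write(probe, "ok".getBytes(StandardCharsets.UTF_8));
        String back = new String(Files.readAllBytes(probe),
            StandardCharsets.UTF_8);
        Files.deleteIfExists(probe);
        System.out.println(v + (back.equals("ok") ? " healthy" : " CORRUPT"));
      } catch (IOException e) {
        System.err.println(v + " FAILED: " + e);
      }
    }), 0, 10, TimeUnit.MINUTES);
  }
}
{code}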



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-989) Check Hdds Volumes for errors

2019-01-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-989:
---
Attachment: HDDS-989.06.patch

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch, 
> HDDS-989.04.patch, HDDS-989.05.patch, HDDS-989.06.patch
>
>
> HDDS volumes should be checked for errors periodically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751453#comment-16751453
 ] 

Hadoop QA commented on HDFS-14215:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
31s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14215 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956180/HDFS-14215-HDFS-13891-06.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f80601ec125 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26046/testReport/ |
| Max. process+thread count | 1008 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26046/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> 

[jira] [Commented] (HDDS-948) MultipartUpload: S3 API for Abort Multipart Upload

2019-01-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751475#comment-16751475
 ] 

Hudson commented on HDDS-948:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15823 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15823/])
HDDS-948. MultipartUpload: S3 API for Abort Multipart Upload. (elek: rev 
4e0aa2ceac893b2f7f9b8d480cb83c840bf22b95)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestAbortMultipartUpload.java
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectDelete.java
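
For readers unfamiliar with the S3 semantics being implemented, a hedged 
JAX-RS sketch of an abort handler follows; it is illustrative only, not the 
committed ObjectEndpoint code, and the service-call names are assumptions:

{code:java}
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

// Illustrative JAX-RS handler for S3 Abort Multipart Upload:
// DELETE /{bucket}/{key}?uploadId=... must return 204 on success.
@Path("/{bucket}/{path:.+}")
public class AbortUploadSketch {

  @DELETE
  public Response delete(@PathParam("bucket") String bucket,
                         @PathParam("path") String key,
                         @QueryParam("uploadId") String uploadId) {
    if (uploadId != null && !uploadId.isEmpty()) {
      // Hypothetical service call; the real endpoint goes through OzoneBucket.
      abortMultipartUpload(bucket, key, uploadId);
      return Response.status(Response.Status.NO_CONTENT).build();
    }
    // No uploadId: fall through to a plain object delete.
    deleteKey(bucket, key);
    return Response.status(Response.Status.NO_CONTENT).build();
  }

  private void abortMultipartUpload(String bucket, String key,
      String uploadId) {
    // placeholder for the Ozone Manager call
  }

  private void deleteKey(String bucket, String key) {
    // placeholder for the Ozone Manager call
  }
}
{code}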


> MultipartUpload: S3 API for Abort Multipart Upload
> --
>
> Key: HDDS-948
> URL: https://issues.apache.org/jira/browse/HDDS-948
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-948.00.patch
>
>
> Implement the S3 API for multipart upload.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-24 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16751466#comment-16751466
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~elgoiri], thanks for reviewing the patch.

In the current patch, I have modified the code and added a test case for the 
computeDelay() method.
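
For context, the throttle behind this property amounts to computing how long 
a copy thread should sleep so that observed throughput stays at or below the 
configured MB/s. A hypothetical sketch of such a delay computation follows 
(the real DiskBalancer method's signature may differ):

{code:java}
import java.util.concurrent.TimeUnit;

// Sketch of a throughput throttle: given bytes copied so far and elapsed
// time, return the extra milliseconds to sleep so the effective rate does
// not exceed maxDiskThroughputMBps. Signature and names are assumed.
public final class ThrottleSketch {
  private static final long MB = 1024L * 1024L;

  static long computeDelayMs(long bytesCopied, long elapsedMs, long maxMBps) {
    if (maxMBps <= 0 || bytesCopied <= 0) {
      return 0; // throttling disabled or nothing copied yet
    }
    // Minimum time this many bytes *should* have taken at the cap.
    long expectedMs = TimeUnit.SECONDS.toMillis(bytesCopied / (maxMBps * MB));
    return Math.max(0, expectedMs - elapsedMs);
  }

  public static void main(String[] args) {
    // 200 MB copied in 2s with a 50 MB/s cap: should have taken 4s, so sleep 2s.
    System.out.println(computeDelayMs(200 * MB, 2000, 50));
  }
}
{code}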

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14188) Make hdfs ec -verifyClusterSetup command accept an erasure coding policy as a parameter

2019-01-24 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-14188:

Attachment: HDFS-14188.002.patch

> Make hdfs ec -verifyClusterSetup command accept an erasure coding policy as a 
> parameter
> ---
>
> Key: HDFS-14188
> URL: https://issues.apache.org/jira/browse/HDFS-14188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-14188.001.patch, HDFS-14188.002.patch
>
>
> The hdfs ec -verifyClusterSetup command verifies that there are enough data 
> nodes and racks for the enabled erasure coding policies.
> I think it would be beneficial if it could optionally accept an erasure 
> coding policy as a parameter. For example, the following command would run 
> the verification for only the RS-6-3-1024k policy.
> {code:java}
> hdfs ec -verifyClusterSetup -policy RS-6-3-1024k
> {code}
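
As a rough sketch of the optional flag handling the description proposes 
(plain Java; names are illustrative, not the actual ECAdmin code):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of optional "-policy <name>" handling for verifyClusterSetup.
// With no flag, all enabled policies are verified; with the flag, only
// the named policy is. Names below are illustrative.
public class VerifyClusterSetupArgs {
  public static void main(String[] rawArgs) {
    List<String> args = new ArrayList<>(Arrays.asList(rawArgs));
    String policy = null;
    int idx = args.indexOf("-policy");
    if (idx >= 0 && idx + 1 < args.size()) {
      policy = args.get(idx + 1);          // e.g. RS-6-3-1024k
      args.subList(idx, idx + 2).clear();  // consume the flag and its value
    }
    System.out.println(policy == null
        ? "verifying all enabled erasure coding policies"
        : "verifying only policy " + policy);
  }
}
{code}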



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-24 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14202:
-
Attachment: HDFS-14202.002.patch

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


