[jira] [Commented] (HDDS-603) Add BlockCommitSequenceId field per Container and expose it in Container Reports

2018-10-10 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645990#comment-16645990
 ] 

Jitendra Nath Pandey commented on HDDS-603:
---

I think we should update the helper class {{ContainerReport}} as well to 
include BCS.
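
For illustration, a minimal sketch of what that could look like (the field and accessor names here are assumptions, not the actual patch):

{code:java}
// Illustrative sketch only: expose the block commit sequence id (BCS) from the
// ContainerReport helper class. Field and method names are assumptions.
public class ContainerReport {
  // ... existing fields such as containerID, bytesUsed, keyCount ...
  private long blockCommitSequenceId;

  public long getBlockCommitSequenceId() {
    return blockCommitSequenceId;
  }

  public void setBlockCommitSequenceId(long blockCommitSequenceId) {
    this.blockCommitSequenceId = blockCommitSequenceId;
  }
}
{code}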

> Add BlockCommitSequenceId field per Container and expose it in Container 
> Reports
> 
>
> Key: HDDS-603
> URL: https://issues.apache.org/jira/browse/HDDS-603
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-603.000.patch
>
>
> HDDS-450 adds a blockCommitSequenceId field per block commit in the container DB. 
> The blockCommitSequenceId now needs to be updated per container replica, and 
> the same needs to be reported to SCM via container reports. This Jira aims to 
> address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645799#comment-16645799
 ] 

Bharat Viswanadham edited comment on HDDS-516 at 10/11/18 5:23 AM:
---

Dependent on HDDS-522. This patch needs to be applied on top of HDDS-522.
 # Done
 # Done
 # When I use it as you suggested, it does not work. I am not sure whether we 
need an explicit conversion, since my return type is Response.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated GetObject. After 
this change, when I use the cp command 

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
[s3://bucket1/|s3://bucket1/dir1/dir2/file]testfile 

upload: ../../tmp/testfile to [s3://bucket1/testfile]

This works fine, but when I give the path as below, it fails. I am not sure of 
the problem here.

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
s3://bucket1/dir1/dir2/file

It fails with the error below:
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to [s3://bucket1/dir1/dir2/file] An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|


was (Author: bharatviswa):
Dependent on HDDS-522.
 # Done
 # Done
 # When I use it as you suggested, it does not work. I am not sure whether we 
need an explicit conversion, since my return type is Response.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated GetObject. After 
this change, when I use the cp command 

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
[s3://bucket1/|s3://bucket1/dir1/dir2/file]testfile 

upload: ../../tmp/testfile to [s3://bucket1/testfile]

This works fine, but when I give the path as below, it fails. I am not sure of 
the problem here.

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
s3://bucket1/dir1/dir2/file

It fails with the error below:
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to [s3://bucket1/dir1/dir2/file] An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).
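
For illustration, a minimal JAX-RS sketch of the header detection described above (the copyObject/putObject helpers are hypothetical names, not the attached patch):

{code:java}
// Illustrative sketch only: branch PutObject on the presence of x-amz-copy-source.
import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}/{path:.+}")
public class ObjectEndpointSketch {

  @PUT
  public Response put(@PathParam("bucket") String bucket,
                      @PathParam("path") String key,
                      @HeaderParam("x-amz-copy-source") String copySource,
                      InputStream body) throws IOException {
    if (copySource != null && !copySource.isEmpty()) {
      // Copy path: the source is given as "/sourceBucket/sourceKey".
      return copyObject(copySource, bucket, key);
    }
    // Regular PutObject path.
    return putObject(bucket, key, body);
  }

  private Response copyObject(String source, String bucket, String key) {
    return Response.ok().build(); // hypothetical helper
  }

  private Response putObject(String bucket, String key, InputStream body) {
    return Response.ok().build(); // hypothetical helper
  }
}
{code}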






[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

Attachment: (was: HDDS-516.03.patch)

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).






[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

Attachment: HDDS-516.03.patch

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).






[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-10 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645982#comment-16645982
 ] 

Ajay Kumar commented on HDDS-580:
-

[~xyao] thanks for the review. Addressed all comments in patch v1. The failure in 
TestOzoneConfigurationFields is not related to this patch, but I fixed it.

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM. This will be called by the 
> scm/om admin CLI with the "-init" cmd.
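
As a generic illustration of that init-time step, using only standard JDK APIs (the actual HDDS-100 key generator interface may differ):

{code:java}
// Generic sketch only: generate an RSA public/private key pair, roughly the kind
// of step "scm --init" / "om --init" would perform via the HDDS-100 key generator.
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public final class InitKeyPairSketch {
  public static KeyPair generateKeyPair() throws NoSuchAlgorithmException {
    KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
    generator.initialize(2048);
    return generator.generateKeyPair();
  }
}
{code}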






[jira] [Updated] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-10 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-580:

Attachment: HDDS-580-HDDS-4.01.patch

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM. This will be called by the 
> scm/om admin CLI with the "-init" cmd.






[jira] [Assigned] (HDFS-13982) convertStorageType() in PBHelperClient is not easy to extend when adding new storage types

2018-10-10 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-13982:
--

Assignee: (was: Xiang Li)

> convertStorageType() in PBHelperClient is not easy to extend when adding new 
> storage types
> --
>
> Key: HDFS-13982
> URL: https://issues.apache.org/jira/browse/HDFS-13982
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Xiang Li
>Priority: Minor
>
> In PBHelperClient, there are 2 functions to convert between StorageTypeProto 
> and StorageType, like:
> {code:java}
> public static StorageTypeProto convertStorageType(StorageType type) {
>   switch(type) {
>   case DISK:
> return StorageTypeProto.DISK;
>   case SSD:
> return StorageTypeProto.SSD;
>   case ARCHIVE:
> return StorageTypeProto.ARCHIVE;
>   case RAM_DISK:
> return StorageTypeProto.RAM_DISK;
>   case PROVIDED:
> return StorageTypeProto.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageType not found, type=" + type);
>   }
> }
> public static StorageType convertStorageType(StorageTypeProto type) {
>   switch(type) {
>   case DISK:
> return StorageType.DISK;
>   case SSD:
> return StorageType.SSD;
>   case ARCHIVE:
> return StorageType.ARCHIVE;
>   case RAM_DISK:
> return StorageType.RAM_DISK;
>   case PROVIDED:
> return StorageType.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageTypeProto not found, type=" + type);
>   }
> }
> {code}
> Whenever a new storage type is added, a new "case" clause has to be added 
> here. That is not convenient, and it is easy to forget to change this file, 
> because newcomers usually focus on the change in StorageType.java (where new 
> storage types are added).
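
One possible direction, sketched under the assumption that the two enums keep their constant names in sync (this is only an illustration, not a change proposed in this JIRA):

{code:java}
// Illustrative sketch only: convert by enum constant name so that new storage
// types added to both enums need no extra "case" clause here.
public static StorageTypeProto convertStorageType(StorageType type) {
  try {
    return StorageTypeProto.valueOf(type.name());
  } catch (IllegalArgumentException e) {
    throw new IllegalStateException("BUG: StorageType not found, type=" + type, e);
  }
}

public static StorageType convertStorageType(StorageTypeProto type) {
  try {
    return StorageType.valueOf(type.name());
  } catch (IllegalArgumentException e) {
    throw new IllegalStateException("BUG: StorageTypeProto not found, type=" + type, e);
  }
}
{code}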






[jira] [Assigned] (HDFS-13982) convertStorageType() in PBHelperClient is not easy to extend when adding new storage types

2018-10-10 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-13982:
--

Assignee: Xiang Li

> convertStorageType() in PBHelperClient is not easy to extend when adding new 
> storage types
> --
>
> Key: HDFS-13982
> URL: https://issues.apache.org/jira/browse/HDFS-13982
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In PBHelperClient, there are 2 functions to convert between StorageTypeProto 
> and StorageType, like:
> {code:java}
> public static StorageTypeProto convertStorageType(StorageType type) {
>   switch(type) {
>   case DISK:
> return StorageTypeProto.DISK;
>   case SSD:
> return StorageTypeProto.SSD;
>   case ARCHIVE:
> return StorageTypeProto.ARCHIVE;
>   case RAM_DISK:
> return StorageTypeProto.RAM_DISK;
>   case PROVIDED:
> return StorageTypeProto.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageType not found, type=" + type);
>   }
> }
> public static StorageType convertStorageType(StorageTypeProto type) {
>   switch(type) {
>   case DISK:
> return StorageType.DISK;
>   case SSD:
> return StorageType.SSD;
>   case ARCHIVE:
> return StorageType.ARCHIVE;
>   case RAM_DISK:
> return StorageType.RAM_DISK;
>   case PROVIDED:
> return StorageType.PROVIDED;
>   default:
> throw new IllegalStateException(
> "BUG: StorageTypeProto not found, type=" + type);
>   }
> }
> {code}
> Whenever a new storage type is added, a new "case" clause has to be added 
> here. That is not convenient, and it is easy to forget to change this file, 
> because newcomers usually focus on the change in StorageType.java (where new 
> storage types are added).






[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2018-10-10 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645974#comment-16645974
 ] 

Weiwei Yang commented on HDFS-12459:


Got it, thanks [~jojochuang]!

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch, 
> HDFS-12459.006.patch, HDFS-12459.006.patch, HDFS-12459.007.patch, 
> HDFS-12459.008.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations because that creates an extra RPC call. Instead, we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.
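
A rough sketch of that conversion (illustrative only, not the committed patch; it assumes the standard LocatedBlocks/BlockLocation accessors):

{code:java}
// Illustrative sketch only: build BlockLocation[] from LocatedBlocks returned by
// NamenodeProtocols#getBlockLocations, avoiding the extra DFSClient RPC.
import java.util.List;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public final class LocatedBlocksConverterSketch {
  public static BlockLocation[] toBlockLocations(LocatedBlocks blocks) {
    List<LocatedBlock> located = blocks.getLocatedBlocks();
    BlockLocation[] result = new BlockLocation[located.size()];
    for (int i = 0; i < located.size(); i++) {
      LocatedBlock lb = located.get(i);
      DatanodeInfo[] nodes = lb.getLocations();
      String[] names = new String[nodes.length];
      String[] hosts = new String[nodes.length];
      for (int j = 0; j < nodes.length; j++) {
        names[j] = nodes[j].getXferAddr();
        hosts[j] = nodes[j].getHostName();
      }
      result[i] = new BlockLocation(names, hosts, lb.getStartOffset(), lb.getBlockSize());
    }
    return result;
  }
}
{code}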






[jira] [Comment Edited] (HDDS-522) Implement PutBucket REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645972#comment-16645972
 ] 

Bharat Viswanadham edited comment on HDDS-522 at 10/11/18 4:47 AM:
---

Also, right now I added the bucket creation tests to a new file, as the endpoint 
URL is different for this; I will merge this back during HDDS-516.


was (Author: bharatviswa):
Also, right now the bucket creation tests are in a new file, as the endpoint URL 
is different for this; I will merge this back during HDDS-516.

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch, HDDS-522.02.patch
>
>
> The create bucket creates a bucket using createS3Bucket which has been added 
> as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645972#comment-16645972
 ] 

Bharat Viswanadham commented on HDDS-522:
-

Also, right now the bucket creation tests are in a new file, as the endpoint URL 
is different for this; I will merge this back during HDDS-516.

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch, HDDS-522.02.patch
>
>
> The create bucket creates a bucket using createS3Bucket which has been added 
> as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.






[jira] [Updated] (HDFS-13973) HDFS getErasureCodingPolicy command records incorrect audit event

2018-10-10 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13973:
--
Attachment: HDFS-13973.001.patch

FSNamesystem#getErasureCodingPolicy() has null passed in place of src for 
logAuditEvent(). The patch addresses this by passing the src path to 
logAuditEvent().

Thank you [~hgadre] and [~xiaochen] for helping me on this. I have uploaded the 
patch. Please review and suggest any modifications needed.

> HDFS getErasureCodingPolicy command records incorrect audit event
> -
>
> Key: HDFS-13973
> URL: https://issues.apache.org/jira/browse/HDFS-13973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13973.001.patch
>
>
> Value for the 'src' field is missing from the audit events for 
> getErasureCodingPolicy().






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645964#comment-16645964
 ] 

Bharat Viswanadham commented on HDDS-522:
-

Attached patch v02.

Found a case where, when only the environment variables AWS_ACCESS_KEY_ID and 
AWS_SECRET_ACCESS_KEY are set, the aws client constructs a v2 header. So I added 
logic to extract the access key id and use it as the username.

Tested both V2 and V4 headers in the robot test cases.
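
For illustration, a minimal sketch of that extraction, assuming the AWS signature v2 header format {{Authorization: AWS <AWSAccessKeyId>:<Signature>}} (not the attached patch):

{code:java}
// Illustrative sketch only: pull the access key id out of an AWS v2 Authorization
// header so it can be used as the username.
public final class V2HeaderSketch {
  public static String accessKeyId(String authorizationHeader) {
    if (authorizationHeader == null || !authorizationHeader.startsWith("AWS ")) {
      return null; // not a v2 header
    }
    String credentials = authorizationHeader.substring("AWS ".length());
    int colon = credentials.indexOf(':');
    return colon < 0 ? credentials : credentials.substring(0, colon);
  }
}
{code}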

 

 
{code:java}
Executing test s3 with 
/Users/bviswanadham/workspace/open-hadoop/hadoop/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/smoketest/../compose/ozones3/docker-compose.yaml
Removing network ozones3_default
WARNING: Network ozones3_default not found.
Creating network "ozones3_default" with the default driver
Creating ozones3_ozoneManager_1 ... 
Creating ozones3_datanode_1 ... 
Creating ozones3_s3g_1 ... 
Creating ozones3_scm_1 ... 
Creating ozones3_s3g_1
Creating ozones3_datanode_1
Creating ozones3_scm_1
Creating ozones3_datanode_1 ... done
Waiting 30s for cluster start up...
==
S3                                                                            
==
S3.Awscli :: S3 gateway test with aws cli                                     
==
Create volume and bucket for the tests                                | PASS |
--
Install aws s3 cli                                                    | PASS |
--
File upload and directory list                                        | PASS |
--
S3.Awscli :: S3 gateway test with aws cli                             | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
S3.Bucket :: S3 gateway test with aws cli for bucket operations               
==
Install aws s3 cli                                                    | PASS |
--
Create Bucket                                                         | PASS |
--
V4 Header                                                             | PASS |
--
Create Bucket after change in env                                     | PASS |
--
S3.Bucket :: S3 gateway test with aws cli for bucket operations       | PASS |
4 critical tests, 4 passed, 0 failed
4 tests total, 4 passed, 0 failed
==
S3                                                                    | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
{code}
 

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch, HDDS-522.02.patch
>
>
> The create bucket creates a bucket using createS3Bucket which has been added 
> as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.






[jira] [Updated] (HDDS-522) Implement PutBucket REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-522:

Attachment: HDDS-522.02.patch

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch, HDDS-522.02.patch
>
>
> The create bucket creates a bucket using createS3Bucket which has been added 
> as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645944#comment-16645944
 ] 

Hadoop QA commented on HDDS-621:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-621 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943345/HDDS-621.003.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1840d27102a1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645939#comment-16645939
 ] 

Hadoop QA commented on HDDS-621:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-621 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943342/HDDS-621.002.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d93dc9ac404c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (HDDS-625) putKey hangs for a long time after completion, sometimes forever

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645882#comment-16645882
 ] 

Arpit Agarwal edited comment on HDDS-625 at 10/11/18 3:47 AM:
--

Thread dump attached - looks like some non-daemon ThreadPoolExecutor threads 
are holding up process shutdown.
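
For illustration, one common remedy for this class of hang, sketched with standard JDK APIs (whether it applies to the Ozone client here is not confirmed by this thread):

{code:java}
// Illustrative sketch only: give a ThreadPoolExecutor daemon threads so that
// lingering pool threads cannot hold up JVM shutdown (an explicit shutdown()
// on client close is the other option).
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class DaemonExecutorSketch {
  public static ExecutorService newDaemonExecutor(int threads) {
    return Executors.newFixedThreadPool(threads, runnable -> {
      Thread t = new Thread(runnable);
      t.setDaemon(true);
      return t;
    });
  }
}
{code}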


was (Author: arpitagarwal):
Thread dump attached - likely we are not shutting down some petty resource.

> putKey hangs for a long time after completion, sometimes forever
> 
>
> Key: HDDS-625
> URL: https://issues.apache.org/jira/browse/HDDS-625
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Priority: Blocker
> Attachments: ozone-shell-thread-dump.txt
>
>
> putKey hangs, sometimes forever.
> TRACE log output in comment below.






[jira] [Commented] (HDDS-616) Collect all the robot test output and return with the right exit code

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645910#comment-16645910
 ] 

Anu Engineer commented on HDDS-616:
---

+1, go ahead and commit if you are confident. All tests on my local box are 
failing; I need to investigate why.

 

> Collect all the robot test output and return with the right exit code
> 
>
> Key: HDDS-616
> URL: https://issues.apache.org/jira/browse/HDDS-616
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-616.001.patch
>
>
> In the current acceptance test runner bash script 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) the output of each test 
> execution is overwritten by the following test executions.
> Another problem is that the exit code is always 0. In case of a failing test 
> the exit code should be non-zero at the end of the execution.
> The easiest way to fix these issues is to use the rebot tool from the Robot 
> Framework distribution. rebot is similar to robot, but instead of executing 
> tests it just renders the HTML report from previous test output.






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645891#comment-16645891
 ] 

Arpit Agarwal commented on HDDS-621:


Thanks! +1 pending Jenkins.

true is okay though. :) 

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645887#comment-16645887
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] Attached patch 003 to address the review comments and also 
removed the hard-coded value - {{true}}.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Attachment: HDDS-621.003.patch

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Commented] (HDDS-626) ozone.metadata.dirs should be tagged as REQUIRED

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645883#comment-16645883
 ] 

Dinesh Chitlangia commented on HDDS-626:


[~arpitagarwal] I see the REQUIRED tag is already available for 
{{ozone.metadata.dirs}} in trunk. 

{panel:title=https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/resources/ozone-default.xml}
{code:xml}
<property>
  <name>ozone.metadata.dirs</name>
  <value/>
  <tag>OZONE, OM, SCM, CONTAINER, REQUIRED, STORAGE</tag>
  <description>
    Ozone metadata is shared among OM, which acts as the namespace
    manager for ozone, SCM which acts as the block manager and data nodes
    which maintain the name of the key(Key Name and BlockIDs). This
    replicated and distributed metadata store is maintained under the
    directory pointed by this key. Since metadata can be I/O intensive, at
    least on OM and SCM we recommend having SSDs. If you have the luxury
    of mapping this path to SSDs on all machines in the cluster, that will
    be excellent.
    If Ratis metadata directories are not specified, Ratis server will emit a
    warning and use this path for storing its metadata too.
  </description>
</property>
{code}
{panel}
 
I will resolve it after your +1

> ozone.metadata.dirs should be tagged as REQUIRED
> 
>
> Key: HDDS-626
> URL: https://issues.apache.org/jira/browse/HDDS-626
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> ozone.metadata.dirs is a required config but is missing the REQUIRED tag.






[jira] [Commented] (HDDS-625) putKey hangs for a long time after completion, sometimes forever

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645882#comment-16645882
 ] 

Arpit Agarwal commented on HDDS-625:


Thread dump attached - likely we are not shutting down some petty resource.

> putKey hangs for a long time after completion, sometimes forever
> 
>
> Key: HDDS-625
> URL: https://issues.apache.org/jira/browse/HDDS-625
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Priority: Blocker
> Attachments: ozone-shell-thread-dump.txt
>
>
> putKey hangs, sometimes forever.
> TRACE log output in comment below.






[jira] [Updated] (HDDS-625) putKey hangs for a long time after completion, sometimes forever

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-625:
---
Attachment: ozone-shell-thread-dump.txt

> putKey hangs for a long time after completion, sometimes forever
> 
>
> Key: HDDS-625
> URL: https://issues.apache.org/jira/browse/HDDS-625
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Priority: Blocker
> Attachments: ozone-shell-thread-dump.txt
>
>
> putKey hangs, sometimes forever.
> TRACE log output in comment below.






[jira] [Commented] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645877#comment-16645877
 ] 

Dinesh Chitlangia commented on HDDS-619:


[~arpitagarwal] thanks for review & commit.

> hdds.db.profile should not be tagged as a required setting & should default 
> to DISK
> ---
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-619.001.patch
>
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:xml}
> <property>
>   <name>hdds.db.profile</name>
>   <value>SSD</value>
>   <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
>   <description>
>     This property allows user to pick a configuration
>     that tunes the RocksDB settings for the hardware it is running
>     on. Right now, we have SSD and DISK as profile options.
>   </description>
> </property>
> {code}






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645876#comment-16645876
 ] 

Arpit Agarwal commented on HDDS-621:


Thanks [~dineshchitlangia]. One more suggestion - instead of hard-coding the 
setting name, can we use {{OzoneConfigKeys.OZONE_ENABLED}}?

+1 with that fixed.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645875#comment-16645875
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] Attached patch 002 with the above changes, and also fixed the 
findbugs issue from the previous Jenkins run.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Attachment: HDDS-621.002.patch

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
 # -Path should be optional :default to current config directory _etc/hadoop_.-
 # genconf silently overwrites existing _ozone-site.xml_. It should never do so.
 # The generated config file should have _ozone.enabled = true_.
 # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.-

  was:
A few potential improvements to genconf:
 # Path should be optional - default to current config directory _etc/hadoop_.
 # genconf silently overwrites existing _ozone-site.xml_. It should never do so.
 # The generated config file should have _ozone.enabled = true_.
 # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.-


> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645869#comment-16645869
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] Sure, I think removing the default option is better, as 
HADOOP_CONF_DIR will already contain a template. I will post a new patch 
removing the default value, so the path will remain required.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch
>
>
> A few potential improvements to genconf:
>  # Path should be optional - default to current config directory _etc/hadoop_.
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-






[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645842#comment-16645842
 ] 

Hadoop QA commented on HDDS-516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 15s{color} 
| {color:red} HDDS-516 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943334/HDDS-516.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1346/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).






[jira] [Assigned] (HDDS-626) ozone.metadata.dirs should be tagged as REQUIRED

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-626:
--

Assignee: Dinesh Chitlangia

> ozone.metadata.dirs should be tagged as REQUIRED
> 
>
> Key: HDDS-626
> URL: https://issues.apache.org/jira/browse/HDDS-626
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> ozone.metadata.dirs is a required config but is missing the REQUIRED tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-626) ozone.metadata.dirs should be tagged as REQUIRED

2018-10-10 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-626:
--
Labels: newbie  (was: )

> ozone.metadata.dirs should be tagged as REQUIRED
> 
>
> Key: HDDS-626
> URL: https://issues.apache.org/jira/browse/HDDS-626
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> ozone.metadata.dirs is a required config but is missing the REQUIRED tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645799#comment-16645799
 ] 

Bharat Viswanadham edited comment on HDDS-516 at 10/11/18 1:20 AM:
---

Dependent on HDDS-522.
 # Done
 # Done
 # When I use it as you suggested, it does not work. I am not sure why; since my 
return type is Response, we may need an explicit conversion.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated GetObject. After 
this change, when I use the cp command:

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/testfile

upload: ../../tmp/testfile to s3://bucket1/testfile

This works fine, but when I give a path as below, it fails. I am not sure of 
the problem here.

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/dir1/dir2/file

It fails with the below error:
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to s3://bucket1/dir1/dir2/file An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|


was (Author: bharatviswa):
Dependant on HDDS-522.
 # Done
 # Done
 # When I use as you have suggested, it is not working. Not sure as my return 
is Response we need explicit conversion.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated the GetObject. After 
this change when I a m using cp command 

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
[s3://bucket1/|s3://bucket1/dir1/dir2/file]testfile 

upload: ../../tmp/testfile to [s3://bucket1/testfile]

This working fine, but when I give path as below it is failing. Not sure of the 
problem here.

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
s3://bucket1/dir1/dir2/file

this is failing with below error
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to [s3://bucket1/dir1/dir2/file] An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-616) Collect all the robot test output and return with the right exit code

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645806#comment-16645806
 ] 

Anu Engineer commented on HDDS-616:
---

+1. I will commit this shortly. Thanks for improving the test run experience.

> Collect all the robot test output and return with the right exit code
> 
>
> Key: HDDS-616
> URL: https://issues.apache.org/jira/browse/HDDS-616
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-616.001.patch
>
>
> In the current acceptance test runner bash script 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) the output of each test 
> execution is overridden by the following test executions.
> Another problem is that the exit code is always 0. In case of a failing test the 
> exit code should be non-zero at the end of the execution.
> The easiest way to fix these issues is to use the rebot tool from the robot 
> framework distribution. rebot is similar to robot, but instead of executing 
> tests it just renders the HTML report from previous test output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645804#comment-16645804
 ] 

Anu Engineer commented on HDDS-601:
---

Thanks for the patch. It looks pretty good, and thanks for taking care of this 
issue.
 # There is a checkstyle issue.
 # At least one test failure seems related; can you please take a quick look?

 

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered below exception after I changed a configuration in ozone-site and 
> restarted SCM and Datanode :
> Ozone Cluster : 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-601:
--
Affects Version/s: (was: 0.3.0)
   0.2.1
 Target Version/s: 0.3.0

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered below exception after I changed a configuration in ozone-site and 
> restarted SCM and Datanode :
> Ozone Cluster : 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645799#comment-16645799
 ] 

Bharat Viswanadham edited comment on HDDS-516 at 10/11/18 1:10 AM:
---

Dependent on HDDS-522.
 # Done
 # Done
 # When I use it as you suggested, it does not work. I am not sure why; since my 
return type is Response, we may need an explicit conversion.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated GetObject. After 
this change, when I use the cp command:

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/testfile

upload: ../../tmp/testfile to s3://bucket1/testfile

This works fine, but when I give a path as below, it fails. I am not sure of 
the problem here.

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/dir1/dir2/file

It fails with the below error:
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to s3://bucket1/dir1/dir2/file An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|


was (Author: bharatviswa):
Dependant on HDDS-522.
 # Done
 # Done
 # When I use as you have suggested, it is not working. Not sure as my return 
is Response we need explicit conversion.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated the GetObject. After 
this change when I a m using cp command 

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
[s3://bucket1/|s3://bucket1/dir1/dir2/file]testfile 

upload: ../../tmp/testfile to [s3://bucket1/testfile]

This working fine, but when I give path as below it is failing. Not sure of the 
problem here.

'aws s3 --endpoint-url [http://s3g:9878|http://s3g:9878/] cp /tmp/testfile 
[s3://bucket1/dir1/dir2/file]

this is failing with below error
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to [s3://bucket1/dir1/dir2/file] An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645799#comment-16645799
 ] 

Bharat Viswanadham commented on HDDS-516:
-

Dependent on HDDS-522.
 # Done
 # Done
 # When I use it as you suggested, it does not work. I am not sure why; since my 
return type is Response, we may need an explicit conversion.
 # Done
 # Done
 # Done

Changed the signature to remove volume, and also updated GetObject. After 
this change, when I use the cp command:

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/testfile

upload: ../../tmp/testfile to s3://bucket1/testfile

This works fine, but when I give a path as below, it fails. I am not sure of 
the problem here.

aws s3 --endpoint-url http://s3g:9878 cp /tmp/testfile s3://bucket1/dir1/dir2/file

It fails with the below error:
|${output} = Completed 29 Bytes/29 Bytes with 1 file(s) remaining upload 
failed: ../../tmp/testfile to s3://bucket1/dir1/dir2/file An error occurred 
(405) when calling the PutObject operation: Method Not Allowed|
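
To make the nested-key discussion above concrete, here is a minimal, hypothetical 
JAX-RS sketch of the kind of PUT handler involved: it captures multi-segment key 
names with a regex path template and checks for the x-amz-copy-source header. The 
class name, path template, and parameter names are illustrative assumptions, not 
the actual HDDS-516 patch code.

{code:java}
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

/**
 * Hypothetical sketch of an S3 gateway object endpoint. The {path:.+}
 * template lets nested keys such as dir1/dir2/file match one parameter,
 * which is one way to avoid the route mismatch behind a 405.
 */
@Path("/{bucket}/{path:.+}")
public class ObjectEndpointSketch {

  @PUT
  public Response put(
      @PathParam("bucket") String bucket,
      @PathParam("path") String keyPath,
      @HeaderParam("x-amz-copy-source") String copySource) {
    if (copySource != null) {
      // Copy request: read the source key and write it under bucket/keyPath.
      return Response.ok().build();
    }
    // Plain PUT: store the request body as the new key.
    return Response.ok().build();
  }
}
{code}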

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-626) ozone.metadata.dirs should be tagged as a REQUIRED

2018-10-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-626:
--

 Summary: ozone.metadata.dirs should be tagged as a REQUIRED
 Key: HDDS-626
 URL: https://issues.apache.org/jira/browse/HDDS-626
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal


ozone.metadata.dirs is a required config but is missing the REQUIRED tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-626) ozone.metadata.dirs should be tagged as REQUIRED

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-626:
---
Summary: ozone.metadata.dirs should be tagged as REQUIRED  (was: 
ozone.metadata.dirs should be tagged as a REQUIRED)

> ozone.metadata.dirs should be tagged as REQUIRED
> 
>
> Key: HDDS-626
> URL: https://issues.apache.org/jira/browse/HDDS-626
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>
> ozone.metadata.dirs is a required config but is missing the REQUIRED tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

Attachment: HDDS-516.03.patch

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager.  This API can only be done 
> after the PUT OBJECT Call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename. 
> Work Items or JIRAs
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody would be interested, I can be more specific, explain what we need or 
> help).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645793#comment-16645793
 ] 

Hudson commented on HDDS-619:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15174 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15174/])
HDDS-619. hdds.db.profile should not be tagged as a required setting & (arp: 
rev 2bd000c85195416b9bcd06a3abb5100aeeda9727)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
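
For code that consumes this setting, the practical effect of the change is that 
the profile can be read with a DISK fallback rather than treated as mandatory. A 
minimal, hypothetical sketch (the class below and the literal fallback are 
illustrative assumptions, not the committed code) could look like:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class DbProfileDefaultSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Hypothetical read: fall back to DISK when hdds.db.profile is unset.
    String profile = conf.get("hdds.db.profile", "DISK");
    System.out.println("Using RocksDB profile: " + profile);
  }
}
{code}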


> hdds.db.profile should not be tagged as a required setting & should default 
> to DISK
> ---
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-619.001.patch
>
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
>   <property>
>     <name>hdds.db.profile</name>
>     <value>SSD</value>
>     <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
>     <description>
>       This property allows user to pick a configuration
>       that tunes the RocksDB settings for the hardware it is running
>       on. Right now, we have SSD and DISK as profile options.
>     </description>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645780#comment-16645780
 ] 

Arpit Agarwal commented on HDDS-621:


Thanks [~dineshchitlangia]. The patch looks good. 

One comment - instead of _etc/hadoop_, the default should be the value of 
_HADOOP_CONF_DIR_. If you want to leave the default option out of this 
patch, that is fine. The other changes are a good improvement too.
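
As a rough illustration of the overwrite guard and the HADOOP_CONF_DIR fallback 
discussed here, a sketch along these lines (plain JDK only; the class, method, 
and default-directory names are assumptions, not the actual genconf code) could 
work:

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class GenconfPathSketch {

  private GenconfPathSketch() {
  }

  /**
   * Hypothetical helper: pick the output directory (falling back to
   * HADOOP_CONF_DIR, then etc/hadoop, when no path is given) and refuse
   * to overwrite an existing ozone-site.xml.
   */
  public static Path resolveOutputFile(String userSuppliedPath) {
    String dir = userSuppliedPath != null
        ? userSuppliedPath
        : System.getenv().getOrDefault("HADOOP_CONF_DIR", "etc/hadoop");
    Path target = Paths.get(dir, "ozone-site.xml");
    if (Files.exists(target)) {
      throw new IllegalStateException(
          target + " already exists; refusing to overwrite it.");
    }
    return target;
  }
}
{code}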

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch
>
>
> A few potential improvements to genconf:
>  # Path should be optional - default to current config directory _etc/hadoop_.
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645769#comment-16645769
 ] 

Hadoop QA commented on HDDS-621:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-ozone/tools generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-ozone/tools |
|  |  Format string should use %n rather than n in 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(String)
  At GenerateOzoneRequiredConfigurations.java:rather than n in 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(String)
  At GenerateOzoneRequiredConfigurations.java:[line 133] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Updated] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-619:
---
  Resolution: Fixed
   Fix Version/s: 0.3.0
Target Version/s:   (was: 0.4.0)
  Status: Resolved  (was: Patch Available)

+1

Thanks [~dineshchitlangia]. I've committed this.

> hdds.db.profile should not be tagged as a required setting & should default 
> to DISK
> ---
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-619.001.patch
>
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
>   <property>
>     <name>hdds.db.profile</name>
>     <value>SSD</value>
>     <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
>     <description>
>       This property allows user to pick a configuration
>       that tunes the RocksDB settings for the hardware it is running
>       on. Right now, we have SSD and DISK as profile options.
>     </description>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-625) putKey hangs for a long time after completion, sometimes forever

2018-10-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-625:
--

 Summary: putKey hangs for a long time after completion, sometimes 
forever
 Key: HDDS-625
 URL: https://issues.apache.org/jira/browse/HDDS-625
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Arpit Agarwal


putKey hangs, sometimes forever.

TRACE log output in comment below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-625) putKey hangs for a long time after completion, sometimes forever

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645765#comment-16645765
 ] 

Arpit Agarwal commented on HDDS-625:


Log spew
{code}
 for request: 
RaftClientReply:client-285DDCB068F0->0363759f-57c6-44b6-a27e-860ea01e2033@group-8D698BBFFC1E,
 cid=2, SUCCESS, commits[0363759f-57c6-44b6-a27e-860ea01e2033:c15] exception: 
null
2018-10-10 17:27:33 TRACE ProtobufRpcEngine:219 - 1: Call -> 
localhost/127.0.0.1:9862: commitKey {keyArgs { volumeName: "vol1" bucketName: 
"bucket1" keyName: "key3" dataSize: 1024 keyLocations { blockID { containerID: 
6 localID: 100874168054448132 } shouldCreateContainer: false offset: 0 length: 
1024 createVersion: 0 blockCommitSequenceId: 15 } } clientID: 129887144793550}
2018-10-10 17:27:33 DEBUG Client:1127 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9862 from agarwal sending #6 
org.apache.hadoop.ozone.protocol.OzoneManagerProtocol.commitKey
2018-10-10 17:27:33 DEBUG Client:1181 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9862 from agarwal got value #6
2018-10-10 17:27:33 DEBUG ProtobufRpcEngine:249 - Call: commitKey took 5ms
2018-10-10 17:27:33 TRACE ProtobufRpcEngine:287 - 1: Response <- 
localhost/127.0.0.1:9862: commitKey {status: OK}
2018-10-10 17:27:35 DEBUG TimeoutScheduler:82 - run a task: sid 0
2018-10-10 17:27:35 TRACE TimeoutScheduler:63 - Successfully ran task #0
2018-10-10 17:27:36 DEBUG TimeoutScheduler:82 - run a task: sid 1
2018-10-10 17:27:36 TRACE TimeoutScheduler:63 - Successfully ran task #1
2018-10-10 17:27:36 DEBUG TimeoutScheduler:82 - run a task: sid 2
2018-10-10 17:27:36 DEBUG TimeoutScheduler:110 - Schedule a shutdown task: 
grace 1 m, sid 3
2018-10-10 17:27:36 TRACE TimeoutScheduler:63 - Successfully ran task #2
2018-10-10 17:27:37 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:27:43 DEBUG Client:1270 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9860 from agarwal: closed
2018-10-10 17:27:43 DEBUG Client:1082 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9860 from agarwal: stopped, remaining connections 1
2018-10-10 17:27:43 DEBUG Client:1270 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9862 from agarwal: closed
2018-10-10 17:27:43 DEBUG Client:1082 - IPC Client (1486566962) connection to 
localhost/127.0.0.1:9862 from agarwal: stopped, remaining connections 0
2018-10-10 17:27:47 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:27:57 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:07 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:17 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:27 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:36 DEBUG TimeoutScheduler:123 - shutdown scheduler: sid 3
2018-10-10 17:28:36 TRACE TimeoutScheduler:63 - Successfully ran shutdown task 
#3
2018-10-10 17:28:37 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:47 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:28:57 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:07 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:17 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:27 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:37 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:47 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:29:57 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:30:07 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:30:17 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
2018-10-10 17:30:27 DEBUG SlidingWindow:176 - client-285DDCB068F0->RAFT: 
requests[]
{code}

> putKey hangs for a long time after completion, sometimes forever
> 
>
> Key: HDDS-625
> URL: https://issues.apache.org/jira/browse/HDDS-625
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> putKey hangs, sometimes forever.
> TRACE log output in comment below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-624:
-

 Summary: PutBlock fails with Unexpected Storage Container Exception
 Key: HDDS-624
 URL: https://issues.apache.org/jira/browse/HDDS-624
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


As per HDDS-622, Datanodes were shutting down while running MR jobs due to 
an issue in RocksDBStore. To avoid that failure, set the property 
_ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml.

Now running a MapReduce job fails with the below error:
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539208750583_0005
18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539208750583_0005
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
application_1539208750583_0005
18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
uber mode : false
18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
attempt_1539208750583_0005_r_00_0, Status : FAILED
Error: java.io.IOException: Unexpected Storage Container Exception: 
java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 
"f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
offset: 0
len: 5017
}
}
}

at 
org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 

[jira] [Commented] (HDDS-101) SCM CA: generate CSR for SCM CA clients

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645752#comment-16645752
 ] 

Hadoop QA commented on HDDS-101:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
21s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-hdds/common: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943322/HDDS-101-HDDS-4.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 592d8734c798 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / ddb7ea0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1344/artifact/out/diff-checkstyle-hadoop-hdds_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1344/testReport/ |
| Max. process+thread count | 458 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1344/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM 

[jira] [Commented] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645739#comment-16645739
 ] 

Hadoop QA commented on HDDS-601:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 48s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.server.TestSCMChillModeManager |
|   | hadoop.hdds.scm.server.TestSCMDatanodeHeartbeatDispatcher |
|   | hadoop.hdds.scm.node.TestNodeManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-601 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943307/HDDS-601.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8f1d54c40893 3.13.0-139-generic 

[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-10 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645731#comment-16645731
 ] 

Xiaoyu Yao commented on HDDS-580:
-

Thanks [~ajayydv] for working on this. The patch looks good to me overall. Here 
are a few comments:

SecurityUtils.java
Line 59: NIT: keyWriter=>keyHandler

Line 64/70: should we return the Public/Private KeyPair from the create or load 
methods so that the key can be used by individual components?   

StorageContainerManager.java
Line 483: should we move this to bootstrap only during INIT and/or a separate 
INIT_SECURITY (if the SCM has been INIT without security) with more logging? We 
also need a member to hold the public/private key pair returned. This way, the 
one-time security init will be done explicitly instead of implicitly. 

OzoneManager.java
Line 350-357: should we move this to CREATEOBJECTSTORE or a separate 
INIT_SECURITY with more logging? This way, the one-time security init will be 
done explicitly instead of implicitly. We also need a member to hold the 
public/private key pair returned. 
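
As an illustration of returning the key pair to the caller, a minimal sketch 
using only the JDK KeyPairGenerator API (the class and method names below are 
assumptions for illustration, not the HDDS-580 patch code) could be:

{code:java}
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

/**
 * Hypothetical helper: generates an RSA key pair and returns it to the
 * caller (e.g. SCM/OM init), so the component can hold on to it instead of
 * relying on an implicit side effect of bootstrap.
 */
public final class KeyPairBootstrapSketch {

  private KeyPairBootstrapSketch() {
  }

  public static KeyPair generateKeyPair() throws NoSuchAlgorithmException {
    KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
    generator.initialize(2048);
    return generator.generateKeyPair();
  }
}
{code}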

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch
>
>
> We will need to add an API that leverages the key generator from HDDS-100 to 
> generate a public/private key pair for OM/SCM; this will be called by the 
> scm/om admin cli with the "-init" cmd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-550) Serialize ApplyTransaction calls per Container in ContainerStateMachine

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645728#comment-16645728
 ] 

Hadoop QA commented on HDDS-550:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 24m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
41s{color} | {color:green} root: The patch generated 0 new + 4 unchanged - 1 
fixed = 4 total (was 5) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
46s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} 

[jira] [Commented] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645726#comment-16645726
 ] 

Hadoop QA commented on HDDS-619:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943313/HDDS-619.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux b7bf4e320c2d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 045069e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1342/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1342/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-10 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645715#comment-16645715
 ] 

Namit Maheshwari commented on HDDS-600:
---

So, the hadoop classpath had the ozone jars as above, but the Mapreduce 
jobs were still failing to pick them up.

To proceed further, I added the ozone plugin and ozone filesystem jars to:
 # the mapreduce.application.classpath property in mapred-site.xml
 # the yarn.application.classpath property in yarn-site.xml

{code:java}
/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.4.0-SNAPSHOT.jar{code}
After this, the job was able to pick up the ozone filesystem jar.
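For illustration only, the mapred-site.xml side of that change might look roughly like the snippet below (with a matching addition to yarn.application.classpath in yarn-site.xml); the jar paths are the ones from this cluster, and the existing classpath value should be kept and appended to:
{code:xml}
<property>
  <name>mapreduce.application.classpath</name>
  <!-- keep whatever is already configured here and append the two Ozone jars -->
  <value>...existing entries...,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.4.0-SNAPSHOT.jar</value>
</property>
{code}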

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Attachment: HDDS-621.001.patch
Status: Patch Available  (was: Open)

[~arpitagarwal] attached patch 001 for your review.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-621.001.patch
>
>
> A few potential improvements to genconf:
>  # Path should be optional - default to current config directory _etc/hadoop_.
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-623) On SCM UI, Node Manager info is empty

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-623:
-

 Summary: On SCM UI, Node Manager info is empty
 Key: HDDS-623
 URL: https://issues.apache.org/jira/browse/HDDS-623
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png

The following fields are empty:

Node Manager: Minimum chill mode nodes 
Node Manager: Out-of-node chill mode 
Node Manager: Chill mode status 
Node Manager: Manual chill mode

Please see the attached screenshot: !Screen Shot 2018-10-10 at 4.19.59 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-622:
-

 Summary: Datanode shuts down with RocksDBStore 
java.lang.NoSuchMethodError
 Key: HDDS-622
 URL: https://issues.apache.org/jira/browse/HDDS-622
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Datanodes are registered fine on a Hadoop + Ozone cluster.

While running jobs against Ozone, the Datanode shuts down as below:
{code:java}
2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
(RaftLogWorker.java:rollLogSegment(263)) - Rolling 
segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
2018-10-10 21:50:42,714 INFO impl.RaftServerImpl (ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: set configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl (RaftServerImpl.java:applyLogToStateMachine(1153)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 proto:(t:2, i:1)SMLOGENTRY, client-894EC0846FDF, cid=0
2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater (ExitUtils.java:terminate(86)) - Terminating with exit status 2: StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the StateMachineUpdater hits Throwable
java.lang.NoSuchMethodError: org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
at org.apache.hadoop.utils.RocksDBStore.<init>(RocksDBStore.java:74)
at 
org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
at 
org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
at 
org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
at java.lang.Thread.run(Thread.java:748)
2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
************************************************************/
{code}
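One plausible reading of the NoSuchMethodError (an assumption, not confirmed in this thread) is that an older hadoop-common jar on the datanode classpath resolves ahead of the one that ships the four-argument MBeans.register overload. Reconstructed purely from the error text, the missing signature corresponds to roughly:
{code:java}
import java.util.Map;
import javax.management.ObjectName;

// Signature reconstructed from the NoSuchMethodError above; illustrative only.
// If the hadoop-common jar that wins on the classpath predates this overload,
// the call made by RocksDBStore cannot be resolved at runtime.
interface MBeansRegisterSignature {
  ObjectName register(String serviceName, String mBeanName,
                      Map<String, String> jmxProperties, Object mBean);
}
{code}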
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-101) SCM CA: generate CSR for SCM CA clients

2018-10-10 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-101:

Attachment: HDDS-101-HDDS-4.003.patch

> SCM CA: generate CSR for SCM CA clients
> ---
>
> Key: HDDS-101
> URL: https://issues.apache.org/jira/browse/HDDS-101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-101-HDDS-4-002.patch, HDDS-101-HDDS-4.001.patch, 
> HDDS-101-HDDS-4.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-373) Ozone genconf tool must generate ozone-site.xml with sample values instead of a template

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-373:
---
Description: 
As discussed with [~anu], currently, the genconf tool generates a template 
ozone-site.xml. This is not very useful for new users as they would have to 
understand what values should be set for the minimal configuration properties.

This Jira proposes to modify the ozone-default.xml which is leveraged by 
genconf tool to generate ozone-site.xml

 

Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option to 
generate configs for starting pseudo-cluster. This should be useful for quick 
dev-testing.

  was:
As discussed with [~anu], currently, the genconf tool generates a template 
ozone-site.xml. This is not very useful for new users as they would have to 
understand what values should be set for the minimal configuration properties.

This Jira proposes to modify the ozone-default.xml which is leveraged by 
genconf tool to generate ozone-site.xml


> Ozone genconf tool must generate ozone-site.xml with sample values instead of 
> a template
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-373.001.patch
>
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml which is leveraged by 
> genconf tool to generate ozone-site.xml
>  
> Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option 
> to generate configs for starting pseudo-cluster. This should be useful for 
> quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
 # Path should be optional - default to current config directory _etc/hadoop_.
 # genconf silently overwrites existing _ozone-site.xml_. It should never do so.
 # The generated config file should have _ozone.enabled = true_.
 # -Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.-

  was:
A few potential improvements to genconf:
# Path should be optional - default to current config directory _etc/hadoop_.
# genconf silently overwrites existing _ozone-site.xml_. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.



> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
>  # Path should be optional - default to current config directory _etc/hadoop_.
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{--pseudo}} option to generate configs for starting 
> pseudo-cluster. This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
 # Path should be optional - default to current config directory _etc/hadoop_.
 # genconf silently overwrites existing _ozone-site.xml_. It should never do so.
 # The generated config file should have _ozone.enabled = true_.
 # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.-

  was:
A few potential improvements to genconf:
 # Path should be optional - default to current config directory _etc/hadoop_.
 # genconf silently overwrites existing _ozone-site.xml_. It should never do so.
 # The generated config file should have _ozone.enabled = true_.
 # -Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.-


> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
>  # Path should be optional - default to current config directory _etc/hadoop_.
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645648#comment-16645648
 ] 

Hadoop QA commented on HDDS-439:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 43s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-439 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943309/HDDS-439.003.patch |
| Optional Tests |  

[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645634#comment-16645634
 ] 

Hadoop QA commented on HDFS-13878:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 443 unchanged - 0 fixed = 444 total (was 443) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
11s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13878 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943308/HDFS-13878.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8308d2049876 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf3d591 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25251/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25251/testReport/ |
| Max. process+thread count | 650 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25251/console |
| Powered by | 

[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645627#comment-16645627
 ] 

Arpit Agarwal commented on HDDS-621:


Sure, fine to push that part out.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-13976.

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks [~lukmajercak] for the backport.
Committed to branch-2.9.

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch, 
> HDFS-12813.branch-2.9.001.patch, TestRequestHedgingProxyProvider.png
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size is 
> 1, unwrapping is not done for the InvocationTargetException. For a target 
> proxy size of 1, the unwrapping should be done only to the first level, whereas 
> for multiple proxies it should be done at two levels.
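For readers following the description above, a rough sketch of the unwrapping idea (not the actual RequestHedgingProxyProvider code; the method and variable names are illustrative):
{code:java}
import java.lang.reflect.InvocationTargetException;

// Illustrative only: a single-proxy call wraps the real failure once, while a
// hedged multi-proxy call wraps it one level deeper, so the unwrap depth
// depends on the number of target proxies.
final class UnwrapSketch {
  static Throwable unwrap(InvocationTargetException ex, int targetProxyCount) {
    Throwable firstLevel = ex.getCause();
    if (firstLevel == null) {
      return ex;
    }
    if (targetProxyCount <= 1 || firstLevel.getCause() == null) {
      return firstLevel;                 // one level is enough
    }
    return firstLevel.getCause();        // e.g. the wrapped MultiException
  }
}
{code}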



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645624#comment-16645624
 ] 

Dinesh Chitlangia edited comment on HDDS-621 at 10/10/18 10:20 PM:
---

[~arpitagarwal] sounds good. 
{quote}Have a {{--pseudo}} option to generate configs for starting 
pseudo-cluster. This should be useful for quick dev-testing.
{quote}
Is it fine to push this improvement in HDDS-373 which aims to generate template 
with values? It doesn't propose an option like --pseudo. Might be good to 
address it there. What do you think?


was (Author: dineshchitlangia):
[~arpitagarwal] sounds good. 
{quote}Have a {{--pseudo}} option to generate configs for starting 
pseudo-cluster. This should be useful for quick dev-testing.
{quote}
Is it fine to push this improvement in HDDS-373 which aims to generate template 
with values. It doesn't propose an option like --pseudo. Might be good to 
address it there. What do you think?

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645624#comment-16645624
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] sounds good. 
{quote}Have a {{--pseudo}} option to generate configs for starting 
pseudo-cluster. This should be useful for quick dev-testing.
{quote}
Is it fine to push this improvement in HDDS-373 which aims to generate template 
with values. It doesn't propose an option like --pseudo. Might be good to 
address it there. What do you think?

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-10 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645620#comment-16645620
 ] 

Íñigo Goiri commented on HDFS-13976:


+1 on  [^HDFS-12813.branch-2.9.001.patch].
Committing to branch-2.9.

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch, 
> HDFS-12813.branch-2.9.001.patch, TestRequestHedgingProxyProvider.png
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size is 
> 1, unwrapping is not done for the InvocationTargetException. For a target 
> proxy size of 1, the unwrapping should be done only to the first level, whereas 
> for multiple proxies it should be done at two levels.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-619:
---
Attachment: HDDS-619.001.patch
Status: Patch Available  (was: Open)

> hdds.db.profile should not be tagged as a required setting & should default 
> to DISK
> ---
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-619.001.patch
>
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
> 
>   hdds.db.profile
>   SSD
>   OZONE, OM, PERFORMANCE, REQUIRED
>   
> This property allows user to pick a configuration
> that tunes the RocksDB settings for the hardware it is running
> on. Right now, we have SSD and DISK as profile options.
>   
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-619) hdds.db.profile should not be tagged as a required setting & should default to DISK

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-619:
---
Summary: hdds.db.profile should not be tagged as a required setting & 
should default to DISK  (was: hdds.db.profile should not be tagged as a 
required setting)

> hdds.db.profile should not be tagged as a required setting & should default 
> to DISK
> ---
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
> 
>   hdds.db.profile
>   SSD
>   OZONE, OM, PERFORMANCE, REQUIRED
>   
> This property allows user to pick a configuration
> that tunes the RocksDB settings for the hardware it is running
> on. Right now, we have SSD and DISK as profile options.
>   
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-619) hdds.db.profile should not be tagged as a required setting

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645605#comment-16645605
 ] 

Arpit Agarwal commented on HDDS-619:


Correct.

> hdds.db.profile should not be tagged as a required setting
> --
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
> 
>   hdds.db.profile
>   SSD
>   OZONE, OM, PERFORMANCE, REQUIRED
>   
> This property allows user to pick a configuration
> that tunes the RocksDB settings for the hardware it is running
> on. Right now, we have SSD and DISK as profile options.
>   
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-619) hdds.db.profile should not be tagged as a required setting

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645602#comment-16645602
 ] 

Dinesh Chitlangia commented on HDDS-619:


[~arpitagarwal] Thanks for filing the issue. We would like to make it default 
to DISK and remove the REQUIRED tag.

Please correct me if I am wrong.
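For illustration only (this is not the patch), the intended behaviour amounts to falling back to DISK when the property is unset instead of treating it as required, e.g.:
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Illustrative only: read hdds.db.profile with DISK as the fallback default.
public class DbProfileDefaultSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    String profile = conf.get("hdds.db.profile", "DISK"); // DISK when the key is unset
    System.out.println("RocksDB profile in effect: " + profile);
  }
}
{code}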

> hdds.db.profile should not be tagged as a required setting
> --
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
> 
>   hdds.db.profile
>   SSD
>   OZONE, OM, PERFORMANCE, REQUIRED
>   
> This property allows user to pick a configuration
> that tunes the RocksDB settings for the hardware it is running
> on. Right now, we have SSD and DISK as profile options.
>   
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645601#comment-16645601
 ] 

Arpit Agarwal commented on HDDS-621:


I think the simplest approach (just fail) is fine.

We want it to do the right thing in the common case when there is no config 
file. Appending a timestamp will add one more rename step.
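A minimal sketch of that "just fail" behaviour, assuming the target file name stays ozone-site.xml; the class and method names are illustrative, not the actual genconf code:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative only: refuse to overwrite an existing ozone-site.xml instead of
// silently replacing it or renaming it with a timestamp.
final class GenconfOverwriteCheckSketch {
  static Path targetFile(String outputDir) throws IOException {
    Path target = Paths.get(outputDir, "ozone-site.xml");
    if (Files.exists(target)) {
      throw new IOException(target + " already exists; refusing to overwrite it.");
    }
    return target;
  }
}
{code}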

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-540) Unblock certain SCM client APIs from SCM#checkAdminAccess

2018-10-10 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645597#comment-16645597
 ] 

Xiaoyu Yao commented on HDDS-540:
-

The original issue is now temporarily unblocked with HDDS-614; we will revisit 
this when the admin check is done on the HDDS-4 branch.

> Unblock certain SCM client APIs from SCM#checkAdminAccess
> -
>
> Key: HDDS-540
> URL: https://issues.apache.org/jira/browse/HDDS-540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Currently most of the SCM client APIs have been guarded with checkAdminAccess. 
> This ticket is opened to unblock non-admin clients from accessing SCM 
> container/pipeline during block allocation. 
>  
> {code}
> scm_1           | 2018-09-22 02:52:32 INFO  Server:2726 - IPC Server handler 
> 5 on 9860, call Call#4 Retry#0 
> org.apache.hadoop.ozone.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:34101
> scm_1           | java.io.IOException: Access denied for user 
> testuser/datan...@example.com. Superuser privilege is required.
> scm_1           | at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:867)
> scm_1           | at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
> scm_1           | at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:120)
> scm_1           | at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:10790)
> scm_1           | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> scm_1           | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> scm_1           | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> scm_1           | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> scm_1           | at java.security.AccessController.doPrivileged(Native 
> Method)
> scm_1           | at javax.security.auth.Subject.doAs(Subject.java:422)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> scm_1           | at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-601:

Status: Patch Available  (was: Open)

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered the exception below after I changed a configuration in ozone-site and 
> restarted SCM and the Datanode:
> Ozone Cluster: 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645593#comment-16645593
 ] 

Dinesh Chitlangia edited comment on HDDS-621 at 10/10/18 9:58 PM:
--

[~arpitagarwal] Thanks for suggested improvements.
{quote}genconf silently overwrites existing _ozone-site.xml_. It should never 
do so.
{quote}
Would you rather prefer:
 # We avoid generating the file and let user know that a file already exists OR
 # Generate the file with name .xml_n where n is timestamp


was (Author: dineshchitlangia):
[~arpitagarwal] Thanks for suggested improvements.
{quote}genconf silently overwrites existing _ozone-site.xml_. It should never 
do so.
{quote}
Would you rather prefer:
 # We avoid generating the file and let user know that a file already exists OR
 # Generate the file with name _n.xml where n is timestamp

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645595#comment-16645595
 ] 

Hanisha Koneru commented on HDDS-601:
-

I have opened a Jira HDDS-618 to track decoupling registration and heartbeat in 
DN so that reregistration can be faster.

 

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered the below exception after I changed a configuration in ozone-site and 
> restarted SCM and Datanode:
> Ozone Cluster : 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645593#comment-16645593
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] Thanks for suggested improvements.
{quote}genconf silently overwrites existing _ozone-site.xml_. It should never 
do so.
{quote}
Which would you prefer:
 # We avoid generating the file and let the user know that a file already exists (a minimal sketch of this option follows below), OR
 # Generate the file with name _n.xml where n is a timestamp
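
As a rough sketch of option 1 (illustrative only; the generate method, directory argument, and template string below are placeholders, not the genconf tool's actual code):
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Illustrative only: refuse to overwrite an existing ozone-site.xml. */
public class GenconfOverwriteGuard {
  public static void generate(String confDir, String templateXml) throws IOException {
    Path target = Paths.get(confDir, "ozone-site.xml");
    if (Files.exists(target)) {
      // Never silently overwrite; tell the user and bail out.
      System.err.println(target + " already exists, refusing to overwrite it.");
      return;
    }
    Files.write(target, templateXml.getBytes(StandardCharsets.UTF_8));
  }
}
{code}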

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-10-10 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645590#comment-16645590
 ] 

Daryn Sharp commented on HDFS-13697:


Let me dig out my interrupted patch because the posted patch has a few fatal 
flaws.
 # There must be no conditionals regarding the stored ugi. The current user is 
the current user, regardless of token, and it's never the login user.
 # It's also not performing the entire request inside the correct doAs context. 
All processing must be within the doAs or unexpected authentication may happen 
with the wrong identity.
 # If all tests are passing, the patch is flawed. I recall the tests codified 
bugs.

I'm glad you noted HADOOP-10771.  I ripped auth url out of webhdfs after it 
caused never ending auth issues.  I documented the auth url issues years ago 
but bowed to offline pressure to allow that completely broken atrocity to be 
integrated because "it wouldn't affect me" and I was needed by some critical 
feature – turned out to be the KMS and did affect me...

Anyway, I don't think everything can be fixed at once, but the kms client needs 
to be done correctly.
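
As a rough illustration of the second point only (not the HDFS-13697 patch itself), the whole KMS request can be wrapped in a single doAs on the current user, so no re-login as the login user can happen mid-call:
{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
import org.apache.hadoop.security.UserGroupInformation;

/** Illustration of running the entire decrypt request inside one doAs context. */
public class DoAsDecryptSketch {
  public static KeyVersion decrypt(KeyProviderCryptoExtension provider,
      EncryptedKeyVersion ekv) throws Exception {
    // The current user (including a proxy user, if any) performs every step
    // of the request; no conditionals on a stored ugi.
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return ugi.doAs((PrivilegedExceptionAction<KeyVersion>)
        () -> provider.decryptEncryptedKey(ekv));
  }
}
{code}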

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.09.patch, HDFS-13697.10.patch, HDFS-13697.11.patch, 
> HDFS-13697.12.patch, HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack 
> might not have doAs privileged execution call (in the DFSClient for example). 
> This results in losing the proxy user from UGI as UGI.getCurrentUser finds 
> no AccessControllerContext and does a re-login for the login user only.
> This can cause the following for example: if we have set up the oozie user to 
> be entitled to perform actions on behalf of example_user but oozie is 
> forbidden to decrypt any EDEK (for security reasons), due to the above issue, 
> example_user entitlements are lost from UGI and the following error is 
> reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
> [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
> [encrypted_key]!!
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> 

[jira] [Assigned] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-621:
--

Assignee: Dinesh Chitlangia

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
# Path should be optional - default to current config directory _etc/hadoop_.
# genconf silently overwrites existing _ozone-site.xml_. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.


  was:
A few potential improvements to genconf:
# Path should be optional - default to current config directory _etc/hadoop_.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.



> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
# Path should be optional - default to current config directory _etc/hadoop_.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.


  was:
A few potential improvements to genconf:
# Path should be optional - default to current dir.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.



> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current config directory _etc/hadoop_.
> # genconf silently overwrites existing config files. It should never do so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-621:
---
Description: 
A few potential improvements to genconf:
# Path should be optional - default to current dir.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.
# Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
This should be useful for quick dev-testing.


  was:
A few potential improvements to genconf:
# Path should be optional - default to current dir.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.



> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> A few potential improvements to genconf:
> # Path should be optional - default to current dir.
> # genconf silently overwrites existing config files. It should never do so. 
> # The generated config file should have _ozone.enabled = true_.
> # Have a {{--pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-621) ozone genconf improvements

2018-10-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-621:
--

 Summary: ozone genconf improvements
 Key: HDDS-621
 URL: https://issues.apache.org/jira/browse/HDDS-621
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Affects Versions: 0.4.0
Reporter: Arpit Agarwal


A few potential improvements to genconf:
# Path should be optional - default to current dir.
# genconf silently overwrites existing config files. It should never do so. 
# The generated config file should have _ozone.enabled = true_.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-619) hdds.db.profile should not be tagged as a required setting

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-619:
--

Assignee: Dinesh Chitlangia

> hdds.db.profile should not be tagged as a required setting
> --
>
> Key: HDDS-619
> URL: https://issues.apache.org/jira/browse/HDDS-619
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> hdds.db.profile is tagged as a required setting and defaults to SSD. It 
> should default to DISK instead.
> {code:java}
> <property>
>   <name>hdds.db.profile</name>
>   <value>SSD</value>
>   <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
>   <description>
>     This property allows user to pick a configuration
>     that tunes the RocksDB settings for the hardware it is running
>     on. Right now, we have SSD and DISK as profile options.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-620) ozone.scm.client.address should be an optional setting

2018-10-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-620:
--

 Summary: ozone.scm.client.address should be an optional setting
 Key: HDDS-620
 URL: https://issues.apache.org/jira/browse/HDDS-620
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


{{ozone.scm.client.address}} should be an optional setting. Clients can 
fall back to {{ozone.scm.names}} if the former is unspecified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-619) hdds.db.profile should not be tagged as a required setting

2018-10-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-619:
--

 Summary: hdds.db.profile should not be tagged as a required setting
 Key: HDDS-619
 URL: https://issues.apache.org/jira/browse/HDDS-619
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


hdds.db.profile is tagged as a required setting and defaults to SSD. It should 
default to DISK instead.
{code:java}
<property>
  <name>hdds.db.profile</name>
  <value>SSD</value>
  <tag>OZONE, OM, PERFORMANCE, REQUIRED</tag>
  <description>
    This property allows user to pick a configuration
    that tunes the RocksDB settings for the hardware it is running
    on. Right now, we have SSD and DISK as profile options.
  </description>
</property>
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-439:
---
Attachment: HDDS-439.003.patch
Status: Patch Available  (was: Open)

[~arpitagarwal] Attached patch 003 to address checkstyle violations.

Test failures are unrelated.

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>  Labels: newbie
> Attachments: HDDS-439.001.patch, HDDS-439.002.patch, 
> HDDS-439.003.patch
>
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-10 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-439:
---
Status: Open  (was: Patch Available)

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>  Labels: newbie
> Attachments: HDDS-439.001.patch, HDDS-439.002.patch
>
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-10 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645571#comment-16645571
 ] 

Siyao Meng commented on HDFS-13878:
---

[~ljain] Thanks for the review! Uploaded patch rev 004.

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch, HDFS-13878.004.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-618) Separate DN registration from Heartbeat

2018-10-10 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-618:
---

 Summary: Separate DN registration from Heartbeat
 Key: HDDS-618
 URL: https://issues.apache.org/jira/browse/HDDS-618
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru


Currently, if SCM has to send a ReRegister command to a DN, it can only do so 
through the heartbeat response. Due to this, DN reregistration can take up to 2 
heartbeat intervals.

We should decouple registration requests from heartbeats, so that the DN can 
reregister as soon as SCM detects that the node is not registered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645564#comment-16645564
 ] 

Hadoop QA commented on HDDS-439:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 37s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-439 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-10 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Attachment: HDFS-13878.004.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch, HDFS-13878.004.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-10 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch, HDFS-13878.004.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2018-10-10 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645551#comment-16645551
 ] 

Konstantin Shvachko commented on HDFS-13977:


To add to [~xkrogen]'s description. There are two ways this huge batch of 
edits can accumulate:
# On NN restart, while recovering expired leases for unclosed files. If the NN 
dies while a lot of files are being actively written to, and the restart takes 
longer than 1 hour, all the leases will expire and the NN will close them, 
batching the closing transactions together.
# During normal operation, if QJM loses quorum, the NN will not be able to write 
transactions to the journal and will keep accumulating them in the buffer. 
Once the quorum is restored, the accumulated large batch will be sent to QJM.

The first case can be solved by forcing the DoubleBuffer structure to rotate at 
a shorter buffer length. In the second case the transactions have nowhere to be 
sent, so the NN should just stall and reject subsequent write requests. It 
would be good to keep the NN from crashing in this case, because the 
transactions have not been persisted yet; it is not an FS consistency matter, 
though, because clients were not notified.
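
As a rough sketch of the self-throttling idea for the first case (names and structure here are illustrative only, not the actual EditsDoubleBuffer code): the writer blocks once the current buffer reaches a hard limit, until the flushing thread flips the buffers.
{code:java}
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/** Toy double buffer with a hard limit; illustrative only. */
public class ThrottledDoubleBuffer {
  private final int hardLimitBytes;   // e.g. derived from ipc.maximum.data.length
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition flipped = lock.newCondition();
  private StringBuilder bufCurrent = new StringBuilder();
  private StringBuilder bufReady = new StringBuilder();

  public ThrottledDoubleBuffer(int hardLimitBytes) {
    this.hardLimitBytes = hardLimitBytes;
  }

  /** Called by the writer; blocks instead of letting the batch exceed the limit. */
  public void write(String edit) throws InterruptedException {
    lock.lock();
    try {
      while (bufCurrent.length() >= hardLimitBytes) {
        flipped.await();
      }
      bufCurrent.append(edit);
    } finally {
      lock.unlock();
    }
  }

  /** Called by the flushing thread: swap buffers and wake blocked writers. */
  public String flip() {
    lock.lock();
    try {
      StringBuilder toFlush = bufCurrent;
      bufCurrent = bufReady;
      bufCurrent.setLength(0);
      bufReady = toFlush;
      flipped.signalAll();
      return toFlush.toString();
    } finally {
      lock.unlock();
    }
  }
}
{code}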

> NameNode can kill itself if it tries to send too many txns to a QJM 
> simultaneously
> --
>
> Key: HDFS-13977
> URL: https://issues.apache.org/jira/browse/HDFS-13977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.7
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> h3. Problem & Logs
> We recently encountered an issue on a large cluster (running 2.7.4) in which 
> the NameNode killed itself because it was unable to communicate with the JNs 
> via QJM. We discovered that it was the result of the NameNode trying to send 
> a huge batch of over 1 million transactions to the JNs in a single RPC:
> {code:title=NameNode Logs}
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote 
> journal X.X.X.X: failed to
>  write txns 1000-11153636. Will try to write to this JN again after the 
> next log roll.
> ...
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1098ms 
> to send a batch of 1153637 edits (335886611 bytes) to remote journal 
> X.X.X.X:
> {code}
> {code:title=JournalNode Logs}
> INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8485: 
> readAndProcess from client X.X.X.X threw exception [java.io.IOException: 
> Requested data length 335886776 is longer than maximum configured RPC length 
> 67108864.  RPC came from X.X.X.X]
> java.io.IOException: Requested data length 335886776 is longer than maximum 
> configured RPC length 67108864.  RPC came from X.X.X.X
> at 
> org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
> at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
> at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:897)
> at 
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:753)
> at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
> {code}
> The JournalNodes rejected the RPC because it had a size well over the 64MB 
> default {{ipc.maximum.data.length}}.
> This was triggered by a huge number of files all hitting a hard lease timeout 
> simultaneously, causing the NN to force-close them all at once. This can be a 
> particularly nasty bug as the NN will attempt to re-send this same huge RPC 
> on restart, as it loads an fsimage which still has all of these open files 
> that need to be force-closed.
> h3. Proposed Solution
> To solve this we propose to modify {{EditsDoubleBuffer}} to add a "hard 
> limit" based on the value of {{ipc.maximum.data.length}}. When {{writeOp()}} 
> or {{writeRaw()}} is called, first check the size of {{bufCurrent}}. If it 
> exceeds the hard limit, block the writer until the buffer is flipped and 
> {{bufCurrent}} becomes {{bufReady}}. This gives some self-throttling to 
> prevent the NameNode from killing itself in this way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645530#comment-16645530
 ] 

Hanisha Koneru commented on HDDS-601:
-

Thanks [~ssulav].

When an SCM restarts and gets a heartbeat from an unregistered DN, it asks the 
DN to re-register. But in the meantime it also tries to process the reports 
from this unregistered DN (resulting in the error seen above). Posting a patch 
to fix this: SCM will only try to process reports from registered DNs.

On the Datanode side, after it receives a reregister command from SCM, it sends 
the registration request only in the next heartbeat. So, after an SCM restarts, 
it might take up to _2 heartbeatFrequency_ before the DN reregisters.
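
A minimal sketch of the guard in this direction, with placeholder names rather than the actual SCM classes:
{code:java}
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Placeholder sketch: only reports from registered datanodes are processed. */
public class RegisteredDatanodeFilter {
  private final Set<UUID> registered = ConcurrentHashMap.newKeySet();

  public void register(UUID datanodeId) {
    registered.add(datanodeId);
  }

  /**
   * Returns true only for registered datanodes; reports from unregistered
   * nodes are ignored (instead of raising SCMException) until the node
   * re-registers in response to the heartbeat reply.
   */
  public boolean shouldProcessReport(UUID datanodeId) {
    return registered.contains(datanodeId);
  }
}
{code}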

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered the below exception after I changed a configuration in ozone-site and 
> restarted SCM and Datanode:
> Ozone Cluster : 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-601) SCMException: No such datanode

2018-10-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-601:

Attachment: HDDS-601.001.patch

> SCMException: No such datanode
> --
>
> Key: HDDS-601
> URL: https://issues.apache.org/jira/browse/HDDS-601
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-601.001.patch
>
>
> Encountered the below exception after I changed a configuration in ozone-site and 
> restarted SCM and Datanode:
> Ozone Cluster : 1 SCM, 1 OM, 3 DNs
> {code:java}
> 2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: 
> HTTP server of SCM is listening at http://0.0.0.0:9876
> 2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
> hcatest-2.openstacklocal}
> 2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
> SCM receive heartbeat from unregistered datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> 2018-10-04 09:36:09,083 ERROR 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
> processing container report from datanode 
> 82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
> hcatest-3.openstacklocal}
> org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
>  at 
> org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-550) Serialize ApplyTransaction calls per Container in ContainerStateMachine

2018-10-10 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645506#comment-16645506
 ] 

Shashikant Banerjee commented on HDDS-550:
--

Thanks [~elek] for the confirmation.

As per offline discussion with [~jnp], replaced the ExecutorMap in 
XceiverServerRatis with a list. Fixed the CloseContainerHandlingByClient tests 
and removed the TestContainerStateMachine tests, which were basically verifying 
the synchronization between write chunks, putBlock and CloseContainer; with 
this Jira, there won't be any such synchronization required.

Other test failures are not related.
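
As a rough sketch of the idea (class and method names here are illustrative, not the actual ContainerStateMachine/XceiverServerRatis code): a fixed list of single-thread executors indexed by container id, so commits for one container apply in order while different containers proceed in parallel.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative only: serialize per-container commits on single-thread executors. */
public class PerContainerExecutors {
  private final List<ExecutorService> executors = new ArrayList<>();

  public PerContainerExecutors(int numExecutors) {
    for (int i = 0; i < numExecutors; i++) {
      executors.add(Executors.newSingleThreadExecutor());
    }
  }

  /** All commits for one container map to the same executor, so they apply in order. */
  public CompletableFuture<Void> applyTransaction(long containerId, Runnable commit) {
    int index = (int) Math.floorMod(containerId, (long) executors.size());
    return CompletableFuture.runAsync(commit, executors.get(index));
  }
}
{code}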

> Serialize ApplyTransaction calls per Container in ContainerStateMachine
> ---
>
> Key: HDDS-550
> URL: https://issues.apache.org/jira/browse/HDDS-550
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-550.001.patch, HDDS-550.002.patch, 
> HDDS-550.003.patch, HDDS-550.004.patch, HDDS-550.005.patch, HDDS-550.006.patch
>
>
> As part of handling Node failures in Ozone, the block commit needs to happen 
> in order inside ContainerStateMachine per container. With RATIS-341, it is 
> guaranteed that the applyTransaction calls for committing the write chunks 
> will be initiated only after the WriteStateMachine data for the write Chunk 
> operations has finished. 
> This Jira is aimed at making all the applyTransaction operations inside 
> ContainerStateMachine serial per container, with a single-thread Executor per 
> container handling all applyTransaction calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-550) Serialize ApplyTransaction calls per Container in ContainerStateMachine

2018-10-10 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645506#comment-16645506
 ] 

Shashikant Banerjee edited comment on HDDS-550 at 10/10/18 8:29 PM:


Thanks [~elek] for the confirmation.

As per offline discussion with [~jnp], replaced the ExecutorMap in 
XceiverServerRatis with a list in patch v6. Fixed the 
CloseContainerHandlingByClient tests and removed the TestContainerStateMachine 
tests, which were basically verifying the synchronization between write chunks, 
putBlock and CloseContainer; with this Jira, there won't be any such 
synchronization required.

Other test failures are not related.


was (Author: shashikant):
Thanks [~elek] for the confirmation.

As per offline discussion with [~jnp], replaced the ExecutorMap in 
XceiverServerRatis with a list. Fixed the CloseContainerHandlingByClient tests 
and removed the TestContainerStateMachine tests, which were basically verifying 
the synchronization between write chunks, putBlock and CloseContainer; with 
this Jira, there won't be any such synchronization required.

Other test failures are not related.

> Serialize ApplyTransaction calls per Container in ContainerStateMachine
> ---
>
> Key: HDDS-550
> URL: https://issues.apache.org/jira/browse/HDDS-550
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-550.001.patch, HDDS-550.002.patch, 
> HDDS-550.003.patch, HDDS-550.004.patch, HDDS-550.005.patch, HDDS-550.006.patch
>
>
> As part of handling Node failures in Ozone, the block commit needs to happen 
> in order inside ContainerStateMachine per container. With RATIS-341, it is 
> guaranteed that the applyTransaction calls for committing the write chunks 
> will be initiated only after the WriteStateMachine data for the write Chunk 
> operations has finished. 
> This Jira is aimed at making all the applyTransaction operations inside 
> ContainerStateMachine serial per container, with a single-thread Executor per 
> container handling all applyTransaction calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


