[jira] [Commented] (HDDS-519) Implement ListBucket REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646289#comment-16646289
 ] 

LiXin Ge commented on HDDS-519:
---

Tested in my local environment; it seems to work well in the normal case, but I'm 
not sure whether we should return the exception XML when operating on a volume 
that doesn't exist.

{noformat}
[root@EC130 bin]# curl -X GET http://127.0.0.1:9878/bad
<Error>
  <Code>NoSuchVolume</Code>
  <Message>The specified volume does not exist</Message>
  <Resource>Volume</Resource>
  <RequestId>27b329eb-66c8-4685-b002-ed57a35dbbc0</RequestId>
</Error>
{noformat}
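For reference, the error body above follows the AWS S3 error schema (Code, Message, Resource, RequestId). A minimal sketch of assembling such a body with Python's standard library; the helper name is hypothetical and this only models the shape of the response, not the actual Java endpoint:

```python
import xml.etree.ElementTree as ET

def build_error_xml(code, message, resource, request_id):
    # Assemble an S3-style <Error> document field by field.
    root = ET.Element("Error")
    ET.SubElement(root, "Code").text = code
    ET.SubElement(root, "Message").text = message
    ET.SubElement(root, "Resource").text = resource
    ET.SubElement(root, "RequestId").text = request_id
    return ET.tostring(root, encoding="unicode")

body = build_error_xml(
    "NoSuchVolume",
    "The specified volume does not exist",
    "Volume",
    "27b329eb-66c8-4685-b002-ed57a35dbbc0",
)
```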

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-519.000.patch
>
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.
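Per the AWS reference above, the GetService response is a ListAllMyBucketsResult document. A hedged sketch of what that serialization amounts to (the bucket names and timestamp are made up; the real endpoint would render Ozone's bucket listing for the volume):

```python
import xml.etree.ElementTree as ET

def list_buckets_xml(owner, buckets):
    # Render a ListAllMyBucketsResult for (name, creation_date) pairs.
    root = ET.Element("ListAllMyBucketsResult")
    ET.SubElement(ET.SubElement(root, "Owner"), "DisplayName").text = owner
    container = ET.SubElement(root, "Buckets")
    for name, created in buckets:
        bucket = ET.SubElement(container, "Bucket")
        ET.SubElement(bucket, "Name").text = name
        ET.SubElement(bucket, "CreationDate").text = created
    return ET.tostring(root, encoding="unicode")

xml_doc = list_buckets_xml("vol1", [("bucket1", "2018-10-11T00:00:00Z")])
```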



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-519) Implement ListBucket REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-519:
--
Attachment: (was: image-2018-10-11-19-01-26-516.png)

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-519.000.patch
>
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.






[jira] [Updated] (HDDS-519) Implement ListBucket REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-519:
--
Attachment: image-2018-10-11-19-01-26-516.png

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-519.000.patch
>
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.






[jira] [Updated] (HDDS-519) Implement ListBucket REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-519:
--
Attachment: HDDS-519.000.patch

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-519.000.patch
>
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.






[jira] [Updated] (HDDS-519) Implement ListBucket REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-519:
--
Status: Patch Available  (was: Open)

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-519.000.patch
>
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.






[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646276#comment-16646276
 ] 

LiXin Ge commented on HDDS-517:
---

Thanks very much for the reviews from [~elek] and [~bharatviswa]. Sorry for the 
delayed response, and I hope I haven't blocked any issues.

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.
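The two steps above can be sketched as a small handler. This is an illustrative Python model only (the store mapping, key names, and header set are assumptions), not the actual Java endpoint: HEAD returns status and metadata headers, never a body:

```python
def head_object(store, volume, bucket, key):
    # Step 1: look up the key's metadata under the volume/bucket.
    meta = store.get((volume, bucket, key))
    if meta is None:
        return 404, {}  # missing key -> 404 Not Found, no body
    # Step 2: return metadata headers only, no object data.
    return 200, {
        "Content-Length": str(meta["size"]),
        "Last-Modified": meta["mtime"],
        "ETag": meta["etag"],
    }

store = {
    ("vol1", "bucket", "key1"): {
        "size": 11,
        "mtime": "Tue, 09 Oct 2018 09:12:08 GMT",
        "etag": "abc123",
    }
}
```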






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Patch Available  (was: Open)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644430#comment-16644430
 ] 

LiXin Ge commented on HDDS-517:
---

Thanks [~bharatviswa] for reviewing this. The fixed patch 002 is uploaded.
 > 1. This patch needs to be rebased on top of trunk.
 Done.

> 2. The setting of x-amz-request-id is not required
 Done.

> 3. Why do we need this check .header("Content-Length", body == null ? 0 : 
> length), and also why do we need OutputStream for HEADObject?
 Actually it follows [~elek]'s advice: {{3. I think Content-Length should be 0 in 
case of missing body.}} I'm OK either way with keeping the OutputStream; patch 
002 removed it, but it would be better to hear from [~elek] in case there is some 
other consideration.

> 4. Few observations is content type and content length returned for us is zero
 Done.

> 5. And also I think no need to add x-amz-version-id by default
 Done.
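The Content-Length discussion in point 3 above reduces to a single branch. A hedged one-liner illustrating the behaviour suggested in the review (the function name is hypothetical): report 0 when there is no body, the actual length otherwise:

```python
def content_length_header(body):
    # Per the review: a missing body yields Content-Length 0.
    return "0" if body is None else str(len(body))
```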

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Open  (was: Patch Available)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Attachment: HDDS-517.002.patch

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Assigned] (HDDS-519) Implement ListBucket REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HDDS-519:
-

Assignee: LiXin Ge

> Implement ListBucket REST endpoint
> --
>
> Key: HDDS-519
> URL: https://issues.apache.org/jira/browse/HDDS-519
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
>
> You can also name it as GetService.
> See the AWS reference:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
> The List Bucket API needs the call to be handled at the root resource 
> (“/{volume}”).  
>  
> This implementation of the GET operation returns a list of all buckets owned 
> by the authenticated sender of the request.






[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643003#comment-16643003
 ] 

LiXin Ge commented on HDDS-517:
---

Sorry for the late response here, [~elek]; I just came back from holidays. Thanks 
for your review and the further explanation.
bq. It's not clear why do you need the s3ResMeta
It was just an attempt to make it act more like real AWS. To be honest, it's of 
small significance as you said, so I have removed the s3ResMeta in patch 001.
bq. Currently I didn't get 404 in case of missing key.
Patch 001 fixed this; I can see the 404 locally now, as shown below:
{noformat}
[root@EC130 bin]# curl -v -I  http://127.0.0.1:9878/vol1/bucket/key2
* About to connect() to 127.0.0.1 port 9878 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 9878 (#0)
> HEAD /vol1/bucket/key2 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.9.0 
> zlib/1.2.3 libidn/1.18 libssh2/1.2.2
> Host: 127.0.0.1:9878
> Accept: */*
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< Date: Tue, 09 Oct 2018 09:12:08 GMT
Date: Tue, 09 Oct 2018 09:12:08 GMT
{noformat}

[~bharatviswa] thanks for the reminder; I have rebased patch 001 on top of your 
nice work.

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Attachment: HDDS-517.001.patch

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Patch Available  (was: Open)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Open  (was: Patch Available)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-09-30 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Patch Available  (was: Open)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-09-30 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633326#comment-16633326
 ] 

LiXin Ge commented on HDDS-517:
---

Uploaded the initial patch:
1. The treatment of exceptions is temporary; I will come back to optimize it 
after HDDS-560 is merged.
2. Range retrieval requests are not fully supported in this patch, because 
Ozone's ClientProtocol doesn't support range reads yet.

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-09-30 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Attachment: HDDS-517.000.patch

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Commented] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-28 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16632710#comment-16632710
 ] 

LiXin Ge commented on HDDS-448:
---

[~ajayydv] Thanks for your help to review and commit this.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a {{TODO}}.






[jira] [Assigned] (HDDS-517) Implement HeadObject REST endpoint

2018-09-28 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HDDS-517:
-

Assignee: LiXin Ge

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-27 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631382#comment-16631382
 ] 

LiXin Ge commented on HDDS-401:
---

Thanks [~ajayydv] for reviewing and committing this.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

Patch 006 is rebased on HDDS-401.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a {{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.006.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a {{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a {{TODO}}.






[jira] [Commented] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629793#comment-16629793
 ] 

LiXin Ge commented on HDDS-448:
---

{noformat}
Shall we return null from catch clause and update the javadoc return statement 
as well?
{noformat}
Sure, that's done in the 005 patch. Thanks [~ajayydv] for your review.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, HDDS-448.005.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-26 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, HDDS-448.005.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-26 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, HDDS-448.005.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-26 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.005.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, HDDS-448.005.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Commented] (HDDS-368) all tests in TestOzoneRestClient failed due to "zh_CN" OS language

2018-09-26 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628324#comment-16628324
 ] 

LiXin Ge commented on HDDS-368:
---

Hi [~szetszwo], sorry for the late response here.
bq.Could you also try updating it? Mine is 1.8.0_172.
I have updated my Java version to the latest 1.8.0_181, but it didn't help; the 
test fails for the same reason:
{noformat}
hadoop@hadoop:/home/glx/hadoop$ mvn -v
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 
2018-06-18T02:33:14+08:00)
Maven home: /home/maven/apache-maven-3.5.4
Java version: 1.8.0_181, vendor: Oracle Corporation, runtime: 
/home/hadoop/liuhua/hadoop/jdk1.8.0_181/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-116-generic", arch: "amd64", family: "unix"
{noformat}



bq.Do you see a way to fix it?
Sorry, I haven't found a way to fix it yet. The clue is that when a "string" in 
the response data has a "string".length() that is 10 shorter than 
"string".getBytes().length, the response data loses exactly 10 bytes by the 
time it arrives at the client. That seems close to the root cause, but I'm not 
an HTTP/REST expert.
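To make the char-count vs. byte-count mismatch concrete, here is a minimal, self-contained Java sketch (the strings are made up for illustration; this is not the actual Ozone REST code). Any non-ASCII character encodes to more than one byte in UTF-8, so a Content-Length computed from String.length() undercounts the body and the tail gets truncated:

```java
import java.nio.charset.StandardCharsets;

public class LengthMismatch {
    public static void main(String[] args) {
        // ASCII-only: char count equals encoded byte count.
        String ascii = "volume-name";
        System.out.println(
            ascii.length() == ascii.getBytes(StandardCharsets.UTF_8).length); // true

        // Two CJK characters take 3 bytes each in UTF-8, so the byte length
        // exceeds String.length(). If a length header were computed from
        // String.length(), the body would be cut short by the difference.
        String mixed = "volume-名称"; // hypothetical volume name
        int charLen = mixed.length();                                  // 9
        int byteLen = mixed.getBytes(StandardCharsets.UTF_8).length;   // 13
        System.out.println(byteLen - charLen); // 4 bytes would be lost
    }
}
```

This matches the symptom described above: the number of bytes lost equals the byte/char length difference of the offending string.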

> all tests in TestOzoneRestClient failed due to "zh_CN" OS language
> --
>
> Key: HDDS-368
> URL: https://issues.apache.org/jira/browse/HDDS-368
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: LiXin Ge
>Priority: Critical
>  Labels: alpha2
>
> OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)
> java version: 1.8.0_111
> mvn: Apache Maven 3.3.9
> Default locale: zh_CN, platform encoding: UTF-8
> Test command: mvn test -Dtest=TestOzoneRestClient -Phdds
>  
>  All the tests in TestOzoneRestClient failed in my local machine with 
> exception like below, does it mean anybody who have runtime environment like 
> me can't run the Ozone Rest test now?
> {noformat}
> [ERROR] 
> testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) 
> Time elapsed: 0.01 s <<< ERROR!
> java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
> Unparseable date: "m, 28 1970 19:23:50 GMT"
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>  at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
>  at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
>  at 
> org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
> date: "m, 28 1970 19:23:50 GMT"
> at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
> at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
> at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
> at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
> at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
> at 
> org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
> ... 39 more
> {noformat}
> or like:
> {noformat}
> [ERROR] Failures:
> [ERROR]   TestOzoneRestClient.testDeleteKey
> Expected: exception with message a string containing "Lookup key failed, 
> error"
>  but: 

[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

Patch 004 is the same as patch 003; it was uploaded only to trigger Jenkins.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.004.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Commented] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627047#comment-16627047
 ] 

LiXin Ge commented on HDDS-448:
---

[~ajayydv] thanks for your further review; the changes are done in patch 003.

BTW: this patch still has a few lines that conflict with HDDS-401 (depending on 
which one is committed first), so I will rebase whichever lands later.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.003.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-25 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626883#comment-16626883
 ] 

LiXin Ge commented on HDDS-401:
---

Thanks [~ajayydv] for your comments.
{quote}SCMNodeManager#processDeadNode: Remove put op at L507 as that stat is 
already in map or replace it with putIfAbsent?
{quote}
I have removed the redundant put operation at L507 in patch 005.
{quote}Call to removeContainerReplica from DeadNodeHandler.onMessage should be 
wrapped in try catch.
{quote}
I'd be glad to create a follow-up Jira and work on it, but I'm afraid I didn't 
fully understand what you mean: the call to {{removeContainerReplica}} from 
{{DeadNodeHandler.onMessage}} is already wrapped in a try/catch. We see the 
stack trace in the log only because {{DeadNodeHandler.onMessage}} catches and 
prints the exception thrown by {{removeContainerReplica}}.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.005.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-25 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Open  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.004.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-21 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623438#comment-16623438
 ] 

LiXin Ge commented on HDDS-401:
---

Thanks [~ajayydv] for reviewing this.
bq. testStatisticsUpdate: Shall we emit the actual SCMEvents.DEAD_NODE event 
for datanode1 (L181).
Done in the 004 patch.
bq.Wondering if setting stats for dead node to 0 is better than removing its 
entry all together.
Agreed; the node stats will be recovered when the dead node comes back and 
registers/reports itself. Done in the 004 patch.
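The behavior agreed on above can be sketched as follows. This is a hypothetical simplification, not the actual SCMNodeManager code: the class name, the long[] stat layout, and the method names are made up for illustration. On a dead-node event the stats are zeroed rather than removed, and a later register/report simply overwrites the zeros:

```java
import java.util.concurrent.ConcurrentHashMap;

public class DeadNodeStats {
    // stat layout: {capacity, used, remaining} (simplified stand-in for SCMNodeStat)
    static final ConcurrentHashMap<String, long[]> stats = new ConcurrentHashMap<>();

    // Dead node: keep the entry but zero it, instead of removing it entirely.
    static void processDeadNode(String uuid) {
        stats.computeIfPresent(uuid, (id, s) -> new long[] {0L, 0L, 0L});
    }

    // A node report from a (re-)registered node overwrites the stored stats.
    static void processNodeReport(String uuid, long capacity, long used) {
        stats.put(uuid, new long[] {capacity, used, capacity - used});
    }

    public static void main(String[] args) {
        processNodeReport("dn-1", 100L, 40L);
        processDeadNode("dn-1");
        System.out.println(stats.get("dn-1")[0]); // 0 while the node is dead
        processNodeReport("dn-1", 100L, 40L);     // node came back and re-registered
        System.out.println(stats.get("dn-1")[2]); // 60 again
    }
}
```

Keeping a zeroed entry means cluster-wide aggregates drop immediately on node death, while the entry itself survives until the node reports again.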

 

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Open  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

Thanks [~ajayydv] for the review.
{quote}NodeStateMap:getNodeStat Do we need the read lock, since it is 
concurrentHashMap?
NodeStateMap: setNodeStat/removeNodeStat: Do we need lock here?
{quote}
There were two simple reasons to add a lock here in the beginning: 1. to keep 
the format consistent with the operations on {{stateMap}}; 2. each function 
performed two operations (containsKey, then the get/remove), and a lock ensured 
the pair saw a consistent view of the map.
But, to be honest, I prefer to get rid of the lock and simplify these 
functions. Done in the 002 patch.
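The simplification above can be sketched like this. NodeStatMapSketch and the long-valued stat are hypothetical stand-ins for NodeStateMap and SCMNodeStat, not the actual API: each ConcurrentHashMap call is atomic on its own, so collapsing the containsKey + get/remove pair into a single call removes the need for the lock entirely:

```java
import java.util.concurrent.ConcurrentHashMap;

public class NodeStatMapSketch {
    // Simplified stand-in for NodeStateMap's stat map (stat reduced to a long).
    private final ConcurrentHashMap<String, Long> stats = new ConcurrentHashMap<>();

    // containsKey + get is a check-then-act pair: without an external lock,
    // another thread could remove the entry between the two calls. A single
    // get() is atomic and returns null when absent, so no lock is needed.
    Long getNodeStat(String uuid) {
        return stats.get(uuid);
    }

    // remove() already returns the previous value (or null), so the
    // containsKey pre-check can be dropped here as well.
    Long removeNodeStat(String uuid) {
        return stats.remove(uuid);
    }

    public static void main(String[] args) {
        NodeStatMapSketch m = new NodeStatMapSketch();
        m.stats.put("dn-1", 42L);
        System.out.println(m.getNodeStat("dn-1"));    // 42
        System.out.println(m.removeNodeStat("dn-1")); // 42
        System.out.println(m.getNodeStat("dn-1"));    // null
    }
}
```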
{quote}NodeStateManager:L425 Lets propagate back the NodeNotFoundException.
{quote}
The {{getNodeStat}} interface in {{NodeManager}} doesn't throw an exception, 
and propagating the NodeNotFoundException would have a butterfly effect: many 
functions would need to be modified. Since the callers of {{getNodeStat}} 
handle the null case well, can we leave it as is, or open another Jira if 
needed?
{quote}SCMNodeManager:L313 Shall we move put operation to L298?
{quote}
Maybe moving it to L315 is better? If it is moved to L298, {{nodeStats}} can't 
be updated based on the report information. Moving it to L315 guarantees the 
update happens regardless of the value of 
{{nodeReport.getStorageReportCount()}}. I moved it to L315 in patch 002; please 
check whether it matches your expectation.

All the other comments have been fixed in patch 002.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.002.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch
>
>
> This issue try to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as something \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-21 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> noted by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-368) all tests in TestOzoneRestClient failed due to "zh_CN" OS language

2018-09-20 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621869#comment-16621869
 ] 

LiXin Ge edited comment on HDDS-368 at 9/20/18 11:44 AM:
-

Thanks [~szetszwo] for the testing. I have updated my Maven version to 3.5.4 as 
you advised, but it didn't help; I can still reproduce this issue every time by 
setting the locale to zh_CN. Maybe this problem is a mix of the locale and 
something else, such as the OS.

FYI: once a string transferred over HTTP contains a Chinese character (or any 
character outside ASCII letters and digits), "string".length() will be shorter 
than "string".getBytes().length, so the data gets truncated during transfer and 
the error occurs.
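The char-count vs byte-count mismatch described above can be illustrated with a minimal, self-contained sketch (the sample strings here are hypothetical, not taken from the failing test):

```java
import java.nio.charset.StandardCharsets;

public class LengthDemo {
    public static void main(String[] args) {
        String ascii = "hello";
        String chinese = "日期"; // two Chinese characters

        // For pure ASCII, char count equals UTF-8 byte count.
        System.out.println(ascii.length());                                // 5
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length); // 5

        // Each of these Chinese characters is 1 char but 3 bytes in UTF-8.
        // Using length() where a byte count (e.g. Content-Length) is
        // expected would truncate the payload.
        System.out.println(chinese.length());                                // 2
        System.out.println(chinese.getBytes(StandardCharsets.UTF_8).length); // 6
    }
}
```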
{noformat}
hadoop@hadoop:/home/glx/hadoop$ mvn -v
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 
2018-06-18T02:33:14+08:00)
Maven home: /home/maven/apache-maven-3.5.4
Java version: 1.8.0_111, vendor: Oracle Corporation, runtime: 
/home/hadoop/liuhua/hadoop/jdk1.8.0_111/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-116-generic", arch: "amd64", family: "unix"
{noformat}
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient)  Time 
elapsed: 0.013 s  <<< ERROR!
com.fasterxml.jackson.core.io.JsonEOFException:
Unexpected end-of-input: expected close marker for Object (start marker at 
[Source: 
(String)"{"httpCode":400,"shortMessage":"badDate","resource":"ed709d90-601a-4039-b515-0a5c89b74f6d","message":"Unparseable
 date: \"�, 30 \u\b 1970 03:15:11 
GMT\"","requestID":"e9a66c77-6b8c-4f58-821b-c3e2fbd04758","hostName":"hadoop""; 
line: 1, column: 1])
 at [Source: 
(String)"{"httpCode":400,"shortMessage":"badDate","resource":"ed709d90-601a-4039-b515-0a5c89b74f6d","message":"Unparseable
 date: \"�, 30 \u\b 1970 03:15:11 
GMT\"","requestID":"e9a66c77-6b8c-4f58-821b-c3e2fbd04758","hostName":"hadoop""; 
line: 1, column: 459]
at 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:588)
at 
com.fasterxml.jackson.core.base.ParserBase._handleEOF(ParserBase.java:485)
at 
com.fasterxml.jackson.core.base.ParserBase._eofAsNextChar(ParserBase.java:497)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2332)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:646)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:87)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:856)
at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:249)
at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:207)
{noformat}

{noformat}
[ERROR] Errors:
[ERROR]   TestOzoneRestClient.testAddBucketAcl:253 » JsonEOF Unexpected 
end-of-input: ex...
[ERROR]   TestOzoneRestClient.testCreateBucket:169 » JsonEOF Unexpected 
end-of-input: ex...
[ERROR]   TestOzoneRestClient.testCreateBucketWithAcls:215 » SocketTimeout Read 
timed ou...
[ERROR]   TestOzoneRestClient.testCreateBucketWithAllArgument:234 » 
SocketTimeout Read t...
[ERROR]   TestOzoneRestClient.testCreateBucketWithStorageType:196 » JsonEOF 
Unexpected e...
[ERROR]   TestOzoneRestClient.testCreateBucketWithVersioning:181 » JsonEOF 
Unexpected en...
[ERROR]   TestOzoneRestClient.testCreateVolume:87 » SocketTimeout Read timed out
[ERROR]   TestOzoneRestClient.testCreateVolumeWithOwner:98 » SocketTimeout Read 
timed ou...
[ERROR]   TestOzoneRestClient.testCreateVolumeWithQuota:110 » JsonEOF 
Unexpected end-of-...
[ERROR]   TestOzoneRestClient.testGetKeyDetails:421 » JsonEOF Unexpected 
end-of-input: e...
[ERROR]   TestOzoneRestClient.testPutKey:342 » JsonEOF Unexpected end-of-input: 
expected...
[ERROR]   TestOzoneRestClient.testRemoveBucketAcl:276 » JsonEOF Unexpected 
end-of-input:...
[ERROR]   TestOzoneRestClient.testRenameKey:394 » JsonEOF Unexpected 
end-of-input: expec...
[ERROR]   TestOzoneRestClient.testSetBucketStorageType:308 » SocketTimeout Read 
timed ou...
[ERROR]   TestOzoneRestClient.testSetBucketVersioning:293 » SocketTimeout Read 
timed out
[ERROR]   TestOzoneRestClient.testSetVolumeOwner:135 » SocketTimeout Read timed 
out
[ERROR]   TestOzoneRestClient.testSetVolumeQuota:145 » JsonEOF Unexpected 
end-of-input: ...
[ERROR]   TestOzoneRestClient.testVolumeAlreadyExist:121 » 

[jira] [Commented] (HDDS-368) all tests in TestOzoneRestClient failed due to "zh_CN" OS language

2018-09-20 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621869#comment-16621869
 ] 

LiXin Ge commented on HDDS-368:
---

Thanks [~szetszwo] for the testing. I have updated my Maven version to 3.5.4 as 
you advised, but it didn't help; I can still reproduce this issue every time by 
setting the locale to zh_CN. Maybe this problem is a mix of the locale and 
something else, such as the OS.

FYI: once a string transferred over HTTP contains a Chinese character (or any 
character outside ASCII letters and digits), "string".length() will be shorter 
than "string".getBytes().length, so the data gets truncated during transfer and 
the error occurs.
{noformat}
hadoop@hadoop:/home/glx/hadoop$ mvn -v
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 
2018-06-18T02:33:14+08:00)
Maven home: /home/maven/apache-maven-3.5.4
Java version: 1.8.0_111, vendor: Oracle Corporation, runtime: 
/home/hadoop/liuhua/hadoop/jdk1.8.0_111/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-116-generic", arch: "amd64", family: "unix"
{noformat}
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient)  Time 
elapsed: 0.013 s  <<< ERROR!
com.fasterxml.jackson.core.io.JsonEOFException:
Unexpected end-of-input: expected close marker for Object (start marker at 
[Source: 
(String)"{"httpCode":400,"shortMessage":"badDate","resource":"ed709d90-601a-4039-b515-0a5c89b74f6d","message":"Unparseable
 date: \"�, 30 \u\b 1970 03:15:11 
GMT\"","requestID":"e9a66c77-6b8c-4f58-821b-c3e2fbd04758","hostName":"hadoop""; 
line: 1, column: 1])
 at [Source: 
(String)"{"httpCode":400,"shortMessage":"badDate","resource":"ed709d90-601a-4039-b515-0a5c89b74f6d","message":"Unparseable
 date: \"�, 30 \u\b 1970 03:15:11 
GMT\"","requestID":"e9a66c77-6b8c-4f58-821b-c3e2fbd04758","hostName":"hadoop""; 
line: 1, column: 459]
at 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:588)
at 
com.fasterxml.jackson.core.base.ParserBase._handleEOF(ParserBase.java:485)
at 
com.fasterxml.jackson.core.base.ParserBase._eofAsNextChar(ParserBase.java:497)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2332)
at 
com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:646)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:87)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:856)
at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:249)
at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:207)
{noformat}

> all tests in TestOzoneRestClient failed due to "zh_CN" OS language
> --
>
> Key: HDDS-368
> URL: https://issues.apache.org/jira/browse/HDDS-368
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: LiXin Ge
>Priority: Critical
>  Labels: alpha2
>
> OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)
> java version: 1.8.0_111
> mvn: Apache Maven 3.3.9
> Default locale: zh_CN, platform encoding: UTF-8
> Test command: mvn test -Dtest=TestOzoneRestClient -Phdds
>  
> All the tests in TestOzoneRestClient failed on my local machine with an 
> exception like the one below. Does that mean anybody with a runtime 
> environment like mine can't run the Ozone REST tests now?
> {noformat}
> [ERROR] 
> testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) 
> Time elapsed: 0.01 s <<< ERROR!
> java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
> Unparseable date: "m, 28 1970 19:23:50 GMT"
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> 

[jira] [Commented] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-20 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621691#comment-16621691
 ] 

LiXin Ge commented on HDDS-448:
---

Hi [~ajayydv], patch 001 is rebased on the latest trunk. This patch has a few 
lines that conflict with HDDS-401 (which one needs rebasing actually depends on 
which gets committed first); I will rebase whichever goes in later.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> noted by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.001.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> noted by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> noted by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> noted by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-20 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621674#comment-16621674
 ] 

LiXin Ge commented on HDDS-401:
---

Hi [~ajayydv], the patch has been rebased on the latest trunk now.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Open  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-20 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.003.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-20 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621560#comment-16621560
 ] 

LiXin Ge commented on HDDS-458:
---

Thanks [~elek] for reviewing, testing and committing this.

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-458.000.patch
>
>
>  
> numberofKeys field is 0 for all containers even when keys are present
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone scmcli list 
> --count=40 --start=1 | grep numberOfKeys
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,{noformat}
>  
>  
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone oz key list 
> /fs-volume/fs-bucket/ | grep keyName
> 2018-09-13 19:10:33,502 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
>  "keyName" : "15GBFILE"
>  "keyName" : "15GBFILE1"
>  "keyName" : "1GB1"
>  "keyName" : "1GB10"
>  "keyName" : "1GB11"
>  "keyName" : "1GB12"
>  "keyName" : "1GB13"
>  "keyName" : "1GB14"
>  "keyName" : "1GB15"
>  "keyName" : "1GB2"
>  "keyName" : "1GB3"
>  "keyName" : "1GB4"
>  "keyName" : "1GB5"
>  "keyName" : "1GB6"
>  "keyName" : "1GB7"
>  "keyName" : "1GB8"
>  "keyName" : "1GB9"
>  "keyName" : "1GBsecond1"
>  "keyName" : "1GBsecond10"
>  "keyName" : "1GBsecond11"
>  "keyName" : "1GBsecond12"
>  "keyName" : "1GBsecond13"
>  "keyName" : "1GBsecond14"
>  "keyName" : "1GBsecond15"
>  "keyName" : "1GBsecond2"
>  "keyName" : "1GBsecond3"
>  "keyName" : "1GBsecond4"
>  "keyName" : "1GBsecond5"
>  "keyName" : "1GBsecond6"
>  "keyName" : "1GBsecond7"
>  "keyName" : "1GBsecond8"
>  "keyName" : "1GBsecond9"
>  "keyName" : "2GBFILE"
>  "keyName" : "2GBFILE2"
>  "keyName" : "50GBFILE2"
>  "keyName" : "passwd1"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-19 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620289#comment-16620289
 ] 

LiXin Ge edited comment on HDDS-458 at 9/19/18 8:37 AM:


The patch fixes the issue and works well on my local machine:
{noformat}
[root@EC130 bin]# ./ozone scmcli list --count=40 --start=1|grep numberOfKeys
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 0,
{noformat}
{noformat}
  [root@EC130 bin]# ./ozone sh key list /vol1/bucket/ | grep keyName
  "keyName" : "key1"
  "keyName" : "key2"
{noformat}
 
The key count is updated via the container report, which depends on the report 
interval, so there will be some delay before the real-time {{numberOfKeys}} 
shows up in scmcli; just wait a moment.


was (Author: gelixin):
The patch fixed the issue and works well in my local:
{noformat}
[root@EC130 bin]# ./ozone scmcli list --count=40 --start=1|grep numberOfKeys
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 0,
{noformat}

{noformat}
  [root@EC130 bin]# ./ozone sh key list /vol1/bucket/ | grep keyName
  "keyName" : "key1"
  "keyName" : "key2"
{noformat}
 

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-458.000.patch
>
>
>  
> numberofKeys field is 0 for all containers even when keys are present
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone scmcli list 
> --count=40 --start=1 | grep numberOfKeys
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,{noformat}
>  
>  
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone oz key list 
> /fs-volume/fs-bucket/ | grep keyName
> 2018-09-13 19:10:33,502 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
>  "keyName" : "15GBFILE"
>  "keyName" : "15GBFILE1"
>  "keyName" : "1GB1"
>  "keyName" : "1GB10"
>  "keyName" : "1GB11"
>  "keyName" : "1GB12"
>  "keyName" : "1GB13"
>  "keyName" : "1GB14"
>  "keyName" : "1GB15"
>  "keyName" : "1GB2"
>  "keyName" : "1GB3"
>  "keyName" : "1GB4"
>  "keyName" : "1GB5"
>  "keyName" : "1GB6"
>  "keyName" : "1GB7"
>  "keyName" : "1GB8"
>  "keyName" : "1GB9"
>  "keyName" : "1GBsecond1"
>  "keyName" : "1GBsecond10"
>  "keyName" : "1GBsecond11"
>  "keyName" : "1GBsecond12"
>  "keyName" : "1GBsecond13"
>  "keyName" : "1GBsecond14"
>  "keyName" : "1GBsecond15"
>  "keyName" : "1GBsecond2"
>  "keyName" : "1GBsecond3"
>  "keyName" : "1GBsecond4"
>  "keyName" : "1GBsecond5"
>  "keyName" : "1GBsecond6"
>  "keyName" : "1GBsecond7"
>  "keyName" : "1GBsecond8"
>  "keyName" : "1GBsecond9"
>  "keyName" : "2GBFILE"
>  "keyName" : "2GBFILE2"
>  "keyName" : "50GBFILE2"
>  "keyName" : "passwd1"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-19 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620289#comment-16620289
 ] 

LiXin Ge commented on HDDS-458:
---

The patch fixes the issue and works well on my local machine:
{noformat}
[root@EC130 bin]# ./ozone scmcli list --count=40 --start=1|grep numberOfKeys
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 1,
  "numberOfKeys" : 0,
  "numberOfKeys" : 0,
{noformat}

{noformat}
  [root@EC130 bin]# ./ozone sh key list /vol1/bucket/ | grep keyName
  "keyName" : "key1"
  "keyName" : "key2"
{noformat}
 

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-458.000.patch
>
>
>  
> numberofKeys field is 0 for all containers even when keys are present
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone scmcli list 
> --count=40 --start=1 | grep numberOfKeys
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,{noformat}
>  
>  
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone oz key list 
> /fs-volume/fs-bucket/ | grep keyName
> 2018-09-13 19:10:33,502 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
>  "keyName" : "15GBFILE"
>  "keyName" : "15GBFILE1"
>  "keyName" : "1GB1"
>  "keyName" : "1GB10"
>  "keyName" : "1GB11"
>  "keyName" : "1GB12"
>  "keyName" : "1GB13"
>  "keyName" : "1GB14"
>  "keyName" : "1GB15"
>  "keyName" : "1GB2"
>  "keyName" : "1GB3"
>  "keyName" : "1GB4"
>  "keyName" : "1GB5"
>  "keyName" : "1GB6"
>  "keyName" : "1GB7"
>  "keyName" : "1GB8"
>  "keyName" : "1GB9"
>  "keyName" : "1GBsecond1"
>  "keyName" : "1GBsecond10"
>  "keyName" : "1GBsecond11"
>  "keyName" : "1GBsecond12"
>  "keyName" : "1GBsecond13"
>  "keyName" : "1GBsecond14"
>  "keyName" : "1GBsecond15"
>  "keyName" : "1GBsecond2"
>  "keyName" : "1GBsecond3"
>  "keyName" : "1GBsecond4"
>  "keyName" : "1GBsecond5"
>  "keyName" : "1GBsecond6"
>  "keyName" : "1GBsecond7"
>  "keyName" : "1GBsecond8"
>  "keyName" : "1GBsecond9"
>  "keyName" : "2GBFILE"
>  "keyName" : "2GBFILE2"
>  "keyName" : "50GBFILE2"
>  "keyName" : "passwd1"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-19 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-458:
--
Status: Patch Available  (was: Open)

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-458.000.patch
>
>
>  
> numberofKeys field is 0 for all containers even when keys are present
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone scmcli list 
> --count=40 --start=1 | grep numberOfKeys
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,
>  "numberOfKeys" : 0,{noformat}
>  
>  
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone oz key list 
> /fs-volume/fs-bucket/ | grep keyName
> 2018-09-13 19:10:33,502 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
>  "keyName" : "15GBFILE"
>  "keyName" : "15GBFILE1"
>  "keyName" : "1GB1"
>  "keyName" : "1GB10"
>  "keyName" : "1GB11"
>  "keyName" : "1GB12"
>  "keyName" : "1GB13"
>  "keyName" : "1GB14"
>  "keyName" : "1GB15"
>  "keyName" : "1GB2"
>  "keyName" : "1GB3"
>  "keyName" : "1GB4"
>  "keyName" : "1GB5"
>  "keyName" : "1GB6"
>  "keyName" : "1GB7"
>  "keyName" : "1GB8"
>  "keyName" : "1GB9"
>  "keyName" : "1GBsecond1"
>  "keyName" : "1GBsecond10"
>  "keyName" : "1GBsecond11"
>  "keyName" : "1GBsecond12"
>  "keyName" : "1GBsecond13"
>  "keyName" : "1GBsecond14"
>  "keyName" : "1GBsecond15"
>  "keyName" : "1GBsecond2"
>  "keyName" : "1GBsecond3"
>  "keyName" : "1GBsecond4"
>  "keyName" : "1GBsecond5"
>  "keyName" : "1GBsecond6"
>  "keyName" : "1GBsecond7"
>  "keyName" : "1GBsecond8"
>  "keyName" : "1GBsecond9"
>  "keyName" : "2GBFILE"
>  "keyName" : "2GBFILE2"
>  "keyName" : "50GBFILE2"
>  "keyName" : "passwd1"{noformat}
>  
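Whatever the actual fix in the attached patch does, the invariant it needs to restore is that a container's key count tracks key commits and deletes. A toy model of that invariant in plain Java (the class and method names here are invented for illustration, not Ozone code):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyCountSketch {

  // containerId -> number of keys; must be updated on every key commit/delete.
  private final Map<Long, Long> numberOfKeys = new HashMap<>();

  void onKeyCommitted(long containerId) {
    // Increment the container's count when a key is committed.
    numberOfKeys.merge(containerId, 1L, Long::sum);
  }

  void onKeyDeleted(long containerId) {
    // Decrement on delete so the reported count stays in sync.
    numberOfKeys.merge(containerId, -1L, Long::sum);
  }

  long keyCount(long containerId) {
    return numberOfKeys.getOrDefault(containerId, 0L);
  }
}
```

The bug above corresponds to the commit/delete hooks never updating the map, so every lookup returns the default of 0.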



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-19 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-458:
--
Attachment: HDDS-458.000.patch

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-458.000.patch
>
>
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-16 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HDDS-458:
-

Assignee: LiXin Ge

> numberofKeys is 0 for all containers even when keys are present
> ---
>
> Key: HDDS-458
> URL: https://issues.apache.org/jira/browse/HDDS-458
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: newbie
>
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-13 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614401#comment-16614401
 ] 

LiXin Ge commented on HDDS-369:
---

Hi [~elek], I have created HDDS-449 to add the null check as we discussed; 
please help review it when you have time, thanks.

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers on that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start the replication.
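As a rough model of the idea described above, removing a dead node's replicas and flagging containers that drop below the replication factor could look like this (plain Java collections standing in for the SCM types; the names and the fixed factor of 3 are assumptions for illustration, not the actual implementation):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReplicaCleanupSketch {

  static final int REPLICATION_FACTOR = 3;

  // containerId -> datanodes currently holding a replica (simplified model).
  // Removing the dead node's replicas exposes containers that are now
  // under-replicated, so the replication manager can act on them.
  static Set<Long> removeDeadNodeReplicas(Map<Long, Set<String>> replicas,
                                          String deadNode) {
    Set<Long> underReplicated = new HashSet<>();
    for (Map.Entry<Long, Set<String>> e : replicas.entrySet()) {
      // remove() returns true only if the dead node actually held a replica.
      if (e.getValue().remove(deadNode)
          && e.getValue().size() < REPLICATION_FACTOR) {
        underReplicated.add(e.getKey());
      }
    }
    return underReplicated;
  }
}
```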



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614395#comment-16614395
 ] 

LiXin Ge commented on HDDS-449:
---

There is no need to add a test for this easy fix, and the failed test passes on 
my local machine; it's not related.

> Add a NULL check to protect DeadNodeHandler#onMessage
> -
>
> Key: HDDS-449
> URL: https://issues.apache.org/jira/browse/HDDS-449
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: EasyFix
> Attachments: HDDS-449.000.patch
>
>
> Add a NULL check to protect against the situation below (which may only 
> happen in the case of a unit test):
>  1. A new datanode registers to SCM.
>  2. There is no container allocated on the new datanode temporarily.
>  3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
>  4. In {{DeadNodeHandler#onMessage}}, the lookup in {{node2ContainerMap}} 
> finds nothing and {{containers}} will be {{NULL}}.
>  5. A NullPointerException will be thrown in the following iteration over 
> {{containers}}, like:
> {noformat}
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 
> s <<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
> [ERROR] 
> testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  
> Time elapsed: 0.33 s  <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
> at 
> org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
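The five steps above boil down to one missing guard: when the dead node has no entry in {{node2ContainerMap}}, the handler must return early instead of iterating a {{NULL}} collection. A self-contained sketch of that pattern (plain Java collections standing in for the SCM types; the class and method names here are invented for illustration, not Ozone's API):

```java
import java.util.Map;
import java.util.Set;

public class DeadNodeGuardSketch {

  // Simplified stand-in for node2ContainerMap: node id -> container ids.
  // Returns how many container entries were found for the dead node;
  // the real handler would update replica state for each of them.
  static int onDeadNode(Map<String, Set<Long>> node2ContainerMap,
                        String nodeId) {
    Set<Long> containers = node2ContainerMap.remove(nodeId);
    if (containers == null) {
      // The node registered but never received a container: nothing to do.
      // Without this guard, iterating over `containers` throws an NPE.
      return 0;
    }
    int handled = 0;
    for (Long containerId : containers) {
      handled++; // placeholder for per-container replica cleanup
    }
    return handled;
  }
}
```

Without the null check, calling `onDeadNode` for a container-less node would fail with the same NullPointerException shown in the test output above.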



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-449:
--
Labels: EasyFix  (was: )

> Add a NULL check to protect DeadNodeHandler#onMessage
> -
>
> Key: HDDS-449
> URL: https://issues.apache.org/jira/browse/HDDS-449
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: EasyFix
> Attachments: HDDS-449.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-449:
--
Status: Patch Available  (was: Open)

> Add a NULL check to protect DeadNodeHandler#onMessage
> -
>
> Key: HDDS-449
> URL: https://issues.apache.org/jira/browse/HDDS-449
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
> Attachments: HDDS-449.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-449:
--
Attachment: HDDS-449.000.patch

> Add a NULL check to protect DeadNodeHandler#onMessage
> -
>
> Key: HDDS-449
> URL: https://issues.apache.org/jira/browse/HDDS-449
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
> Attachments: HDDS-449.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-449:
-

 Summary: Add a NULL check to protect DeadNodeHandler#onMessage
 Key: HDDS-449
 URL: https://issues.apache.org/jira/browse/HDDS-449
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge
Assignee: LiXin Ge


Add a NULL check to protect against the situation below (which may only happen 
in the case of a unit test):
 1. A new datanode registers to SCM.
 2. There is no container allocated on the new datanode temporarily.
 3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
 4. In {{DeadNodeHandler#onMessage}}, the lookup in 
{{node2ContainerMap}} finds nothing and {{containers}} will be {{NULL}}.
 5. A NullPointerException will be thrown in the following iteration over 
{{containers}}, like:
{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 s 
<<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
[ERROR] 
testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  Time 
elapsed: 0.33 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
at 
org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.000.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by the NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by the NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HDDS-448:
-

Assignee: LiXin Ge

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by the NodeStatemanager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-448:
-

 Summary: Move NodeStat to NodeStatemanager from SCMNodeManager.
 Key: HDDS-448
 URL: https://issues.apache.org/jira/browse/HDDS-448
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge


This issue tries to make the SCMNodeManager clear and clean, as the stat 
information should be kept by the NodeStatemanager (NodeStateMap). It's also 
described by [~nandakumar131] as a \{{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-13 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613075#comment-16613075
 ] 

LiXin Ge commented on HDDS-434:
---

[~anu] Thanks for mentioning me, I'm willing to do this.

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> S3 REST API is the de facto standard for object stores. Many external tools 
> already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the s3 API using the internal RPC calls.
> Some parts of the implementation are very straightforward: we need a new 
> service with the usual REST stack and we need to implement the most common 
> GET/POST/PUT calls. Some others (authorization, multi-part upload) are more 
> tricky.
> Here I suggest to start with an evaluation: first we can implement a skeleton 
> service which could support read-only requests without authorization, and we 
> can define a proper specification for the upload part / authorization during 
> the work.
> As of now the gateway service could be a new standalone application (eg. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin similar to the 
> existing object store plugin. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-12 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612943#comment-16612943
 ] 

LiXin Ge commented on HDDS-369:
---

Thanks for the comments [~elek].
Actually I found this issue when I added a unit test for HDDS-401 (you can take 
a look at the 
[patch|https://issues.apache.org/jira/secure/attachment/12939217/HDDS-401.002.patch]
 if interested). I found that I must call {{registerReplicas}} on the target 
datanode (which is not related to my test case), or I will get an exception 
like this:
{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 s 
<<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
[ERROR] 
testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  Time 
elapsed: 0.33 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
at 
org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{noformat}
The exception is not thrown by {{node2ContainerMap}} but by 
{{DeadNodeHandler#onMessage}}. Even so, if it only impacts the unit test as 
you said, we can just let it be.

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers on that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start the replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610198#comment-16610198
 ] 

LiXin Ge edited comment on HDDS-401 at 9/11/18 7:44 AM:


Patch 002 removes the stats of the dead node from {{nodeStats}} while updating 
the SCM stats; the stats will be added to {{nodeStats}} again when the dead node 
comes alive and gets registered. Checkstyle warnings in patch 001 are not related.
 Hi [~hanishakoneru], [~ajayydv], could you help review this when you have a 
chance, thanks.


was (Author: gelixin):
patch 002 removes the stats of dead node from {{nodeStats}} while updating the 
SCM stats, the stats will be added to {{nodeStats}} again when the dead node 
come alive and get registered. Checkstyle warnings in patch 001 are not related.
 Hi [~hanishakoneru] [~ajayk5], could you help to review this when you have 
chance, thanks.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.
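A minimal model of the stats bookkeeping described in this issue — drop a dead node's contribution from the aggregate, and pick it up again on re-registration. The class and fields here are invented stand-ins, not the real {{SCMNodeStat}}/{{nodeStats}} types:

```java
import java.util.HashMap;
import java.util.Map;

public class NodeStatsSketch {

  // nodeId -> used bytes (the real SCMNodeStat tracks capacity/used/remaining).
  private final Map<String, Long> nodeStats = new HashMap<>();

  void register(String nodeId, long usedBytes) {
    // Stats come back when the node re-registers after recovery.
    nodeStats.put(nodeId, usedBytes);
  }

  void onDeadNode(String nodeId) {
    // Drop the node's stats so cluster totals stay accurate.
    nodeStats.remove(nodeId);
  }

  long clusterUsed() {
    long total = 0;
    for (long used : nodeStats.values()) {
      total += used;
    }
    return total;
  }
}
```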



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610198#comment-16610198
 ] 

LiXin Ge edited comment on HDDS-401 at 9/11/18 7:41 AM:


patch 002 removes the stats of dead node from {{nodeStats}} while updating the 
SCM stats, the stats will be added to {{nodeStats}} again when the dead node 
come alive and get registered. Checkstyle warnings in patch 001 are not related.
 Hi [~hanishakoneru] [~ajayk5], could you help to review this when you have 
chance, thanks.


was (Author: gelixin):
patch 002 removes the stats of dead node from {{nodeStats}} while updating the 
SCM stats, the stats will be added to nodeStats when the dead node come 
alive and get registered. Checkstyle warnings in patch 001 are not related.
 Hi [~hanishakoneru] [~ajayk5], could you help to review this when you have 
chance, thanks.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Comment Edited] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610198#comment-16610198
 ] 

LiXin Ge edited comment on HDDS-401 at 9/11/18 7:40 AM:


Patch 002 removes the stats of a dead node from {{nodeStats}} while updating the 
SCM stats; the stats will be added to nodeStats when the dead node comes back 
alive and re-registers. The checkstyle warnings in patch 001 are not related.
 Hi [~hanishakoneru] [~ajayk5], could you help review this when you have a 
chance? Thanks.


was (Author: gelixin):
patch 002 removes the stats of dead node from {{nodeStats}} while update the 
SCM stats, the checkstyle  warnings are not related.
Hi [~hanishakoneru] [~ajayk5], could you help to review this when you have 
chance, thanks.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610198#comment-16610198
 ] 

LiXin Ge commented on HDDS-401:
---

Patch 002 removes the stats of a dead node from {{nodeStats}} while updating the 
SCM stats; the checkstyle warnings are not related.
Hi [~hanishakoneru] [~ajayk5], could you help review this when you have a 
chance? Thanks.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.002.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-11 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Open  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609112#comment-16609112
 ] 

LiXin Ge commented on HDDS-401:
---

Due to a conflict with HDDS-400, patch 001 is rebased.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.001.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Open  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Status: Patch Available  (was: Open)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-10 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-401:
--
Attachment: HDDS-401.000.patch

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-08 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607934#comment-16607934
 ] 

LiXin Ge commented on HDDS-369:
---

Hi [~elek], sorry for the late review. Is there a chance of a 
NullPointerException in {{DeadNodeHandler#onMessage}} in the situation below?
1. A new datanode registers to SCM.
2. No container has been allocated on the new datanode yet.
3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
4. In {{onMessage}}, the lookup in {{node2ContainerMap}} finds nothing, so 
{{containers}} will be {{null}}.
5. A NullPointerException will be thrown in the subsequent iteration over 
{{containers}}.

Shall we iterate over {{containers}} only when it is not null? I can create a 
new Jira to fix this if needed.
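The null-guard suggested above could look roughly like this. This is a sketch with simplified, hypothetical map and ID types; the real {{DeadNodeHandler}} operates on SCM's node-to-container map classes:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Sketch of the proposed null check in DeadNodeHandler#onMessage.
// The String-keyed map below is a simplified stand-in for the HDDS types.
public class DeadNodeHandlerSketch {

  /**
   * Returns the containers that were tracked for the dead node, or an
   * empty set when the node never had any containers allocated.
   */
  static Set<String> containersOfDeadNode(
      Map<String, Set<String>> node2ContainerMap, String deadNodeUuid) {
    Set<String> containers = node2ContainerMap.get(deadNodeUuid);
    // A freshly registered node may own no containers yet, so the lookup
    // can return null -- guard before iterating to avoid the NPE.
    if (containers == null) {
      return Collections.emptySet();
    }
    for (String containerId : containers) {
      // ... remove the replica information of containerId here ...
    }
    return containers;
  }
}
```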

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case of a node is dead we need to update the container replicas 
> information of the containerStateMap for all the containers from that 
> specific node.
> With removing the replica information we can detect the under replicated 
> state and start the replication.






[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-09-05 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604057#comment-16604057
 ] 

LiXin Ge commented on HDDS-336:
---

[~anu] Much appreciated! I don't know how to use the tag you mentioned, as I 
can't find any difference when I browse this JIRA system now. Could you please 
give me a hint?

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Assigned] (HDDS-401) Update storage statistics on dead node

2018-09-05 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HDDS-401:
-

Assignee: LiXin Ge

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-09-03 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602540#comment-16602540
 ] 

LiXin Ge commented on HDDS-336:
---

Thanks [~elek] for committing this. I'd be happy to participate in further 
Ozone improvements; please feel free to let me know if there is anything I can 
help with.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599504#comment-16599504
 ] 

LiXin Ge commented on HDDS-336:
---

Thanks [~elek] for your reviews and information. I have fixed the 
checkstyle/javadoc issues in the 005 patch.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Patch Available  (was: Open)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Attachment: HDDS-336.005.patch

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Open  (was: Patch Available)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Patch Available  (was: Open)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Attachment: HDDS-336.004.patch

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Open  (was: Patch Available)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.






[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-08-29 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596041#comment-16596041
 ] 

LiXin Ge commented on HDDS-336:
---

The failed junit tests pass on my local machine. I will fix the checkstyle 
warnings when Jenkins resumes; the checkstyle report link couldn't be opened 
all day.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information to the command line with 
> the ozone cli.
> It requires improving the REST and RPC interface with additional Ozone 
> KeyLocation information.
> It would be a very big help during the test of the current SCM behaviour.





