[jira] [Commented] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005536#comment-16005536
 ] 

Chen Liang commented on HDFS-11788:
---

Thanks [~anu] for the comment! The failed tests and findbugs warning are 
unrelated; will commit this shortly.

> Ozone : add DEBUG CLI support for nodepool db file
> --
>
> Key: HDFS-11788
> URL: https://issues.apache.org/jira/browse/HDFS-11788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11788-HDFS-7240.001.patch, 
> HDFS-11788-HDFS-7240.002.patch, HDFS-11788-HDFS-7240.003.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
> nodepool.db LevelDB file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11802) Ozone : add DEBUG CLI support for open container db file

2017-05-10 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11802:
-

 Summary: Ozone : add DEBUG CLI support for open container db file
 Key: HDFS-11802
 URL: https://issues.apache.org/jira/browse/HDFS-11802
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Chen Liang
Assignee: Chen Liang


This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
openContainer.db LevelDB file.







[jira] [Commented] (HDFS-11768) Ozone: KSM: Create Key Space manager service

2017-05-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005320#comment-16005320
 ] 

Chen Liang commented on HDFS-11768:
---

I have a question about how the access check will work.

Take {{deleteVolume}} for example: the .proto defines {{ACCESS_DENIED}} as a 
possible return status of deleteVolume, but the API {{void deleteVolume(String 
volume) throws IOException;}} does not carry any information about the 
permissions of the user making the call. So when the call reaches the server, 
how will the server know whether access should be granted or denied?
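To make the question concrete, here is a toy sketch of the kind of server-side check I would expect: since {{deleteVolume}} carries no user argument, the caller identity has to come from the per-call RPC context rather than the API signature. All names here are invented for illustration; this is not the actual KSM code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: deleteVolume(String) has no user parameter, so
// the server resolves the caller from per-call context it tracks itself.
public class AccessCheckSketch {
    // Stand-in for the identity an RPC server attaches to the handler thread.
    static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();
    static final Map<String, String> VOLUME_OWNERS = new HashMap<>();

    // Server-side handler: identity comes from context, not from the arguments.
    static String deleteVolume(String volume) {
        String caller = CURRENT_USER.get();
        String owner = VOLUME_OWNERS.get(volume);
        if (owner == null) {
            return "VOLUME_NOT_FOUND";
        }
        if (!owner.equals(caller)) {
            return "ACCESS_DENIED";  // mirrors the .proto status in spirit
        }
        VOLUME_OWNERS.remove(volume);
        return "OK";
    }

    public static void main(String[] args) {
        VOLUME_OWNERS.put("vol1", "alice");
        CURRENT_USER.set("bob");
        System.out.println(deleteVolume("vol1"));  // bob is not the owner
        CURRENT_USER.set("alice");
        System.out.println(deleteVolume("vol1"));
    }
}
```

The open question is exactly how the real server populates that per-call identity.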

> Ozone: KSM: Create Key Space manager service
> 
>
> Key: HDFS-11768
> URL: https://issues.apache.org/jira/browse/HDFS-11768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11768-HDFS-7240.001.patch, ozone-key-space.pdf
>
>
> KSM is the namespace manager for Ozone. KSM relies on SCM for block 
> functions. The Ozone handler -- the REST protocol frontend -- talks to KSM 
> and SCM to get datanode addresses.
> This JIRA will add the service as well as add the protobuf definitions needed 
> to work with KSM.






[jira] [Commented] (HDFS-11764) NPE when the GroupMappingServiceProvider has no group

2017-05-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005227#comment-16005227
 ] 

Chen Liang commented on HDFS-11764:
---

Hi [~runlinzhang], could you please elaborate a bit on how you ran into this? 
It looks like none of the subclasses of GroupMappingServiceProvider ever 
returns null from getGroups(). Also, from which branch were you seeing this? 
The line numbers in the attached image do not seem to match trunk, branch-2.7, 
or branch-2.7.2.

> NPE when the GroupMappingServiceProvider has no group 
> --
>
> Key: HDFS-11764
> URL: https://issues.apache.org/jira/browse/HDFS-11764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
>Reporter: runlinzhang
>Priority: Critical
> Fix For: 2.7.2
>
> Attachments: image.png
>
>
> The following code can throw an NPE if GroupMappingServiceProvider.getGroups() 
> returns null:
> {code}
> public List<String> load(String user) throws Exception {
>   List<String> groups = fetchGroupList(user);
>   if (groups.isEmpty()) {
>     if (isNegativeCacheEnabled()) {
>       negativeCache.add(user);
>     }
>     // We throw here to prevent Cache from retaining an empty group
>     throw noGroupsForUser(user);
>   }
>   return groups;
> }
> {code}
> See the attached image.png.
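If a provider really could return null, a defensive variant of the load() path above would normalize null to an empty list before the isEmpty() check. This is a hypothetical sketch, not a proposed patch; fetchGroupList and the caches are simplified away.

```java
import java.util.Collections;
import java.util.List;

public class GroupLoadSketch {
    // Stand-in for GroupMappingServiceProvider#getGroups.
    interface GroupProvider {
        List<String> getGroups(String user);
    }

    // Defensive variant: a null group list is treated as empty, so the
    // isEmpty() call below cannot throw a NullPointerException.
    static List<String> load(GroupProvider provider, String user) {
        List<String> groups = provider.getGroups(user);
        if (groups == null) {
            groups = Collections.emptyList();
        }
        if (groups.isEmpty()) {
            // Mirrors "throw noGroupsForUser(user)" in the original code.
            throw new IllegalStateException("No groups found for user " + user);
        }
        return groups;
    }
}
```

With this shape, a null-returning provider produces the intended "no groups" error instead of an NPE.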






[jira] [Updated] (HDFS-11802) Ozone : add DEBUG CLI support for open container db file

2017-05-10 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11802:
--
Attachment: HDFS-11802-HDFS-7240.001.patch

Post v001 patch; will mark it Patch Available after HDFS-11788 gets in.

> Ozone : add DEBUG CLI support for open container db file
> 
>
> Key: HDFS-11802
> URL: https://issues.apache.org/jira/browse/HDFS-11802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11802-HDFS-7240.001.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
> openContainer.db LevelDB file.






[jira] [Updated] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11907:
--
Attachment: HDFS-11907.006.patch

Thanks [~arpitagarwal] for pointing out offline that 
{{checkAvailableResources}} has more than one caller. Post v006 patch to move 
the measurement into that method so it covers all the callers.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}} at a frequency of once per 
> second by default. And NameNodeResourceChecker#isResourceAvailable invokes 
> {{df.getAvailable()}} every time it is called.
> Since the available space rarely changes dramatically on a per-second basis, 
> a cached value should be sufficient: only fetch an updated value when the 
> cached value is too old, otherwise simply return the cached value. This way 
> df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.
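The caching idea in the description could be sketched like this (hypothetical names; as the later comments show, the thread ultimately added a metric instead of caching):

```java
// Hypothetical sketch of the cached-value idea: refresh from the expensive
// df.getAvailable()-style call only when the cached value is older than a
// configured interval; otherwise return the cached value.
public class CachedAvailableSpace {
    private final long refreshIntervalMs;
    private long cachedAvailable;
    private long lastCheckMs = Long.MIN_VALUE / 2;  // force refresh on first call

    public CachedAvailableSpace(long refreshIntervalMs) {
        this.refreshIntervalMs = refreshIntervalMs;
    }

    // Stand-in for the expensive filesystem query (df.getAvailable()).
    protected long queryAvailable() {
        return 1_000_000L;
    }

    public synchronized long getAvailable(long nowMs) {
        if (nowMs - lastCheckMs >= refreshIntervalMs) {
            cachedAvailable = queryAvailable();  // stale: refresh the cache
            lastCheckMs = nowMs;
        }
        return cachedAvailable;
    }
}
```

With a one-second health-check cadence and, say, a 10-second refresh interval, this would cut the df calls by roughly 10x at the cost of up to 10 seconds of staleness.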






[jira] [Comment Edited] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-09 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045096#comment-16045096
 ] 

Chen Liang edited comment on HDFS-11907 at 6/9/17 9:59 PM:
---

Thanks [~arpitagarwal] for pointing out offline that 
{{checkAvailableResources}} has more than one caller. Post v006 patch to move 
the measurement into that method so it covers all the code paths.


was (Author: vagarychen):
Thanks [~arpitagarwal] for pointing out offline that, 
{{checkAvailableResources}} has more than one callers. Post v006 patch to move 
the measurement into this method to capture all the callers.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}} at a frequency of once per 
> second by default. And NameNodeResourceChecker#isResourceAvailable invokes 
> {{df.getAvailable()}} every time it is called.
> Since the available space rarely changes dramatically on a per-second basis, 
> a cached value should be sufficient: only fetch an updated value when the 
> cached value is too old, otherwise simply return the cached value. This way 
> df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.






[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11939:
--
Attachment: HDFS-11939-HDFS-7240.003.patch

Post v003 patch to fix a bug.

> Ozone : add read/write random access to Chunks of a key
> ---
>
> Key: HDFS-11939
> URL: https://issues.apache.org/jira/browse/HDFS-11939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11939-HDFS-7240.001.patch, 
> HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch
>
>
> In Ozone, the value of a key is a sequence of container chunks. Currently, 
> the only way to read/write the chunks is by using ChunkInputStream and 
> ChunkOutputStream. However, by the nature of streams, these classes are 
> currently implemented to only allow sequential read/write. 
> Ideally we would like to support random access of the chunks. For example, we 
> want to be able to seek to a specific offset and read/write some data. This 
> will be critical for key range read/write feature, and potentially important 
> for supporting parallel read/write.
> This JIRA tracks adding this support by implementing a FileChannel class on 
> top of Chunks.






[jira] [Updated] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11907:
--
Attachment: HDFS-11907.005.patch

Thanks [~andrew.wang] for the comments. Based on them, instead of making the 
change to cache the value, post v005 patch to add a metric to keep track of 
the available-resource check time.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}} at a frequency of once per 
> second by default. And NameNodeResourceChecker#isResourceAvailable invokes 
> {{df.getAvailable()}} every time it is called.
> Since the available space rarely changes dramatically on a per-second basis, 
> a cached value should be sufficient: only fetch an updated value when the 
> cached value is too old, otherwise simply return the cached value. This way 
> df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.






[jira] [Comment Edited] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-09 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044987#comment-16044987
 ] 

Chen Liang edited comment on HDFS-11907 at 6/9/17 8:37 PM:
---

Thanks [~andrew.wang] for the comments. Based on them, instead of making the 
change to cache the value, post v005 patch to add a metric to keep track of 
the available-resource check time.


was (Author: vagarychen):
Thanks [~andrew.wang] for the comments. Based on Andrew's comments, instead of 
making the cache the value, post v005 patch to add a metric to keep track of 
available resource check time.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}} at a frequency of once per 
> second by default. And NameNodeResourceChecker#isResourceAvailable invokes 
> {{df.getAvailable()}} every time it is called.
> Since the available space rarely changes dramatically on a per-second basis, 
> a cached value should be sufficient: only fetch an updated value when the 
> cached value is too old, otherwise simply return the cached value. This way 
> df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.






[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11939:
--
Attachment: HDFS-11939-HDFS-7240.002.patch

Post v002 patch to fix checkstyle warnings.

> Ozone : add read/write random access to Chunks of a key
> ---
>
> Key: HDFS-11939
> URL: https://issues.apache.org/jira/browse/HDFS-11939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11939-HDFS-7240.001.patch, 
> HDFS-11939-HDFS-7240.002.patch
>
>
> In Ozone, the value of a key is a sequence of container chunks. Currently, 
> the only way to read/write the chunks is by using ChunkInputStream and 
> ChunkOutputStream. However, by the nature of streams, these classes are 
> currently implemented to only allow sequential read/write. 
> Ideally we would like to support random access of the chunks. For example, we 
> want to be able to seek to a specific offset and read/write some data. This 
> will be critical for key range read/write feature, and potentially important 
> for supporting parallel read/write.
> This JIRA tracks adding this support by implementing a FileChannel class on 
> top of Chunks.






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046809#comment-16046809
 ] 

Chen Liang commented on HDFS-11907:
---

Whoops... thanks [~arpitagarwal]!

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11907:
--
Attachment: HDFS-11907.007.patch

Thanks [~arpitagarwal] for the catch! Addressed in v007 patch.

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047093#comment-16047093
 ] 

Chen Liang commented on HDFS-11907:
---

The failed tests are unrelated.

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046854#comment-16046854
 ] 

Chen Liang commented on HDFS-11967:
---

v001 patch LGTM, thanks [~arpitagarwal] for the analysis and the patch!

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11939:
--
Attachment: HDFS-11939-HDFS-7240.004.patch

Had an offline discussion with [~anu]; post v004 patch to address the following:
- throw an exception when positioning to a location > total size
- for read, if the dst buffer already has remaining length 0, return 0 immediately
- for the read path, order the chunks by their offsets first, then do a binary 
search instead of iterating when locating a chunk
- earlier, the "undefined" part of the data could be anything random; changed 
it to zero bytes
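The chunk-locating change on the read path can be sketched as follows. This is a hypothetical illustration, not the patch itself: it assumes the chunk start offsets have been sorted ascending (the "order the chunks by their offsets first" step) and then binary-searches for the chunk containing a position.

```java
import java.util.Arrays;

public class ChunkLocatorSketch {
    // Given the sorted start offsets of the chunks, find the index of the
    // chunk containing pos via binary search instead of a linear scan.
    // Positions past the last chunk's end are the caller's concern (the patch
    // throws when positioning beyond the total size).
    static int locateChunk(long[] sortedChunkOffsets, long pos) {
        if (pos < sortedChunkOffsets[0]) {
            throw new IllegalArgumentException("position before first chunk: " + pos);
        }
        int idx = Arrays.binarySearch(sortedChunkOffsets, pos);
        // Exact hit: pos is a chunk boundary. Miss: binarySearch returns
        // -(insertionPoint) - 1, and the containing chunk is insertionPoint - 1.
        return idx >= 0 ? idx : -idx - 2;
    }
}
```

This turns each chunk lookup from O(n) into O(log n) over the key's chunk list.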

> Ozone : add read/write random access to Chunks of a key
> ---
>
> Key: HDFS-11939
> URL: https://issues.apache.org/jira/browse/HDFS-11939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11939-HDFS-7240.001.patch, 
> HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, 
> HDFS-11939-HDFS-7240.004.patch
>
>
> In Ozone, the value of a key is a sequence of container chunks. Currently, 
> the only way to read/write the chunks is by using ChunkInputStream and 
> ChunkOutputStream. However, by the nature of streams, these classes are 
> currently implemented to only allow sequential read/write. 
> Ideally we would like to support random access of the chunks. For example, we 
> want to be able to seek to a specific offset and read/write some data. This 
> will be critical for key range read/write feature, and potentially important 
> for supporting parallel read/write.
> This JIRA tracks adding this support by implementing a FileChannel class on 
> top of Chunks.






[jira] [Comment Edited] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056630#comment-16056630
 ] 

Chen Liang edited comment on HDFS-12002 at 6/20/17 11:06 PM:
-

Post initial patch.

Thanks [~elek] for the comments! Added the change you suggested. Also, I want 
to take back the #4 change I mentioned in the description: as Ozone is still 
under development, it seems more beneficial to print the exception trace 
rather than hide it for the time being.


was (Author: vagarychen):
Post initial patch.

Thanks [~elek] for the comments! Added the change you suggested. Also, I want 
to take back the #4 change I mentioned in the description, as ozone is still 
under development, it seems more beneficial to print the error message rather 
than hiding them for the time being.

> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12002-HDFS-7240.001.patch
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. Some commands do not use the -c option to take the container name. An 
> issue with this is that arguments need to be in a certain order to be parsed 
> correctly, e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not.
> A more important point is that, since -del requires the following argument to 
> be the container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2. Some subcommands do not display errors as clearly as they could, e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it is missing the 
> container name, so the CLI complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm <commands> [<options>]
> where <commands> can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is the container name that is missing.
> 3. Probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info.
> 4. When passing an invalid argument, e.g. -info on a non-existing container, 
> an exception trace is displayed. We probably should not scare the users; 
> display just one error message and move the exception trace to a debug mode 
> or similar.
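Point 1 above is essentially about parsing options by name rather than by position. A toy sketch of the order-independent behavior (hypothetical; the real CLI would use a proper option-parsing library rather than hand-rolled code like this):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of point 1: recognize flags by name so that
// "-del c0 -f" and "-del -f c0" parse to the same request.
public class DeleteArgsSketch {
    // Returns {containerName, forceAsString} for a -del invocation's arguments.
    static String[] parseDelete(String[] args) {
        boolean force = false;
        List<String> positional = new ArrayList<>();
        for (String a : args) {
            if (a.equals("-f")) {
                force = true;       // a flag: position-independent
            } else {
                positional.add(a);  // the leftover token is the container name
            }
        }
        if (positional.size() != 1) {
            // Point 2: say precisely what is missing.
            throw new IllegalArgumentException("-del requires exactly one container name");
        }
        return new String[] { positional.get(0), Boolean.toString(force) };
    }
}
```

Both orderings yield the same result, and the error message names the missing container rather than echoing unrecognized options.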






[jira] [Updated] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12002:
--
Status: Patch Available  (was: Open)

> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12002-HDFS-7240.001.patch
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. Some commands do not use the -c option to take the container name. An 
> issue with this is that arguments need to be in a certain order to be parsed 
> correctly, e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not.
> A more important point is that, since -del requires the following argument to 
> be the container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2. Some subcommands do not display errors as clearly as they could, e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it is missing the 
> container name, so the CLI complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm <commands> [<options>]
> where <commands> can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is the container name that is missing.
> 3. Probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info.
> 4. When passing an invalid argument, e.g. -info on a non-existing container, 
> an exception trace is displayed. We probably should not scare the users; 
> display just one error message and move the exception trace to a debug mode 
> or similar.






[jira] [Updated] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12002:
--
Attachment: HDFS-12002-HDFS-7240.001.patch

Post initial patch.

Thanks [~elek] for the comments! Added the change you suggested. Also, I want 
to take back the #4 change I mentioned in the description: as Ozone is still 
under development, it seems more beneficial to print the error message rather 
than hide it for the time being.

> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12002-HDFS-7240.001.patch
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. Some commands do not use the -c option to take the container name. An 
> issue with this is that arguments need to be in a certain order to be parsed 
> correctly, e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not.
> A more important point is that, since -del requires the following argument to 
> be the container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2. Some subcommands do not display errors as clearly as they could, e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it is missing the 
> container name, so the CLI complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm <commands> [<options>]
> where <commands> can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is the container name that is missing.
> 3. Probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info.
> 4. When passing an invalid argument, e.g. -info on a non-existing container, 
> an exception trace is displayed. We probably should not scare the users; 
> display just one error message and move the exception trace to a debug mode 
> or similar.






[jira] [Comment Edited] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056630#comment-16056630
 ] 

Chen Liang edited comment on HDFS-12002 at 6/20/17 11:15 PM:
-

Post initial patch.

Thanks [~elek] for the comments! Added the change you suggested. 


was (Author: vagarychen):
Post initial patch.

Thanks [~elek] for the comments! Added the change you suggested. Also, I want 
to take back the #4 change I mentioned in the description, as ozone is still 
under development, it seems more beneficial to print the exception trace 
message rather than hiding them for the time being.

> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12002-HDFS-7240.001.patch
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. some commands do not use -c option to take container name. an issue with 
> this is that arguments need to be in a certain order to be correctly parsed, 
> e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not
> A more important thing is that, since -del requires the following argument 
> being container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2.some subcommands are not displaying the errors in the best way it could be, 
> e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
> So cli complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm  []
> where  can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is container name it is missing
> 3. probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info






[jira] [Updated] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12002:
--
Description: 
Currently there are a few minor issues with the SCM CLI:

1. Some commands do not use the -c option to take the container name. One issue with this is that arguments need to be in a certain order to be parsed correctly, e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not.
More importantly, since -del requires the following argument to be the container name, typing {{./bin/hdfs scm -container -del -help}} produces an error, while we probably want to display a help message instead.

2. Some subcommands do not display errors in the best way they could, e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it is missing the container name, so the CLI complains:
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm  []
where  can be one of the following
 -container   Container related options
{code}
but this does not really show that it is the container name that is missing.

3. It is probably better to rename -del to -delete, for consistency with other commands like -create and -info.

  was:
Currently there are a few minor issues with the SCM CLI:

1. some commands do not use -c option to take container name. an issue with 
this is that arguments need to be in a certain order to be correctly parsed, 
e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not
A more important thing is that, since -del requires the following argument 
being container name, if someone types {{./bin/hdfs scm -container -del -help}} 
it will be an error, while we probably want to display a help message instead.

2.some subcommands are not displaying the errors in the best way it could be, 
e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
So cli complains 
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm  []
where  can be one of the following
 -container   Container related options
{code}
but this does not really show that it is container name it is missing

3. probably better to rename -del to -delete to be consistent with other 
commands like -create and -info

4. when passing in invalid argument e.g. -info on a non-existing container, an 
exception will be displayed. We probably should not scare the users, and only 
display just one error message. And move the exception display to debug mode 
display or something.


> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12002-HDFS-7240.001.patch
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. some commands do not use -c option to take container name. an issue with 
> this is that arguments need to be in a certain order to be correctly parsed, 
> e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not
> A more important thing is that, since -del requires the following argument 
> being container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2.some subcommands are not displaying the errors in the best way it could be, 
> e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
> So cli complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm  []
> where  can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is container name it is missing
> 3. probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info






[jira] [Commented] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056668#comment-16056668
 ] 

Chen Liang commented on HDFS-11998:
---

The {{TestAvailableSpaceBlockPlacementPolicy}} failure seems related; {{TestReplicationPolicyWithNodeGroup}} also fails because the node-group code tries to cast the new topology class to the old one. Still investigating these two failures. The other failed tests all pass in my local run, so they should be unrelated.

> Enable DFSNetworkTopology as default
> 
>
> Key: HDFS-11998
> URL: https://issues.apache.org/jira/browse/HDFS-11998
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11998.001.patch
>
>
> HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
> uses {{NetworkTopology}} as default. 
> Given the stress testing in HDFS-11923 which shows the correctness of 
> DFSNetworkTopology, and the performance testing in HDFS-11535 which shows how 
> DFSNetworkTopology can outperform NetworkTopology. I think we are at the 
> point where I can and should enable DFSNetworkTopology as default.
> Any comments/thoughts are more than welcome!






[jira] [Commented] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056705#comment-16056705
 ] 

Chen Liang commented on HDFS-11998:
---

{{TestAvailableSpaceBlockPlacementPolicy}} fails because {{AvailableSpaceBlockPlacementPolicy}} only overrides chooseDataNode(...) for its optimization. However, the caller of this method in {{BlockPlacementPolicyDefault#chooseRandom()}} has changed to call chooseDataNode(..., type) on {{DFSNetworkTopology}}, which bypasses the override that AvailableSpaceBlockPlacementPolicy made in chooseDataNode(...) and thus invalidates its change. We need to also override chooseDataNode(..., type) in AvailableSpaceBlockPlacementPolicy. I've made the change locally, and it seems to resolve the test failure. Will post the next patch once the other test failure is resolved.

> Enable DFSNetworkTopology as default
> 
>
> Key: HDFS-11998
> URL: https://issues.apache.org/jira/browse/HDFS-11998
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11998.001.patch
>
>
> HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
> uses {{NetworkTopology}} as default. 
> Given the stress testing in HDFS-11923 which shows the correctness of 
> DFSNetworkTopology, and the performance testing in HDFS-11535 which shows how 
> DFSNetworkTopology can outperform NetworkTopology. I think we are at the 
> point where I can and should enable DFSNetworkTopology as default.
> Any comments/thoughts are more than welcome!






[jira] [Created] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12002:
-

 Summary: Ozone : SCM cli misc fixes/improvements
 Key: HDFS-12002
 URL: https://issues.apache.org/jira/browse/HDFS-12002
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang
 Fix For: ozone


Currently there are a few minor issues with the SCM CLI:

1. Some commands do not use the -c option to take the container name. One issue with this is that arguments need to be in a certain order to be parsed correctly, e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not.

2. Some subcommands do not display errors in the best way they could, e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it is missing the container name, so the CLI complains:
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm  []
where  can be one of the following
 -container   Container related options
{code}
but this does not really show that it is the container name that is missing.

3. It is probably better to rename -del to -delete, for consistency with other commands like -create and -info.

4. When passing an invalid argument, e.g. -info on a non-existing container, an exception trace is displayed. We probably should not scare users; we should display just one error message, and move the exception trace to a debug-mode display or something similar.
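The order-dependence described in point 1 disappears once every value is bound to a named flag such as -c. A minimal sketch in plain Java of that idea (a hypothetical illustration, not the actual SCM CLI or commons-cli code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class FlagParser {
    // Hypothetical order-independent parser: flags that take a value
    // (like "-c <container>") are declared up front; every other token
    // is treated as a boolean switch such as "-f".
    public static Map<String, String> parse(String[] args, Set<String> valueFlags) {
        Map<String, String> parsed = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            String flag = args[i];
            if (valueFlags.contains(flag)) {
                if (i + 1 >= args.length) {
                    throw new IllegalArgumentException("Missing value for " + flag);
                }
                parsed.put(flag, args[++i]);  // consume the bound value
            } else {
                parsed.put(flag, "");         // boolean switch, no value
            }
        }
        return parsed;
    }
}
```

With -c declared as a value flag, {{-del -c c0 -f}} and {{-del -f -c c0}} yield the same map, and {{-help}} can never be mistaken for a container name.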






[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command

2017-06-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056453#comment-16056453
 ] 

Chen Liang commented on HDFS-11679:
---

Thanks [~yuanbo] for working on this, the v003 patch looks pretty good to me.

One minor thing: it seems that if I do {{./bin/hdfs scm -container -list}} or {{./bin/hdfs scm -container -list -start XXX -prefix YYY}} (basically, without specifying -count), it displays nothing. My understanding is that this is because count is 0 by default, which is fine with me. But displaying nothing can be somewhat confusing. I'm thinking that when the output is going to be empty, instead of showing nothing, we should always display a message describing the possible reasons for the empty output, such as "need to specify -count, or the prefix does not exist, or XYZ, etc.".

Also, if we list on an empty db (i.e. list when no containers exist at all), we get an "Invalid start key, not found in current db." exception. Maybe we want to catch this and display something more informative (e.g. "no containers exist").

> Ozone: SCM CLI: Implement list container command
> 
>
> Key: HDFS-11679
> URL: https://issues.apache.org/jira/browse/HDFS-11679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>  Labels: command-line
> Attachments: HDFS-11679-HDFS-7240.001.patch, 
> HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch
>
>
> Implement the command to list containers
> {code}
> hdfs scm -container list -start  [-count <100> | -end 
> ]{code}
> Lists all containers known to SCM. The option -start allows the listing to 
> start from a specified container and -count controls the number of entries 
> returned but it is mutually exclusive with the -end option which returns keys 
> from the -start to -end range.






[jira] [Updated] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12002:
--
Description: 
Currently there are a few minor issues with the SCM CLI:

1. Some commands do not use the -c option to take the container name. One issue with this is that arguments need to be in a certain order to be parsed correctly, e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not.
More importantly, since -del requires the following argument to be the container name, typing {{./bin/hdfs scm -container -del -help}} produces an error, while we probably want to display a help message instead.

2. Some subcommands do not display errors in the best way they could, e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it is missing the container name, so the CLI complains:
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm  []
where  can be one of the following
 -container   Container related options
{code}
but this does not really show that it is the container name that is missing.

3. It is probably better to rename -del to -delete, for consistency with other commands like -create and -info.

4. When passing an invalid argument, e.g. -info on a non-existing container, an exception trace is displayed. We probably should not scare users; we should display just one error message, and move the exception trace to a debug-mode display or something similar.

  was:
Currently there are a few minor issues with the SCM CLI:

1. some commands do not use -c option to take container name. an issue with 
this is that arguments need to be in a certain order to be correctly parsed, 
e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not

2.some subcommands are not displaying the errors in the best way it could be, 
e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
So cli complains 
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm  []
where  can be one of the following
 -container   Container related options
{code}
but this does not really show that it is container name it is missing

3. probably better to rename -del to -delete to be consistent with other 
commands like -create and -info

4. when passing in invalid argument e.g. -info on a non-existing container, an 
exception will be displayed. We probably should not scare the users, and only 
display just one error message. And move the exception display to debug mode 
display or something.


> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. some commands do not use -c option to take container name. an issue with 
> this is that arguments need to be in a certain order to be correctly parsed, 
> e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not
> A more important thing is that, since -del requires the following argument 
> being container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2.some subcommands are not displaying the errors in the best way it could be, 
> e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
> So cli complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm  []
> where  can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is container name it is missing
> 3. probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info
> 4. when passing in invalid argument e.g. -info on a non-existing container, 
> an exception will be displayed. We probably should not scare the users, and 
> only display just one error message. And move the exception display to debug 
> mode display or something.






[jira] [Updated] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11998:
--
Status: Patch Available  (was: Open)

> Enable DFSNetworkTopology as default
> 
>
> Key: HDFS-11998
> URL: https://issues.apache.org/jira/browse/HDFS-11998
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11998.001.patch
>
>
> HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
> uses {{NetworkTopology}} as default. 
> Given the stress testing in HDFS-11923 which shows the correctness of 
> DFSNetworkTopology, and the performance testing in HDFS-11535 which shows how 
> DFSNetworkTopology can outperform NetworkTopology. I think we are at the 
> point where I can and should enable DFSNetworkTopology as default.
> Any comments/thoughts are more than welcome!






[jira] [Updated] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11998:
--
Attachment: HDFS-11998.001.patch

Posted the initial patch, which sets the config value to true by default.

> Enable DFSNetworkTopology as default
> 
>
> Key: HDFS-11998
> URL: https://issues.apache.org/jira/browse/HDFS-11998
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11998.001.patch
>
>
> HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
> uses {{NetworkTopology}} as default. 
> Given the stress testing in HDFS-11923 which shows the correctness of 
> DFSNetworkTopology, and the performance testing in HDFS-11535 which shows how 
> DFSNetworkTopology can outperform NetworkTopology. I think we are at the 
> point where I can and should enable DFSNetworkTopology as default.
> Any comments/thoughts are more than welcome!






[jira] [Updated] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11580:
--
Attachment: (was: HDFS-11580-HDFS-7240.008.patch)

> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch
>
>
> This is an umbrella JIRA that needs to support a set of APIs in Asynchronous 
> form.
> For containers -- or the datanode API currently supports a call 
> {{sendCommand}}. we need to build proper programming interface and support an 
> async interface.
> There is also a set of SCM API that clients can call, it would be nice to 
> support Async interface for those too.






[jira] [Commented] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-06-21 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058466#comment-16058466
 ] 

Chen Liang commented on HDFS-11580:
---

To [~linyiqun] and anyone reviewing this: there are several places with
{code}
try {
  validateContainerResponse(response);
} catch (StorageContainerException ignored) {
}
{code}
The {{StorageContainerException}} contains an error code which I think we should expose to the caller somehow. I spent quite some time looking at how to expose exceptions from callbacks (e.g. the {{thenApply}} we use here) but have found nothing yet. Alternatively, we may be able to return the error code without an exception, but I am still not sure how. Any thoughts?
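One possible direction, sketched below under the assumption that callers are willing to unwrap {{java.util.concurrent.CompletionException}} ({{CodedException}} is a hypothetical stand-in for {{StorageContainerException}}, not a real Ozone class): instead of swallowing the validation failure inside the callback, rethrow it so the stage completes exceptionally, and let the caller recover the error code from the wrapped cause.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class AsyncErrorDemo {
    // Hypothetical stand-in for an exception carrying an error code.
    static class CodedException extends RuntimeException {
        final int errorCode;
        CodedException(int errorCode, String msg) {
            super(msg);
            this.errorCode = errorCode;
        }
    }

    // Instead of catching and ignoring inside the callback, let the
    // exception propagate: the stage then completes exceptionally and
    // the framework wraps the cause in a CompletionException.
    static CompletableFuture<String> sendAsync(boolean fail) {
        return CompletableFuture.supplyAsync(() -> {
            if (fail) {
                throw new CodedException(42, "container error");
            }
            return "OK";
        });
    }

    // Caller side: unwrap the CompletionException to recover the code.
    static int errorCodeOf(CompletableFuture<String> future) {
        try {
            future.join();
            return 0;  // completed normally
        } catch (CompletionException e) {
            Throwable cause = e.getCause();
            return (cause instanceof CodedException)
                ? ((CodedException) cause).errorCode : -1;
        }
    }
}
```

The same unwrapping works from {{handle}} or {{exceptionally}} stages, so the error code need not be smuggled through the return value.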

> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch, 
> HDFS-11580-HDFS-7240.008.patch
>
>
> This is an umbrella JIRA that needs to support a set of APIs in Asynchronous 
> form.
> For containers -- or the datanode API currently supports a call 
> {{sendCommand}}. we need to build proper programming interface and support an 
> async interface.
> There is also a set of SCM API that clients can call, it would be nice to 
> support Async interface for those too.






[jira] [Updated] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11580:
--
Attachment: HDFS-11580-HDFS-7240.008.patch

I looked deeper into the v007 patch; it looks pretty good to me overall, thanks [~linyiqun]!

Regarding the failure of {{testCorrectnessWithMultipleAsyncCalls}}, I took some time debugging it. It turns out there is an issue in {{XceiverClientHandler#waitForResponse}}. Specifically, in the code
{code}
for (;;) {
  try {
ContainerProtos.ContainerCommandResponseProto curResponse;
// wait for the response
curResponse = responses.take();
// Check if current response is target response by comparing the
// traceID.
if (request.getTraceID().equals(curResponse.getTraceID())) {
  response = curResponse;
} else {
  pendingResponses.put(curResponse.getTraceID(), curResponse);
  // Try to get response from pending responses map and remove the
  // response in map.
  response = pendingResponses.remove(request.getTraceID());
}
  ...
}
{code}
Imagine the following case with two requests, t1 and t2: t1 calls take() but gets the response of t2 (async responses can certainly arrive out of order), so t1 parks it in {{pendingResponses}}, then goes back, calls {{take()}} again, gets the response for t1, and returns. Now t2 comes along and calls {{take()}}, and it blocks forever: its response is already sitting in {{pendingResponses}}, so if t2 is the last request, the take() will never return. Basically, before calling take(), the loop should first check whether the response is already in {{pendingResponses}}.

Posted the v008 patch (based on v007) to resolve this. I also changed {{take()}} to {{poll()}} to avoid another tricky race condition. I considered removing the {{pendingResponses}} variable and instead re-inserting unmatched responses into the {{responses}} queue, but a hash map should in theory perform better here.
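The check-pending-first pattern can be sketched with plain JDK types (this simulates the idea only, not the actual {{XceiverClientHandler}} code; trace IDs and payloads are plain strings here):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class ResponseMatcher {
    // A response is modeled as {traceId, payload}.
    private final BlockingQueue<String[]> responses = new LinkedBlockingQueue<>();
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    public void receive(String traceId, String payload) {
        responses.add(new String[]{traceId, payload});
    }

    // Fixed wait loop: consult the pending map BEFORE blocking on the
    // queue. Without this check, a request whose response was already
    // parked by another waiter would block in take() forever.
    public String waitFor(String traceId) {
        for (;;) {
            String parked = pending.remove(traceId);
            if (parked != null) {
                return parked;  // another waiter dequeued it for us
            }
            String[] current;
            try {
                current = responses.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
            if (traceId.equals(current[0])) {
                return current[1];
            }
            pending.put(current[0], current[1]);  // park a response we don't own
        }
    }
}
```

In the t1/t2 scenario above, t1 parks t2's response on the way to its own, and t2 then finds it in the map without ever touching the queue.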

> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch, 
> HDFS-11580-HDFS-7240.008.patch
>
>
> This is an umbrella JIRA that needs to support a set of APIs in Asynchronous 
> form.
> For containers -- or the datanode API currently supports a call 
> {{sendCommand}}. we need to build proper programming interface and support an 
> async interface.
> There is also a set of SCM API that clients can call, it would be nice to 
> support Async interface for those too.






[jira] [Updated] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11580:
--
Attachment: HDFS-11580-HDFS-7240.008.patch

> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch, 
> HDFS-11580-HDFS-7240.008.patch
>
>
> This is an umbrella JIRA that needs to support a set of APIs in Asynchronous 
> form.
> For containers -- or the datanode API currently supports a call 
> {{sendCommand}}. we need to build proper programming interface and support an 
> async interface.
> There is also a set of SCM API that clients can call, it would be nice to 
> support Async interface for those too.






[jira] [Updated] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11998:
--
Attachment: HDFS-11998.002.patch

Thanks [~linyiqun] for the catch! Updated in the v002 patch, which also makes the following changes:

1. {{TestAvailableSpaceBlockPlacementPolicy}} is fixed as described earlier.
2. {{TestReplicationPolicyWithNodeGroup}} disables {{DFSNetworkTopology}}. This test is written to exercise {{NetworkTopologyWithNodeGroup}}, another extension of {{NetworkTopology}}; enabling DFSNetworkTopology would run the test against DFSNetworkTopology when it is meant to run against NetworkTopologyWithNodeGroup, so DFSNetworkTopology is disabled as the default for this particular test.

> Enable DFSNetworkTopology as default
> 
>
> Key: HDFS-11998
> URL: https://issues.apache.org/jira/browse/HDFS-11998
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11998.001.patch, HDFS-11998.002.patch
>
>
> HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
> uses {{NetworkTopology}} as default. 
> Given the stress testing in HDFS-11923 which shows the correctness of 
> DFSNetworkTopology, and the performance testing in HDFS-11535 which shows how 
> DFSNetworkTopology can outperform NetworkTopology. I think we are at the 
> point where I can and should enable DFSNetworkTopology as default.
> Any comments/thoughts are more than welcome!






[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059747#comment-16059747
 ] 

Chen Liang commented on HDFS-12008:
---

Hi [~kihwal], it seems {{TestAvailableSpaceBlockPlacementPolicy#testChooseTarget}} has a very specific assertion: among the two selected nodes, the one with higher availability must be chosen with a probability in the range 0.52 to 0.55. Namely, even when two nodes are selected, there is still a fair chance that the one with lower availability gets chosen, and the probability in the assertion can easily be violated by changes to {{AvailableSpaceBlockPlacementPolicy}}. I haven't dug into it, but HDFS-8131 seems to have some mathematical proof of this.

The change in this JIRA makes sense to me, though; I just want to make sure we don't lose anything important from the two-node selection introduced in HDFS-8131.
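A small Monte Carlo sketch of why such a narrow probability window is fragile (the uniform free-space model, the comparison threshold, and the preference fraction below are illustrative assumptions, not the policy's actual defaults):

```java
import java.util.Random;

public class TwoNodeChoice {
    // Assumed model: free-space ratios are uniform in [0,1]. When the two
    // candidates differ by more than `threshold`, the roomier node wins
    // with probability `preference`; otherwise the tie is broken uniformly.
    public static double winRate(double preference, double threshold,
                                 int trials, long seed) {
        Random rnd = new Random(seed);
        int wins = 0;
        for (int i = 0; i < trials; i++) {
            double a = rnd.nextDouble();
            double b = rnd.nextDouble();
            boolean comparable = Math.abs(a - b) > threshold;
            boolean pickRoomier = comparable
                ? rnd.nextDouble() < preference  // biased toward more space
                : rnd.nextBoolean();             // effectively a coin flip
            if (pickRoomier) {
                wins++;
            }
        }
        return (double) wins / trials;
    }
}
```

Under this model the aggregate win rate lands only slightly above 0.5 (it is a blend of the coin flip and the biased comparison), so an assertion window as tight as 0.52 to 0.55 shifts whenever the share of effective comparisons changes.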

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one node. It could avoid picking the second node when not 
> necessary.






[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060053#comment-16060053
 ] 

Chen Liang commented on HDFS-12008:
---

thanks [~kihwal] for the illustration!

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one node. It could avoid picking the second node when not 
> necessary.






[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060005#comment-16060005
 ] 

Chen Liang commented on HDFS-12008:
---

Jenkins reports a strange "null" in the assertion error. When I tried the patch 
locally, I got a failure on this same assertion, but because the probability 
value violated the assertion, not because of that null error. There were also 
times it just passed. I'm a little confused though: what did you mean by it 
being a "wrong behavior"? [~kihwal]

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one node. It could avoid picking the second node when not 
> necessary.






[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061597#comment-16061597
 ] 

Chen Liang commented on HDFS-12028:
---

Thanks [~xyao] for filing this! I did some initial investigation by looking 
into the output of "mvn dependency:tree". I think this is caused by jscsi 
pulling {{logback-classic-1.0.10.jar}} in as a dependency. To resolve this I 
think we need to add {{...}} to certain pom files; I haven't looked into which 
ones and where exactly to add it, though.
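For illustration, the fix would likely be a Maven {{<exclusions>}} entry on whichever dependency pulls jscsi in. A rough sketch follows; the jscsi coordinates below are assumptions for illustration, not verified against the actual pom files:

```xml
<!-- Sketch only: the groupId/artifactId of the jscsi dependency are assumed. -->
<dependency>
  <groupId>org.jscsi</groupId>
  <artifactId>target</artifactId>
  <exclusions>
    <!-- keep the logback SLF4J binding off the runtime classpath -->
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```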

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>
> Currently when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Status: Patch Available  (was: Open)

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlock, since CBlock uses Ozone's 
> config files.






[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Attachment: HDFS-12018-HDFS-7240.001.patch

Posted the initial patch; also removed a few unused keys in CBlockConfigKeys.

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlock, since CBlock uses Ozone's 
> config files.






[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061279#comment-16061279
 ] 

Chen Liang commented on HDFS-11993:
---

Thanks [~candychencan] for the patch. Since this seems to be an slf4j logger, 
how about using {} placeholders? E.g. change

{code}
DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block " + 
targetBlock.getBlock() + ", add to deadNodes and continue. " + ex, ex);
{code}

to something like

{code}
DFSClient.LOG.warn("Failed to connect to {} for block {}, add to deadNodes and 
continue. ", targetAddr, targetBlock.getBlock(), ex);
{code}

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, the log is as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Assigned] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12018:
-

Assignee: Chen Liang

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlock, since CBlock uses Ozone's 
> config files.






[jira] [Updated] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12028:
--
Attachment: HDFS-12028-HDFS-7240.001.patch

Post v001 patch.

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently when you run CLI "hdfs oz ...", there always a noisy slf4j biding 
> issue log output. This ticket is opened to removed it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Assigned] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12028:
-

Assignee: Chen Liang

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
>
> Currently when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Updated] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12028:
--
Status: Patch Available  (was: Open)

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Created] (HDFS-12041) Block Storage : make the server address config more concise

2017-06-26 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12041:
-

 Summary: Block Storage : make the server address config more 
concise
 Key: HDFS-12041
 URL: https://issues.apache.org/jira/browse/HDFS-12041
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Priority: Minor


Currently there are a few places where addresses are read from the config like 
this:
{code}
String cbmIPAddress = ozoneConf.get(
DFS_CBLOCK_JSCSI_CBLOCK_SERVER_ADDRESS_KEY,
DFS_CBLOCK_JSCSI_CBLOCK_SERVER_ADDRESS_DEFAULT
);
int cbmPort = ozoneConf.getInt(
DFS_CBLOCK_JSCSI_PORT_KEY,
DFS_CBLOCK_JSCSI_PORT_DEFAULT
);
{code}
Similarly for the jscsi address config. Maybe we should consider merging these 
into a single config key in the form host:port.






[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Attachment: HDFS-12018-HDFS-7240.002.patch

Thanks [~xyao] for the comments! Posted v002 patch. The last two changes you 
suggested require additional changes in a couple of places in the CBlock code, 
so I have filed HDFS-12041 to follow up. The other comments are addressed.

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch, 
> HDFS-12018-HDFS-7240.002.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlock, since CBlock uses Ozone's 
> config files.






[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Attachment: HDFS-12018-HDFS-7240.003.patch

Post v003 patch to rebase.

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch, 
> HDFS-12018-HDFS-7240.002.patch, HDFS-12018-HDFS-7240.003.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlock, since CBlock uses Ozone's 
> config files.






[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-26 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16063457#comment-16063457
 ] 

Chen Liang commented on HDFS-12028:
---

Thanks [~xyao] for the comments. [~cheersyang] it seems to have been resolved 
in my environment; could you share a bit more about how you hit this (e.g. how 
and which commands you ran)? Also, you may need to mvn clean and rebuild the 
whole thing to make the patch take effect.

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Created] (HDFS-12043) Add counters for block re-replication

2017-06-26 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12043:
-

 Summary: Add counters for block re-replication
 Key: HDFS-12043
 URL: https://issues.apache.org/jira/browse/HDFS-12043
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chen Liang
Assignee: Chen Liang


We occasionally see that the under-replicated block count is not going down 
quickly enough. We've made at least one fix to speed up block replications 
(HDFS-9205), but we need better insight into the current state and activity of 
the block re-replication logic. For example, we need to understand whether 
re-replication is not making forward progress at all, or whether new 
under-replicated blocks are being added faster than they are re-replicated.

We should include additional metrics:
# Cumulative number of blocks that were successfully replicated. 
# Cumulative number of re-replications that timed out.
# Cumulative number of blocks that were dequeued for re-replication but not 
scheduled e.g. because they were invalid, or under-construction or replication 
was postponed.
 
The growth rates of the above metrics will make it clear whether block 
replication is making forward progress and, if not, provide potential clues 
about why it is stalled.
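As a plain-Java sketch of the three cumulative counters (illustration only; the actual patch would register these through Hadoop's metrics2 framework, and the names here are made up):

```java
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative cumulative counters for block re-replication progress. */
public class ReplicationCounters {
    private final AtomicLong blocksReplicated = new AtomicLong();     // successfully re-replicated
    private final AtomicLong replicationsTimedOut = new AtomicLong(); // pending work that timed out
    private final AtomicLong blocksNotScheduled = new AtomicLong();   // dequeued but skipped

    public void onReplicated()   { blocksReplicated.incrementAndGet(); }
    public void onTimeout()      { replicationsTimedOut.incrementAndGet(); }
    public void onNotScheduled() { blocksNotScheduled.incrementAndGet(); }

    public long getBlocksReplicated()     { return blocksReplicated.get(); }
    public long getReplicationsTimedOut() { return replicationsTimedOut.get(); }
    public long getBlocksNotScheduled()   { return blocksNotScheduled.get(); }
}
```

Comparing the growth rate of blocksReplicated against the rate at which blocks enter the under-replicated queue is what would tell us whether progress is being made.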

Thanks [~arpitagarwal] for the offline discussions.







[jira] [Created] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11996:
-

 Summary: Ozone : add partial read of chunks
 Key: HDFS-11996
 URL: https://issues.apache.org/jira/browse/HDFS-11996
 Project: Hadoop HDFS
  Issue Type: Sub-task
 Environment: Currently when reading a chunk, it is always the whole 
chunk that gets returned. However it is possible the reader may only need to 
read a subset of the chunk. This JIRA adds the partial read of chunks.
Reporter: Chen Liang
Assignee: Chen Liang









[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054713#comment-16054713
 ] 

Chen Liang commented on HDFS-11963:
---

Thanks [~anu] for the update, v005 patch LGTM.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 






[jira] [Commented] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054494#comment-16054494
 ] 

Chen Liang commented on HDFS-11992:
---

Thanks [~xiaodong.hu] for the contribution! v001 patch LGTM.

> Replace commons-logging APIs with slf4j in FsDatasetImpl
> 
>
> Key: HDFS-11992
> URL: https://issues.apache.org/jira/browse/HDFS-11992
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Akira Ajisaka
>Assignee: hu xiaodong
> Attachments: HDFS-11992.001.patch
>
>
> {{FsDatasetImpl.LOG}} is widely used and this will change the APIs of 
> InstrumentedLock and InstrumentedWriteLock, so this issue is to change only 
> {{FsDatasetImpl.LOG}} and other related APIs.






[jira] [Created] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11997:
-

 Summary: ChunkManager functions do not use the argument keyName
 Key: HDFS-11997
 URL: https://issues.apache.org/jira/browse/HDFS-11997
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


{{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
{{deleteChunk}} all take a {{keyName}} argument, which is not being used by any 
of them.

I think this makes sense because conceptually {{ChunkManager}} should not have 
to know the keyName to do anything, except perhaps for some sort of sanity 
check or logging, which is not there either. We should revisit whether we need 
it here. I think we should remove it to make the chunk abstraction and the 
function signatures cleaner.

Any comments? [~anu]






[jira] [Commented] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054779#comment-16054779
 ] 

Chen Liang commented on HDFS-11997:
---

I think from the perspective of abstraction, {{ChunkManager}} should work 
(read/write/delete a chunk) given just the metadata of the chunk. This is not 
causing any issue for now, and most likely never will, but I felt that having 
this field unused causes confusion. I simply didn't see any case where this 
field should be used by ChunkManager as part of any of the operations. In 
fact, an implementation of a chunk manager that relies on the key name seems 
to break the abstraction in some way to me...
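To make the point concrete, here is a minimal sketch of chunk operations keyed only by container and chunk names, with no keyName parameter (names simplified for illustration; the real interface works with Pipeline/ChunkInfo objects):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative chunk manager: operations need only chunk metadata, not the key name. */
interface ChunkManagerSketch {
    void writeChunk(String containerName, String chunkName, byte[] data);
    byte[] readChunk(String containerName, String chunkName);
    void deleteChunk(String containerName, String chunkName);
}

/** Toy in-memory implementation, just to show the signatures are sufficient. */
class InMemoryChunkManager implements ChunkManagerSketch {
    private final Map<String, byte[]> chunks = new HashMap<>();

    private static String id(String container, String chunk) {
        return container + "/" + chunk;
    }

    @Override
    public void writeChunk(String containerName, String chunkName, byte[] data) {
        chunks.put(id(containerName, chunkName), data.clone());
    }

    @Override
    public byte[] readChunk(String containerName, String chunkName) {
        return chunks.get(id(containerName, chunkName));
    }

    @Override
    public void deleteChunk(String containerName, String chunkName) {
        chunks.remove(id(containerName, chunkName));
    }
}
```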

> ChunkManager functions do not use the argument keyName
> --
>
> Key: HDFS-11997
> URL: https://issues.apache.org/jira/browse/HDFS-11997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>
> {{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
> {{deleteChunk}} all take a {{keyName}} argument, which is not being used by 
> any of them.
> I think this makes sense because conceptually {{ChunkManager}} should not 
> have to know the keyName to do anything, except perhaps for some sort of 
> sanity check or logging, which is not there either. We should revisit whether 
> we need it here. I think we should remove it to make the chunk abstraction 
> and the function signatures cleaner.
> Any comments? [~anu]






[jira] [Created] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11998:
-

 Summary: Enable DFSNetworkTopology as default
 Key: HDFS-11998
 URL: https://issues.apache.org/jira/browse/HDFS-11998
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


HDFS-11530 has made it configurable to use {{DFSNetworkTopology}}, and still 
uses {{NetworkTopology}} as default. 

Given the stress testing in HDFS-11923, which shows the correctness of 
DFSNetworkTopology, and the performance testing in HDFS-11535, which shows how 
DFSNetworkTopology can outperform NetworkTopology, I think we are at the point 
where we can and should enable DFSNetworkTopology as default.

Any comments/thoughts are more than welcome!






[jira] [Updated] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11996:
--
Status: Patch Available  (was: Open)

> Ozone : add partial read of chunks
> --
>
> Key: HDFS-11996
> URL: https://issues.apache.org/jira/browse/HDFS-11996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: Currently when reading a chunk, it is always the whole 
> chunk that gets returned. However it is possible the reader may only need to 
> read a subset of the chunk. This JIRA adds the partial read of chunks.
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11996-HDFS-7240.001.patch
>
>







[jira] [Updated] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11996:
--
Attachment: HDFS-11996-HDFS-7240.001.patch

It turns out it is already possible to do a partial read of a chunk. So 
instead of making any actual change, I am adding a unit test to illustrate and 
verify how it works. 

> Ozone : add partial read of chunks
> --
>
> Key: HDFS-11996
> URL: https://issues.apache.org/jira/browse/HDFS-11996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: Currently when reading a chunk, it is always the whole 
> chunk that gets returned. However it is possible the reader may only need to 
> read a subset of the chunk. This JIRA adds the partial read of chunks.
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11996-HDFS-7240.001.patch
>
>







[jira] [Commented] (HDFS-11775) Ozone: KSM : add createBucket

2017-05-18 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016111#comment-16016111
 ] 

Chen Liang commented on HDFS-11775:
---

Thanks [~nandakumar131] for updating the patch! v007 patch LGTM, +1

> Ozone: KSM : add createBucket 
> --
>
> Key: HDFS-11775
> URL: https://issues.apache.org/jira/browse/HDFS-11775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Nandakumar
> Attachments: HDFS-11775-HDFS-7240.000.patch, 
> HDFS-11775-HDFS-7240.001.patch, HDFS-11775-HDFS-7240.002.patch, 
> HDFS-11775-HDFS-7240.003.patch, HDFS-11775-HDFS-7240.004.patch, 
> HDFS-11775-HDFS-7240.005.patch, HDFS-11775-HDFS-7240.006.patch, 
> HDFS-11775-HDFS-7240.007.patch
>
>
> Creates a bucket if it does not exist. A precondition to creating a bucket is 
> that a parent volume must exist.






[jira] [Commented] (HDFS-11835) Block Storage: Overwrite of blocks fails

2017-05-18 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016140#comment-16016140
 ] 

Chen Liang commented on HDFS-11835:
---

Thanks [~msingh] for the patch! The changes LGTM, but the failed cblock test 
seems related; could you please verify that?

> Block Storage: Overwrite of blocks fails
> 
>
> Key: HDFS-11835
> URL: https://issues.apache.org/jira/browse/HDFS-11835
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11835-HDFS-7240.001.patch
>
>
> Overwrite of blocks fails because the "OverWriteRequested" flag is not set 
> during chunk creation.
> {code}
> 2017-05-16 22:33:23,909 [nioEventLoopGroup-5-2] ERROR  - Rejecting write 
> chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='1_chunk, offset=0, len=
> 11933}
> {code}
> This flag needs to be set here 
> {code}
> public static void writeSmallFile(XceiverClientSpi client, String 
> containerName,
>   String key, byte[] data, String traceID) throws IOException {
> .
> ChunkInfo chunk = ChunkInfo
> .newBuilder()
> .setChunkName(key + "_chunk")
> .setOffset(0)
> .setLen(data.length)
> .build();
> {code}
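A sketch of the proposed fix (illustrative only: ChunkInfoSketch below is a simplified stand-in for the ChunkInfo protobuf builder shown above, and attaching the flag as a metadata entry named "OverWriteRequested" is an assumption based on the error message, not the confirmed Ozone API):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the ChunkInfo protobuf used in writeSmallFile.
// The real fix would attach an explicit-overwrite marker to the chunk when
// it is built; the key name and shape below are assumptions.
class ChunkInfoSketch {
  final String chunkName;
  final long offset;
  final long len;
  final Map<String, String> metadata = new HashMap<>();

  ChunkInfoSketch(String chunkName, long offset, long len) {
    this.chunkName = chunkName;
    this.offset = offset;
    this.len = len;
  }

  static ChunkInfoSketch forSmallFile(String key, byte[] data) {
    ChunkInfoSketch chunk = new ChunkInfoSketch(key + "_chunk", 0, data.length);
    // Mark the write as an explicit overwrite so the datanode does not
    // reject it with "Chunk overwrite without explicit request".
    chunk.metadata.put("OverWriteRequested", "true");
    return chunk;
  }
}
```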






[jira] [Updated] (HDFS-11850) Ozone: Stack Overflow in XceiverClientManager because of race condition in accessing openClient

2017-05-22 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11850:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the feature branch.

> Ozone: Stack Overflow in XceiverClientManager because of race condition in 
> accessing openClient
> ---
>
> Key: HDFS-11850
> URL: https://issues.apache.org/jira/browse/HDFS-11850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11850-HDFS-7240.001.patch
>
>
> There is a possible race condition in accessing the open client hash; it is 
> caused by unlocked access of the hash in acquireClient.
> This can cause a stack overflow and can also leak clients.
> {code}
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> at 
> com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3626)
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> at 
> com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3626)
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> {code}
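The loop in the stack trace comes from the cache's removal listener re-inserting entries while another thread races through an unsynchronized check-then-put. A minimal sketch of the locking pattern (illustrative only: a HashMap and String stand in for the Guava cache and XceiverClientSpi, and the method body is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: make the check-then-insert in acquireClient atomic so
// two threads cannot race and trigger the eviction/re-insert recursion seen
// in the stack trace. A HashMap stands in for the Guava cache.
class ClientManagerSketch {
  private final Map<String, String> openClients = new HashMap<>();
  private final Object lock = new Object();

  String acquireClient(String containerName) {
    synchronized (lock) {               // single guard for check + insert
      String client = openClients.get(containerName);
      if (client == null) {
        client = "client-for-" + containerName;  // stands in for connecting
        openClients.put(containerName, client);
      }
      return client;
    }
  }

  int openCount() {
    synchronized (lock) {
      return openClients.size();
    }
  }
}
```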






[jira] [Commented] (HDFS-11850) Ozone: Stack Overflow in XceiverClientManager because of race condition in accessing openClient

2017-05-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16019839#comment-16019839
 ] 

Chen Liang commented on HDFS-11850:
---

Thanks [~msingh] for working on this! v001 patch LGTM, will commit this shortly.

> Ozone: Stack Overflow in XceiverClientManager because of race condition in 
> accessing openClient
> ---
>
> Key: HDFS-11850
> URL: https://issues.apache.org/jira/browse/HDFS-11850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11850-HDFS-7240.001.patch
>
>
> There is a possible race condition in accessing the open client hash; it is 
> caused by unlocked access of the hash in acquireClient.
> This can cause a stack overflow and can also leak clients.
> {code}
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> at 
> com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3626)
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> at 
> com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3626)
> at 
> com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3019)
> at com.google.common.cache.LocalCache.put(LocalCache.java:4365)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
> at 
> org.apache.hadoop.scm.XceiverClientManager$1.onRemoval(XceiverClientManager.java:85)
> at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1966)
> at 
> com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3650)
> {code}






[jira] [Updated] (HDFS-11535) Performance analysis of new DFSNetworkTopology#chooseRandom

2017-05-22 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11535:
--
Attachment: HDFS-11535.004.patch

Thanks [~arpitagarwal] for the comments! Post v004 patch with a number of style 
updates.

> Performance analysis of new DFSNetworkTopology#chooseRandom
> ---
>
> Key: HDFS-11535
> URL: https://issues.apache.org/jira/browse/HDFS-11535
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11535.001.patch, HDFS-11535.002.patch, 
> HDFS-11535.003.patch, HDFS-11535.004.patch, PerfTest.pdf
>
>
> This JIRA is created to post the results of some performance experiments we 
> did. For those who are interested, please see the attached .pdf file for more 
> detail. The attached patch file includes the experiment code we ran. 
> The key insight we got from these tests is that although *the new method 
> outperforms the current one in most cases*, there is still *one case where 
> the current one is better*: when there is only one storage type in 
> the cluster and we also always look for this storage type. In this case, it 
> is simply a waste of time to perform storage-type-based pruning; blindly 
> picking a random node (the current method) would suffice.
> Therefore, based on the analysis, we propose to use a *combination of both 
> the old and the new methods*:
> say we search for a node of type X; since inner nodes now all keep storage 
> type info, we can *just check the root node to see if X is the only type it 
> has*. If yes, blindly picking a random leaf will work, so we simply call the 
> old method; otherwise we call the new method.
> There is still at least one missing piece in this performance test, which is 
> garbage collection. The new method creates a few more objects when doing 
> the search, which adds overhead to GC. I'm still thinking of potential 
> optimizations, but this seems tricky, and I'm also not sure whether this 
> optimization is worth doing at all. Please feel free to leave any 
> comments/suggestions.
> Thanks [~arpitagarwal] and [~szetszwo] for the offline discussion.
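The proposed dispatch can be sketched as follows (illustrative only: the class, enum, and method names are assumptions, not the actual DFSNetworkTopology API):

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the combined strategy: if the requested storage
// type is the only type the root tracks, fall back to the cheap legacy
// random pick; otherwise use the storage-type-aware search.
class TopologySketch {
  enum StorageType { DISK, SSD }

  private final Random rand = new Random();
  private final EnumSet<StorageType> rootTypes;
  private final List<String> allNodes;

  TopologySketch(EnumSet<StorageType> rootTypes, List<String> allNodes) {
    this.rootTypes = rootTypes;
    this.allNodes = allNodes;
  }

  String chooseRandom(StorageType wanted) {
    if (rootTypes.size() == 1 && rootTypes.contains(wanted)) {
      // Only one storage type in the cluster: pruning cannot help,
      // so pick blindly (the old method).
      return allNodes.get(rand.nextInt(allNodes.size()));
    }
    return chooseRandomWithStorageType(wanted);  // the new pruning search
  }

  private String chooseRandomWithStorageType(StorageType wanted) {
    // placeholder for the storage-type-aware subtree search
    return allNodes.get(rand.nextInt(allNodes.size()));
  }
}
```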






[jira] [Commented] (HDFS-11860) Ozone: SCM: SCMContainerPlacementCapacity#chooseNode sometimes does not remove chosen node from healthy list.

2017-05-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16019827#comment-16019827
 ] 

Chen Liang commented on HDFS-11860:
---

Thanks [~xyao] for the debugging and the fix! v001 patch LGTM with that 
checkstyle warning fixed. I haven't looked into the failed tests, but they seem 
potentially related; could you please verify that?

> Ozone: SCM: SCMContainerPlacementCapacity#chooseNode sometimes does not 
> remove chosen node from healthy list.
> -
>
> Key: HDFS-11860
> URL: https://issues.apache.org/jira/browse/HDFS-11860
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11860-HDFS-7240.001.patch
>
>
> This was caught randomly in a Jenkins run. After debugging, I found the cause 
> is the logic below: when the two random indices happen to be the same, the 
> node id is returned without being removed from the healthy list for the next 
> round of selection. As a result, there could be duplicated datanodes chosen 
> for the pipeline, and the machine list size could be smaller than expected. I 
> will post a fix soon. 
> {code}
> SCMContainerPlacementCapacity#chooseNode
>  // There is a possibility that both numbers will be same.
>  // if that is so, we just return the node.
>  if (firstNodeNdx == secondNodeNdx) {
>   return healthyNodes.get(firstNodeNdx);
>  }
> {code}
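The intended behavior can be sketched as follows (illustrative only, not the actual SCMContainerPlacementCapacity code; Strings stand in for datanode IDs): the chosen node must be removed from the healthy list even when the two random indices coincide.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch: every return path removes the chosen node from the
// pool, so the same datanode cannot be picked again in a later round.
class PlacementSketch {
  private final Random rand = new Random();

  String chooseNode(List<String> healthyNodes) {
    int firstNodeNdx = rand.nextInt(healthyNodes.size());
    int secondNodeNdx = rand.nextInt(healthyNodes.size());
    if (firstNodeNdx == secondNodeNdx) {
      // remove() both returns the node and takes it out of the pool,
      // unlike the original get()-and-return.
      return healthyNodes.remove(firstNodeNdx);
    }
    // The real implementation compares the two candidates by capacity;
    // here we just take the first, again removing it from the pool.
    String chosen = healthyNodes.get(firstNodeNdx);
    healthyNodes.remove(firstNodeNdx);
    return chosen;
  }
}
```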






[jira] [Commented] (HDFS-11859) Ozone : separate blockLocationProtocol out of containerLocationProtocol

2017-05-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021468#comment-16021468
 ] 

Chen Liang commented on HDFS-11859:
---

Thanks [~xyao] for updating the patch! +1 on v007 patch. 

> Ozone : separate blockLocationProtocol out of containerLocationProtocol
> ---
>
> Key: HDFS-11859
> URL: https://issues.apache.org/jira/browse/HDFS-11859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11859-HDFS-7240.001.patch, 
> HDFS-11859-HDFS-7240.002.patch, HDFS-11859-HDFS-7240.003.patch, 
> HDFS-11859-HDFS-7240.004.patch, HDFS-11859-HDFS-7240.005.patch, 
> HDFS-11859-HDFS-7240.006.patch, HDFS-11859-HDFS-7240.007.patch
>
>
> Currently StorageContainerLocationProtocol contains two types of operations: 
> container-related operations and block-related operations. Although there is 
> {{ScmBlockLocationProtocol}} for block operations, only 
> {{StorageContainerLocationProtocolServerSideTranslatorPB}} makes the 
> distinction. 
> This JIRA tries to make the separation complete and thorough for all places.






[jira] [Commented] (HDFS-11727) Block Storage: Retry Blocks should be requeued when cblock is restarted

2017-05-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021542#comment-16021542
 ] 

Chen Liang commented on HDFS-11727:
---

v002 patch looks pretty good to me, will commit this shortly. This is a fairly 
complex change, thanks [~msingh] for the contribution!

> Block Storage: Retry Blocks should be requeued when cblock is restarted
> ---
>
> Key: HDFS-11727
> URL: https://issues.apache.org/jira/browse/HDFS-11727
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11727-HDFS-7240.001.patch, 
> HDFS-11727-HDFS-7240.002.patch
>
>
> Currently, blocks which could not be written to a container because of some 
> issue are maintained in retryLog files. However, these files are not requeued 
> after restart.
> This change will requeue retry log files on restart, fix some other minor 
> issues with retry logs, and add some new counters.
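The requeue-on-restart step can be sketched as follows (illustrative only: the directory layout, class, and method names are assumptions, not the actual CBlock implementation):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch: on startup, scan the retry-log directory and requeue
// every leftover log file so the blocks it records are retried again.
class RetryLogRequeueSketch {
  private final Queue<Path> retryQueue = new ArrayDeque<>();

  int requeueOnRestart(Path retryLogDir) throws IOException {
    int count = 0;
    try (DirectoryStream<Path> logs = Files.newDirectoryStream(retryLogDir)) {
      for (Path log : logs) {
        retryQueue.add(log);  // blocks in this file will be retried
        count++;
      }
    }
    return count;
  }

  int pendingLogs() {
    return retryQueue.size();
  }
}
```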






[jira] [Created] (HDFS-11872) Ozone : implement StorageContainerManager#getStorageContainerLocations

2017-05-23 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11872:
-

 Summary: Ozone : implement 
StorageContainerManager#getStorageContainerLocations
 Key: HDFS-11872
 URL: https://issues.apache.org/jira/browse/HDFS-11872
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Chen Liang
Assignee: Chen Liang


We should implement {{StorageContainerManager#getStorageContainerLocations}}. 

Although the comment says it will be moved to KSM, the functionality of 
container lookup by name should actually be part of SCM.






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Status: Patch Available  (was: In Progress)

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.002.patch

I'm taking back the statement that this requires HDFS-11872 to get in first; I 
was confused by the purpose of {{getStorageContainerLocations}}. 

Post v002 patch, with a test changed. The test tries to write to the stream for 
putKey, but I found it hard to verify the writes since getKey is not 
implemented.

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Work started] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11780 started by Chen Liang.
-
> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.001.patch

Attach initial patch. 

Please note that the test is incomplete; it will not work until HDFS-11872 is 
fixed. I'm leaving this JIRA in progress for now for early code reviews, and 
will update the status after HDFS-11872 gets in.

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Resolved] (HDFS-11872) Ozone : implement StorageContainerManager#getStorageContainerLocations

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang resolved HDFS-11872.
---
Resolution: Won't Fix

I misread {{getStorageContainerLocations}} as the lookup of a container given 
the container's name, but it turns out this looks up a container given a 
specific key. In that case this should probably indeed move to KSM. We may need 
to revisit this later, but will not 'fix' it for the time being.

> Ozone : implement StorageContainerManager#getStorageContainerLocations
> --
>
> Key: HDFS-11872
> URL: https://issues.apache.org/jira/browse/HDFS-11872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
>
> We should implement {{StorageContainerManager#getStorageContainerLocations}}.
> Although the comment says it will be moved to KSM, the functionality of 
> container lookup by name should actually be part of SCM.






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.002.patch

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: (was: HDFS-11780-HDFS-7240.002.patch)

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Status: Open  (was: Patch Available)

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Status: Patch Available  (was: Open)

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Commented] (HDFS-11886) Ozone : improve error handling for putkey operation

2017-05-26 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026534#comment-16026534
 ] 

Chen Liang commented on HDFS-11886:
---

Thanks [~anu] for looking at this! No decision has been made yet for this 
JIRA; any thoughts are more than welcome.

To make sure we are on the same page, did you mean we could have the client 
send a "commit" message to KSM after the key is written to the datanode, and 
only then have KSM write the key to ksm.db? 

If I understand this correctly, one issue with this approach is that for any 
successful putKey there will always be two calls to KSM: one to allocate the 
block, the other to commit the key. If putKey fails, there is no commit and 
only the first call. With the revert-failed-key approach, there is always one 
call to KSM for a successful putKey (to allocate the block), but two calls for 
a failed putKey (the second to revert the key). Assuming putKey is more likely 
to succeed than fail, this seems a +1 for revert-on-failure.

However, another question is how we can be sure a key is finalized at all. For 
the commit-on-success approach this seems easy: unless the success flag is set, 
the key is considered not ready (similar to under construction). For the 
revert-on-failure approach, there is a temporary window where a key has 
actually failed but, before it is reverted, may already have been read by 
someone. So this seems a +1 for the commit-on-success approach.

In short, this probably comes down to whether we favor fewer RPC calls or a 
reliable getKey at any time.
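One way to picture the commit-on-success alternative weighed above is a tiny key state machine (illustrative only: the class, state, and method names are assumptions, not the KSM implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of commit-on-success: keys allocated in the KSM
// metastore start OPEN and only become readable after an explicit commit,
// so getKey never observes a half-written key.
class KeyLifecycleSketch {
  enum State { OPEN, COMMITTED }

  private final Map<String, State> metastore = new HashMap<>();

  // Call 1: allocate the block; the key is not yet visible to getKey.
  void allocateKey(String key) {
    metastore.put(key, State.OPEN);
  }

  // Call 2: the client reports the data was written to the datanode.
  void commitKey(String key) {
    metastore.put(key, State.COMMITTED);
  }

  boolean isReadable(String key) {
    return metastore.get(key) == State.COMMITTED;
  }
}
```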

> Ozone : improve error handling for putkey operation
> ---
>
> Key: HDFS-11886
> URL: https://issues.apache.org/jira/browse/HDFS-11886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>
> Ozone's putKey operation involves a couple of steps:
> 1. KSM calls allocateBlock on SCM and writes this info to KSM's local 
> metastore.
> 2. The allocated block is returned to the client; the client checks whether 
> the container needs to be created on the datanode and, if so, creates it.
> 3. The client writes the data to the container.
> It is possible that 1 succeeded but 2 or 3 failed; in this case there will 
> be an entry in KSM's local metastore, but the key is actually nowhere to be 
> found. We need to revert 1 if 2 or 3 failed. 
> To resolve this, we need at least two things to be implemented first:
> 1. deleteKey() needs to be added to KSM. 
> 2. We also need container reports to be implemented so that SCM can 
> track whether the container was actually added.






[jira] [Updated] (HDFS-11853) Ozone: KSM: Add getKey

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11853:
--
Attachment: HDFS-11853-HDFS-7240.002.patch

The failed test {{TestKeySpaceManager}} is related, fixed in v002 patch.

> Ozone: KSM: Add getKey 
> ---
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11853-HDFS-7240.001.patch, 
> HDFS-11853-HDFS-7240.002.patch
>
>
> Support read the content (object) of the key.






[jira] [Created] (HDFS-11891) DU#refresh should print the path of the directory when an exception is caught

2017-05-26 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11891:
-

 Summary: DU#refresh should print the path of the directory when an 
exception is caught
 Key: HDFS-11891
 URL: https://issues.apache.org/jira/browse/HDFS-11891
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Liang
Assignee: Chen Liang
Priority: Minor


the refresh() method of DU is as follows:
{code}
  @Override
  protected synchronized void refresh() {
try {
  duShell.startRefresh();
} catch (IOException ioe) {
  LOG.warn("Could not get disk usage information", ioe);
}
  }
{code}
The log warning message should also print out the directory that failed.
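A minimal sketch of the proposed change (illustrative only: DuRefreshSketch is not the real DU class, and the availability of a getDirPath()-style accessor on DU is an assumption):

```java
import java.io.IOException;

// Illustrative sketch: include the directory path in the warning so the
// failing volume can be identified from the log.
class DuRefreshSketch {
  private final String dirPath;

  DuRefreshSketch(String dirPath) {
    this.dirPath = dirPath;
  }

  String getDirPath() {
    return dirPath;
  }

  synchronized String refreshMessage() {
    try {
      throw new IOException("du: cannot access");  // simulate a failed refresh
    } catch (IOException ioe) {
      // the path makes the warning actionable
      return "Could not get disk usage information for path "
          + getDirPath() + ": " + ioe.getMessage();
    }
  }
}
```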






[jira] [Updated] (HDFS-11853) Ozone: KSM: Add getKey

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11853:
--
Attachment: HDFS-11853-HDFS-7240.001.patch

Post initial v001 patch. This patch also includes a number of misc changes 
related to key operations, e.g. renaming "keyBlock" to just "key", adding 
exception codes, etc.

> Ozone: KSM: Add getKey 
> ---
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11853-HDFS-7240.001.patch
>
>
> Support read the content (object) of the key.






[jira] [Updated] (HDFS-11891) DU#refresh should print the path of the directory when an exception is caught

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11891:
--
Status: Patch Available  (was: Open)

> DU#refresh should print the path of the directory when an exception is caught
> -
>
> Key: HDFS-11891
> URL: https://issues.apache.org/jira/browse/HDFS-11891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11891.001.patch
>
>
> the refresh() method of DU is as follows:
> {code}
>   @Override
>   protected synchronized void refresh() {
> try {
>   duShell.startRefresh();
> } catch (IOException ioe) {
>   LOG.warn("Could not get disk usage information", ioe);
> }
>   }
> {code}
> The log warning message should also print out the directory that failed.






[jira] [Updated] (HDFS-11891) DU#refresh should print the path of the directory when an exception is caught

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11891:
--
Attachment: HDFS-11891.001.patch

> DU#refresh should print the path of the directory when an exception is caught
> -
>
> Key: HDFS-11891
> URL: https://issues.apache.org/jira/browse/HDFS-11891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11891.001.patch
>
>
> the refresh() method of DU is as follows:
> {code}
>   @Override
>   protected synchronized void refresh() {
> try {
>   duShell.startRefresh();
> } catch (IOException ioe) {
>   LOG.warn("Could not get disk usage information", ioe);
> }
>   }
> {code}
> The log warning message should also print out the directory that failed.






[jira] [Updated] (HDFS-11853) Ozone: KSM: Add getKey

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11853:
--
Status: Patch Available  (was: Open)

> Ozone: KSM: Add getKey 
> ---
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11853-HDFS-7240.001.patch
>
>
> Support reading the content (object) of the key.






[jira] [Comment Edited] (HDFS-11853) Ozone: KSM: Add getKey

2017-05-26 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027049#comment-16027049
 ] 

Chen Liang edited comment on HDFS-11853 at 5/27/17 12:19 AM:
-

Thanks [~xyao] for the comments! All addressed in the v003 patch.

I indeed wanted to put {{containerKey}}. Somehow {{args.getKey()}} also seems to 
work. Actually, it seems the only thing that matters is setting {{KeyData}} 
correctly; whatever string is passed into {{LengthInputStream}} makes no 
difference...


was (Author: vagarychen):
Thanks [~xyao] for the comments! all addressed in v003 patch. I indeed wanted 
to put {{containerKey}}. Somehow seems {{args.getKey()}} also worked...

> Ozone: KSM: Add getKey 
> ---
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11853-HDFS-7240.001.patch, 
> HDFS-11853-HDFS-7240.002.patch, HDFS-11853-HDFS-7240.003.patch
>
>
> Support reading the content (object) of the key.






[jira] [Updated] (HDFS-11853) Ozone: KSM: Add getKey

2017-05-26 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11853:
--
Attachment: HDFS-11853-HDFS-7240.003.patch

Thanks [~xyao] for the comments! All addressed in the v003 patch. I indeed 
wanted to put {{containerKey}}. Somehow {{args.getKey()}} also seems to work...

> Ozone: KSM: Add getKey 
> ---
>
> Key: HDFS-11853
> URL: https://issues.apache.org/jira/browse/HDFS-11853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11853-HDFS-7240.001.patch, 
> HDFS-11853-HDFS-7240.002.patch, HDFS-11853-HDFS-7240.003.patch
>
>
> Support reading the content (object) of the key.






[jira] [Updated] (HDFS-11886) Ozone : improve error handling for putkey operation

2017-05-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11886:
--
Summary: Ozone : improve error handling for putkey operation  (was: Ozone : 
improving error handling for putkey operation)

> Ozone : improve error handling for putkey operation
> ---
>
> Key: HDFS-11886
> URL: https://issues.apache.org/jira/browse/HDFS-11886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>
> Ozone's putKey operation involves a couple of steps:
> 1. KSM calls allocateBlock to SCM and writes this info to KSM's local 
> metastore.
> 2. The allocated block gets returned to the client; the client checks whether 
> the container needs to be created on the datanode and, if so, creates it.
> 3. The client writes the data to the container.
> It is possible that step 1 succeeded but step 2 or 3 failed; in this case 
> there will be an entry in KSM's local metastore, but the key is actually 
> nowhere to be found. We need to revert step 1 if step 2 or 3 fails.
> To resolve this, we need at least two things to be implemented first:
> 1. deleteKey() needs to be added to KSM.
> 2. Container reports need to be implemented so that SCM can track whether the 
> container was actually added.
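The revert described above can be sketched as a compensating action around the failing steps. All names below ({{allocateKey}}, {{deleteKey}}, {{writeToContainer}}, the in-memory metastore) are illustrative stand-ins, not the actual KSM/SCM API:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrates "revert step 1 if step 2 or 3 fails": the metastore entry written
// in step 1 is removed when the container write does not succeed, so no orphan
// key is left behind. All names here are hypothetical stand-ins.
public class PutKeySketch {
  static final Map<String, String> ksmMetastore = new HashMap<>();

  static void allocateKey(String key, String blockId) {          // step 1
    ksmMetastore.put(key, blockId);
  }

  static void deleteKey(String key) {                            // compensating action
    ksmMetastore.remove(key);
  }

  static void writeToContainer(byte[] data) throws IOException { // steps 2 and 3
    throw new IOException("container creation failed");          // simulated failure
  }

  static void putKey(String key, byte[] data) throws IOException {
    allocateKey(key, "block-1");
    try {
      writeToContainer(data);
    } catch (IOException ioe) {
      deleteKey(key);  // revert step 1 so metastore and containers stay consistent
      throw ioe;
    }
  }

  public static void main(String[] args) {
    try {
      putKey("volume/bucket/key1", new byte[]{1, 2, 3});
    } catch (IOException expected) {
      // the write failure still propagates to the caller
    }
    if (ksmMetastore.containsKey("volume/bucket/key1")) {
      throw new AssertionError("orphan metastore entry left behind");
    }
    System.out.println("metastore is clean after failed putKey");
  }
}
```

As the description notes, this only works end to end once deleteKey() exists in KSM and container reports let SCM confirm whether the container was actually added.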






[jira] [Updated] (HDFS-11886) Ozone : improving error handling for putkey operation

2017-05-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11886:
--
Description: 
Ozone's putKey operation involves a couple of steps:
1. KSM calls allocateBlock to SCM and writes this info to KSM's local metastore.
2. The allocated block gets returned to the client; the client checks whether 
the container needs to be created on the datanode and, if so, creates it.
3. The client writes the data to the container.
It is possible that step 1 succeeded but step 2 or 3 failed; in this case there 
will be an entry in KSM's local metastore, but the key is actually nowhere to be 
found. We need to revert step 1 if step 2 or 3 fails.

To resolve this, we need at least two things to be implemented first:
1. deleteKey() needs to be added to KSM.
2. Container reports need to be implemented so that SCM can track whether the 
container was actually added.



  was:
Ozone's putKey operation involves a couple of steps:
1. KSM calls allocateBlock to SCM and writes this info to KSM's local metastore.
2. The allocated block gets returned to the client; the client checks whether 
the container needs to be created on the datanode and, if so, creates it.
3. The client writes the data to the container.

It is possible that step 1 succeeded but step 2 or 3 failed; in this case there 
will be an entry in KSM's local metastore, but the key is actually nowhere to be 
found. We need to revert step 1 if step 2 or 3 fails. This can be done with a 
deleteKey() call to KSM.




> Ozone : improving error handling for putkey operation
> -
>
> Key: HDFS-11886
> URL: https://issues.apache.org/jira/browse/HDFS-11886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>
> Ozone's putKey operation involves a couple of steps:
> 1. KSM calls allocateBlock to SCM and writes this info to KSM's local 
> metastore.
> 2. The allocated block gets returned to the client; the client checks whether 
> the container needs to be created on the datanode and, if so, creates it.
> 3. The client writes the data to the container.
> It is possible that step 1 succeeded but step 2 or 3 failed; in this case 
> there will be an entry in KSM's local metastore, but the key is actually 
> nowhere to be found. We need to revert step 1 if step 2 or 3 fails.
> To resolve this, we need at least two things to be implemented first:
> 1. deleteKey() needs to be added to KSM.
> 2. Container reports need to be implemented so that SCM can track whether the 
> container was actually added.






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.008.patch

Thanks [~xyao] for the comments! post v008 patch. Also filed HDFS-11886 to 
track the error handling issues.

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch, HDFS-11780-HDFS-7240.003.patch, 
> HDFS-11780-HDFS-7240.004.patch, HDFS-11780-HDFS-7240.005.patch, 
> HDFS-11780-HDFS-7240.006.patch, HDFS-11780-HDFS-7240.007.patch, 
> HDFS-11780-HDFS-7240.008.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.007.patch

Earlier patches were passing the wrong first argument to the 
{{ChunkOutputStream}} constructor; posted the v007 patch to fix this.
(I have to say the variable name "containerKey" is pretty ambiguous...)

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch, HDFS-11780-HDFS-7240.003.patch, 
> HDFS-11780-HDFS-7240.004.patch, HDFS-11780-HDFS-7240.005.patch, 
> HDFS-11780-HDFS-7240.006.patch, HDFS-11780-HDFS-7240.007.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Created] (HDFS-11886) Ozone : improving error handling for putkey operation

2017-05-25 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11886:
-

 Summary: Ozone : improving error handling for putkey operation
 Key: HDFS-11886
 URL: https://issues.apache.org/jira/browse/HDFS-11886
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Chen Liang


Ozone's putKey operation involves a couple of steps:
1. KSM calls allocateBlock to SCM and writes this info to KSM's local metastore.
2. The allocated block gets returned to the client; the client checks whether 
the container needs to be created on the datanode and, if so, creates it.
3. The client writes the data to the container.

It is possible that step 1 succeeded but step 2 or 3 failed; in this case there 
will be an entry in KSM's local metastore, but the key is actually nowhere to be 
found. We need to revert step 1 if step 2 or 3 fails. This can be done with a 
deleteKey() call to KSM.








[jira] [Commented] (HDFS-11822) Block Storage: Fix TestCBlockCLI, failing because of " Address already in use"

2017-05-24 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023901#comment-16023901
 ] 

Chen Liang commented on HDFS-11822:
---

Thanks [~msingh] for the patch! One question though: it looks like 
{{cblockServiceRpcAddress}} and {{cblockServerRpcAddress}} in 
{{CBlockManager.java}} are only used to print the two log messages? If so, there 
seems to be no need to keep them as class member variables. Or am I missing 
something here?

> Block Storage: Fix TestCBlockCLI, failing because of " Address already in use"
> --
>
> Key: HDFS-11822
> URL: https://issues.apache.org/jira/browse/HDFS-11822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11822-HDFS-7240.001.patch, 
> HDFS-11822-HDFS-7240.002.patch
>
>
> TestCBlockCLI is failing because of bind error.
> https://builds.apache.org/job/PreCommit-HDFS-Build/19429/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {code}
> org.apache.hadoop.cblock.TestCBlockCLI  Time elapsed: 0.668 sec  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:9810] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:543)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1033)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2791)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:960)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:420)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:341)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:802)
>   at 
> org.apache.hadoop.cblock.CBlockManager.startRpcServer(CBlockManager.java:215)
>   at org.apache.hadoop.cblock.CBlockManager.<init>(CBlockManager.java:131)
>   at org.apache.hadoop.cblock.TestCBlockCLI.setup(TestCBlockCLI.java:57)
> {code}
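A common way to make such tests immune to port collisions is to bind to port 0 and let the OS pick a free ephemeral port, then read the chosen port back. A sketch of the idea (this is not the actual CBlock configuration wiring):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Binding to port 0 lets the OS assign a free ephemeral port, avoiding
// "Address already in use" when tests run concurrently on a fixed port
// such as 9810. The real fix would feed the chosen port back into the
// CBlock configuration; this only demonstrates the binding technique.
public class EphemeralPortSketch {
  public static void main(String[] args) throws IOException {
    try (ServerSocket socket = new ServerSocket()) {
      socket.bind(new InetSocketAddress("127.0.0.1", 0)); // port 0 = any free port
      int actualPort = socket.getLocalPort();             // the port the OS chose
      if (actualPort <= 0) {
        throw new AssertionError("expected a positive assigned port");
      }
      System.out.println("bound to ephemeral port " + actualPort);
    }
  }
}
```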






[jira] [Updated] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11780:
--
Attachment: HDFS-11780-HDFS-7240.004.patch

Thanks [~xyao] for the comments as well as the offline discussion! All the 
comments are addressed in v004 patch. 

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-11780-HDFS-7240.001.patch, 
> HDFS-11780-HDFS-7240.002.patch, HDFS-11780-HDFS-7240.003.patch, 
> HDFS-11780-HDFS-7240.004.patch
>
>
> Support putting a key into an Ozone bucket. 






[jira] [Updated] (HDFS-11832) Switch leftover logs to slf4j format in BlockManager.java

2017-05-18 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11832:
--
Attachment: HDFS-11832.004.patch

Thanks [~10075197] for the patches! and thanks [~ajisakaa] for the comments!

Posted the v004 patch to add back guards in a few places where there seem to be 
potentially expensive operations. It is based on Hui Xu's v003 patch.

> Switch leftover logs to slf4j format in BlockManager.java
> -
>
> Key: HDFS-11832
> URL: https://issues.apache.org/jira/browse/HDFS-11832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0, 2.8.0, 3.0.0-alpha1
>Reporter: Hui Xu
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11832.001.patch, HDFS-11832.002.patch, 
> HDFS-11832.003.patch, HDFS-11832.004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HDFS-7706 Switch BlockManager logging to use slf4j. But the logging formats 
> were not modified appropriately. For example:
>   if (LOG.isDebugEnabled()) {
> LOG.debug("blocks = " + java.util.Arrays.asList(blocks));
>   }
> These codes should be modified to:
>   LOG.debug("blocks = {}", java.util.Arrays.asList(blocks));






[jira] [Commented] (HDFS-11832) Switch leftover logs to slf4j format in BlockManager.java

2017-05-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017734#comment-16017734
 ] 

Chen Liang commented on HDFS-11832:
---

Seems our comments interleaved; I did not see your comment while writing the one 
above :)

> Switch leftover logs to slf4j format in BlockManager.java
> -
>
> Key: HDFS-11832
> URL: https://issues.apache.org/jira/browse/HDFS-11832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0, 2.8.0, 3.0.0-alpha1
>Reporter: Hui Xu
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11832.001.patch, HDFS-11832.002.patch, 
> HDFS-11832.003.patch, HDFS-11832.004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HDFS-7706 Switch BlockManager logging to use slf4j. But the logging formats 
> were not modified appropriately. For example:
>   if (LOG.isDebugEnabled()) {
> LOG.debug("blocks = " + java.util.Arrays.asList(blocks));
>   }
> These codes should be modified to:
>   LOG.debug("blocks = {}", java.util.Arrays.asList(blocks));






[jira] [Commented] (HDFS-11835) Block Storage: Overwrite of blocks fails

2017-05-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017705#comment-16017705
 ] 

Chen Liang commented on HDFS-11835:
---

I have committed this to the feature branch, thanks [~msingh] for the 
contribution!

> Block Storage: Overwrite of blocks fails
> 
>
> Key: HDFS-11835
> URL: https://issues.apache.org/jira/browse/HDFS-11835
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11835-HDFS-7240.001.patch
>
>
> Overwrite of blocks fails because the "OverWriteRequested" flag is not set 
> during chunk creation.
> {code}
> 2017-05-16 22:33:23,909 [nioEventLoopGroup-5-2] ERROR  - Rejecting write 
> chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='1_chunk, offset=0, len=
> 11933}
> {code}
> This flag needs to be set here 
> {code}
> public static void writeSmallFile(XceiverClientSpi client, String 
> containerName,
>   String key, byte[] data, String traceID) throws IOException {
> .
> ChunkInfo chunk = ChunkInfo
> .newBuilder()
> .setChunkName(key + "_chunk")
> .setOffset(0)
> .setLen(data.length)
> .build();
> {code}






[jira] [Updated] (HDFS-11835) Block Storage: Overwrite of blocks fails

2017-05-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11835:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Block Storage: Overwrite of blocks fails
> 
>
> Key: HDFS-11835
> URL: https://issues.apache.org/jira/browse/HDFS-11835
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11835-HDFS-7240.001.patch
>
>
> Overwrite of blocks fails because the "OverWriteRequested" flag is not set 
> during chunk creation.
> {code}
> 2017-05-16 22:33:23,909 [nioEventLoopGroup-5-2] ERROR  - Rejecting write 
> chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='1_chunk, offset=0, len=
> 11933}
> {code}
> This flag needs to be set here 
> {code}
> public static void writeSmallFile(XceiverClientSpi client, String 
> containerName,
>   String key, byte[] data, String traceID) throws IOException {
> .
> ChunkInfo chunk = ChunkInfo
> .newBuilder()
> .setChunkName(key + "_chunk")
> .setOffset(0)
> .setLen(data.length)
> .build();
> {code}






[jira] [Commented] (HDFS-11832) Switch leftover logs to slf4j format in BlockManager.java

2017-05-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017724#comment-16017724
 ] 

Chen Liang commented on HDFS-11832:
---

Thanks [~10075197] for the followup. But as [~ajisakaa] mentioned, I think the 
variable still gets evaluated regardless of the log level. See 
[this|http://stackoverflow.com/questions/8444266/even-with-slf4j-should-you-guard-your-logging]
 discussion.
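A self-contained illustration of that point (a stand-in logger is used here so no slf4j dependency is needed; the evaluation behavior shown matches parameterized logging, where only the formatting, not the argument evaluation, is deferred):

```java
// With parameterized logging, LOG.debug("blocks = {}", expensive()) still
// evaluates expensive() before the call even when DEBUG is off; only the
// string formatting is skipped. A guard avoids the evaluation entirely.
public class Slf4jGuardSketch {
  static int evaluations = 0;
  static final boolean DEBUG_ENABLED = false;  // DEBUG level is off

  // Stands in for an expensive argument such as java.util.Arrays.asList(blocks).
  static String expensiveSummary() {
    evaluations++;
    return "blocks = [...]";
  }

  // Mimics a parameterized LOG.debug: formatting happens only when enabled.
  static void debug(String format, String arg) {
    if (DEBUG_ENABLED) {
      System.out.println(format.replace("{}", arg));
    }
  }

  public static void main(String[] args) {
    // Unguarded: the argument expression runs even though DEBUG is off.
    debug("{}", expensiveSummary());
    if (evaluations != 1) {
      throw new AssertionError("expected the argument to be evaluated once");
    }

    // Guarded: the expensive call is skipped entirely.
    if (DEBUG_ENABLED) {
      debug("{}", expensiveSummary());
    }
    if (evaluations != 1) {
      throw new AssertionError("guard failed to skip the evaluation");
    }
    System.out.println("argument evaluated " + evaluations + " time(s)");
    // prints: argument evaluated 1 time(s)
  }
}
```

This is why the v004 patch keeps guards only around the expensive arguments while dropping them for cheap ones.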

> Switch leftover logs to slf4j format in BlockManager.java
> -
>
> Key: HDFS-11832
> URL: https://issues.apache.org/jira/browse/HDFS-11832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0, 2.8.0, 3.0.0-alpha1
>Reporter: Hui Xu
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-11832.001.patch, HDFS-11832.002.patch, 
> HDFS-11832.003.patch, HDFS-11832.004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HDFS-7706 Switch BlockManager logging to use slf4j. But the logging formats 
> were not modified appropriately. For example:
>   if (LOG.isDebugEnabled()) {
> LOG.debug("blocks = " + java.util.Arrays.asList(blocks));
>   }
> These codes should be modified to:
>   LOG.debug("blocks = {}", java.util.Arrays.asList(blocks));






[jira] [Commented] (HDFS-11803) Add -v option for du command to show header line

2017-05-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018015#comment-16018015
 ] 

Chen Liang commented on HDFS-11803:
---

thanks [~xiaobingo] for v002 patch! LGTM, +1

> Add -v option for du command to show header line
> 
>
> Key: HDFS-11803
> URL: https://issues.apache.org/jira/browse/HDFS-11803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11803.000.patch, HDFS-11803.001.patch, 
> HDFS-11803.002.patch
>
>
> Like the hdfs dfs -count command, it's better to add -v for the du command to 
> show a header line.
> Without -v,
> $ hdfs dfs -du -h -s /tmp/parent
> {noformat}
> 1 G  1 G  /tmp/parent
> {noformat}
> With -v,
> $ hdfs dfs -du -h -s -v /tmp/parent
> {noformat}
> SIZE DISK_SPACE_CONSUMED_WITH_ALL_REPLICAS FULL_PATH_NAME
> 1 G   1 G/tmp/parent
> {noformat}
> $ hdfs dfs -count -q -v  -h -x  /tmp/parent
> {noformat}
> QUOTA   REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTADIR_COUNT   
> FILE_COUNT   CONTENT_SIZE PATHNAME
> 10   750 G49 G21  
>   1 G /tmp/parent
> {noformat}






[jira] [Updated] (HDFS-11857) Ozone : need to refactor StorageContainerLocationProtocolServerSideTranslatorPB

2017-05-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11857:
--
Attachment: HDFS-11857-HDFS-7240.001.patch

> Ozone : need to refactor 
> StorageContainerLocationProtocolServerSideTranslatorPB
> ---
>
> Key: HDFS-11857
> URL: https://issues.apache.org/jira/browse/HDFS-11857
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11857-HDFS-7240.001.patch
>
>
> Currently, StorageContainerLocationProtocolServerSideTranslatorPB has two 
> protocol impls:
> {{StorageContainerLocationProtocol impl}}
> {{ScmBlockLocationProtocol blockImpl}}.
> The class provides container-related services by invoking {{impl}}, and 
> block-related services by invoking {{blockImpl}}. Namely, on the server side, 
> the implementation distinguishes between "container protocol" and "block 
> protocol".
> An issue is that, currently, nowhere except the server side are "container 
> protocol" and "block protocol" treated as different. More specifically, 
> StorageContainerLocationProtocol.proto still includes both container 
> operations and block operations in a single protocol. As a result of this 
> difference, it is difficult to implement certain APIs (e.g. putKey) properly 
> from the client side.
> This JIRA merges "block protocol" back into "container protocol" in 
> StorageContainerLocationProtocolServerSideTranslatorPB, to unblock the 
> implementation of other APIs on the client side.
> Please note that, in the long run, separating these two protocols does seem 
> to be the right way. This JIRA is only a temporary solution to unblock 
> developing other APIs. We will need to revisit these protocols in the future.
> Thanks [~xyao] for the offline discussion.






[jira] [Created] (HDFS-11857) Ozone : need to refactor StorageContainerLocationProtocolServerSideTranslatorPB

2017-05-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11857:
-

 Summary: Ozone : need to refactor 
StorageContainerLocationProtocolServerSideTranslatorPB
 Key: HDFS-11857
 URL: https://issues.apache.org/jira/browse/HDFS-11857
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Currently, StorageContainerLocationProtocolServerSideTranslatorPB has two 
protocol impls:
{{StorageContainerLocationProtocol impl}}
{{ScmBlockLocationProtocol blockImpl}}.
The class provides container-related services by invoking {{impl}}, and 
block-related services by invoking {{blockImpl}}. Namely, on the server side, 
the implementation distinguishes between "container protocol" and "block 
protocol".

An issue is that, currently, nowhere except the server side are "container 
protocol" and "block protocol" treated as different. More specifically, 
StorageContainerLocationProtocol.proto still includes both container operations 
and block operations in a single protocol. As a result of this difference, it 
is difficult to implement certain APIs (e.g. putKey) properly from the client 
side.

This JIRA merges "block protocol" back into "container protocol" in 
StorageContainerLocationProtocolServerSideTranslatorPB, to unblock the 
implementation of other APIs on the client side.

Please note that, in the long run, separating these two protocols does seem to 
be the right way. This JIRA is only a temporary solution to unblock developing 
other APIs. We will need to revisit these protocols in the future.







[jira] [Updated] (HDFS-11857) Ozone : need to refactor StorageContainerLocationProtocolServerSideTranslatorPB

2017-05-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11857:
--
Status: Patch Available  (was: Open)

> Ozone : need to refactor 
> StorageContainerLocationProtocolServerSideTranslatorPB
> ---
>
> Key: HDFS-11857
> URL: https://issues.apache.org/jira/browse/HDFS-11857
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11857-HDFS-7240.001.patch
>
>
> Currently, StorageContainerLocationProtocolServerSideTranslatorPB has two 
> protocol impls:
> {{StorageContainerLocationProtocol impl}}
> {{ScmBlockLocationProtocol blockImpl}}.
> The class provides container-related services by invoking {{impl}}, and 
> block-related services by invoking {{blockImpl}}. Namely, on the server side, 
> the implementation distinguishes between "container protocol" and "block 
> protocol".
> An issue is that, currently, nowhere except the server side are "container 
> protocol" and "block protocol" treated as different. More specifically, 
> StorageContainerLocationProtocol.proto still includes both container 
> operations and block operations in a single protocol. As a result of this 
> difference, it is difficult to implement certain APIs (e.g. putKey) properly 
> from the client side.
> This JIRA merges "block protocol" back into "container protocol" in 
> StorageContainerLocationProtocolServerSideTranslatorPB, to unblock the 
> implementation of other APIs on the client side.
> Please note that, in the long run, separating these two protocols does seem 
> to be the right way. This JIRA is only a temporary solution to unblock 
> developing other APIs. We will need to revisit these protocols in the future.
> Thanks [~xyao] for the offline discussion.





