[jira] [Work started] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-08-30 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13868 started by Pranay Singh.
---
> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snaptest",
> # allowSnapshot() on it, and create a snapshot named "snap3" for it to
> # reach the NPE.
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotnam=snap2&snapshotname=snap3"
> # Note that I intentionally typed the wrong parameter name for
> # "oldsnapshotname" above to cause the NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=&snapshotname=snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&snapshotname=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-08-30 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-13868:

Attachment: HDFS-13868.001.patch

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snaptest",
> # allowSnapshot() on it, and create a snapshot named "snap3" for it to
> # reach the NPE.
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotnam=snap2&snapshotname=snap3"
> # Note that I intentionally typed the wrong parameter name for
> # "oldsnapshotname" above to cause the NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=&snapshotname=snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&snapshotname=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-337) keys created with key name having special character/wildcard should not be allowed

2018-08-30 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi reassigned HDDS-337:
---

Assignee: Dinesh Chitlangia  (was: Nilotpal Nandi)

> keys created with key name having special character/wildcard should not be 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of the command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - 

[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not be allowed

2018-08-30 Thread Nilotpal Nandi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598248#comment-16598248
 ] 

Nilotpal Nandi commented on HDDS-337:
-

[~dineshchitlangia] - assigning it to you.

> keys created with key name having special character/wildcard should not be 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of the command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:18:14 INFO 

[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598222#comment-16598222
 ] 

genericqa commented on HDDS-98:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 4 
fixed = 0 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
32s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | 

[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-08-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598218#comment-16598218
 ] 

genericqa commented on HDDS-351:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 2 new + 15 unchanged - 
0 fixed = 17 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  4s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rest.TestOzoneRestClient |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-08-30 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598193#comment-16598193
 ] 

Yiqun Lin commented on HDFS-13815:
--

Thanks [~RANith] for addressing [~brahmareddy]'s comment. +1 pending Jenkins.
Uploading the same patch to re-trigger Jenkins.

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch
>
>
> No check is being done on the order option.
> The command reports that the mount table was successfully updated even if the 
> order option is missing or misspelled, but the mount table is not updated.
> Execute the dfsrouteradmin update command with the scenarios below:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected Result:
> An exception on the console, as the order option is missing/not written 
> properly.
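
For context, a minimal sketch of the kind of argument check the patch
presumably adds, validating the -order flag against the router's
DestinationOrder values (the helper shape is an assumption, not the actual
patch):

{code:java}
// Hypothetical CLI helper; only DestinationOrder is taken from the RBF code.
private DestinationOrder parseOrder(String flag, String value) {
  if (!"-order".equals(flag)) {
    throw new IllegalArgumentException(
        "Unknown option " + flag + "; expected -order <RANDOM|HASH|...>");
  }
  try {
    return DestinationOrder.valueOf(value.toUpperCase());
  } catch (IllegalArgumentException e) {
    throw new IllegalArgumentException("Unknown order value: " + value, e);
  }
}
{code}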



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13815) RBF: Add check to order command

2018-08-30 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13815:
-
Attachment: HDFS-13815-007.patch

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch
>
>
> No check is being done on the order option.
> The command reports that the mount table was successfully updated even if the 
> order option is missing or misspelled, but the mount table is not updated.
> Execute the dfsrouteradmin update command with the scenarios below:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected Result:
> An exception on the console, as the order option is missing/not written 
> properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13857) RBF: Choose to enable the default nameservice to write files.

2018-08-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598191#comment-16598191
 ] 

genericqa commented on HDFS-13857:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
45s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13857 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937717/HDFS-13857.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 3841568fd628 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8aa6c4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24908/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24908/testReport/ |
| Max. process+thread count | 957 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-13885) Improve debugging experience of dfsclient decrypts

2018-08-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598178#comment-16598178
 ] 

genericqa commented on HDFS-13885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937769/HDFS-13885.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfc4bc98f020 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8aa6c4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24909/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24909/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve debugging 

[jira] [Commented] (HDFS-13854) RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms unit.

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598137#comment-16598137
 ] 

Hudson commented on HDFS-13854:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDFS-13854. RBF: The ProcessingAvgTime and ProxyAvgTime should display (brahma: 
rev 64ad0298d441559951bc9589a40f8aab17c93a5f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java


> RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms 
> unit.
> ---
>
> Key: HDFS-13854
> URL: https://issues.apache.org/jira/browse/HDFS-13854
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-13854.001.patch, HDFS-13854.002.patch, 
> ganglia_jmx_compare1.jpg, ganglia_jmx_compare2.jpg
>
>
> In FederationRPCMetrics, the proxy time and processing time should be 
> exposed to JMX or Ganglia in ms units. Although the method toMS() exists, we 
> currently cannot get the correct proxy and processing times via JMX or 
> Ganglia.
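
A minimal sketch of the intended fix, assuming the rates are accumulated in
nanoseconds and converted with the existing toMS() helper (a method-level
sketch; the getter shape is an assumption, not the actual patch):

{code:java}
// Expose milliseconds to JMX instead of raw nanoseconds.
@Metric("Average time for the router to proxy an operation (ms)")
public double getProxyAvg() {
  return toMS(proxy.lastStat().mean());
}
{code}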



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-280) Support ozone dist-start-stitching on openbsd/osx

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598142#comment-16598142
 ] 

Hudson commented on HDDS-280:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDDS-280. Support ozone dist-start-stitching on openbsd/osx. Contributed 
(msingh: rev 692736f7cfb72b8932dc2eb4f4faa995dc6521f8)
* (edit) hadoop-ozone/common/pom.xml
* (edit) dev-support/bin/ozone-dist-layout-stitching
* (edit) hadoop-ozone/acceptance-test/pom.xml
* (edit) hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
* (edit) hadoop-ozone/acceptance-test/dev-support/bin/robot.sh
* (edit) hadoop-ozone/docs/content/GettingStarted.md
* (edit) hadoop-ozone/acceptance-test/src/test/acceptance/commonlib.robot
* (edit) hadoop-ozone/pom.xml
* (edit) 
hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
* (edit) dev-support/bin/ozone-dist-tar-stitching
* (edit) hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh


> Support ozone dist-start-stitching on openbsd/osx
> -
>
> Key: HDDS-280
> URL: https://issues.apache.org/jira/browse/HDDS-280
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-280.001.patch, HDDS-280.002.patch, 
> HDDS-280.003.patch
>
>
> {quote}Ozone is creating a symlink during the dist process.
> Using the "ozone" directory as a destination name all the docker-based 
> acceptance tests and docker-compose files are more simple as they don't need 
> to have the version information in the path.
> But to keep the version specific folder name in the tar file we create a 
> symbolic link during the tar creation. With the symbolic link and the 
> '–dereference' tar argument we can create the tar file which includes a 
> versioned directory (ozone-0.2.1) but we can use the a dist directory without 
> the version in the name (hadoop-dist/target/ozone).
> {quote}
> This is the description of the current 
> dev-support/bin/ozone-dist-tar-stitching. [~aw], in a comment on HDDS-276, 
> pointed out the problem that some BSD variants don't support the dereference 
> command-line option of the ln command.
> The main reason to use this approach is to get a simplified destination name 
> without the version (hadoop-dist/target/ozone instead of 
> hadoop-dist/target/ozone-0.2.1). It simplifies the docker-compose based 
> environments and acceptance tests, therefore I prefer to keep the simplified 
> destination name.
> The issue is only the tar file creation, and only if we need the version 
> number in the name of the root directory inside the tar.
> Possible solutions:
>  # Use cp target/ozone target/ozone-0.2.1 + tar. It's simple but slower and 
> requires more space.
>  # Do the tar distribution from docker whenever 'dereference' is not 
> supported. Not very convenient.
>  # Accept that the tar will contain an ozone directory and not ozone-0.2.1. 
> This is the simplest option and can be improved with an additional VERSION 
> file in the root of the distribution.
>  # (+1) Use hadoop-dist/target/ozone-0.2.1 instead of 
> hadoop-dist/target/ozone. This is more complex for the docker-based testing 
> as we need the explicit names in the compose files (volume: 
> ../../../hadoop-dist/target/ozone-0.2.1). The structure is more complex when 
> using the version in the directory name.
> Please comment with your preference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13863) FsDatasetImpl should log DiskOutOfSpaceException

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598144#comment-16598144
 ] 

Hudson commented on HDFS-13863:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. (yqlin: rev 
582cb10ec74ed5666946a3769002ceb80ba660cb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> FsDatasetImpl should log DiskOutOfSpaceException
> 
>
> Key: HDFS-13863
> URL: https://issues.apache.org/jira/browse/HDFS-13863
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13863.001.patch, HDFS-13863.002.patch, 
> HDFS-13863.003.patch
>
>
> The code in function *createRbw* as follow
> {code:java}
> try {
>   // First try to place the block on a transient volume.
>   ref = volumes.getNextTransientVolume(b.getNumBytes());
>   datanode.getMetrics().incrRamDiskBlocksWrite();
> } catch (DiskOutOfSpaceException de) {
>   // Ignore the exception since we just fall back to persistent 
> storage.
> } finally {
>   if (ref == null) {
> cacheManager.release(b.getNumBytes());
>   }
> }
> {code}
> I think we should log the exception, because it took me a long time to 
> resolve this problem and others may face the same issue.
> When I tested ram_disk, I found that no data was written to the RAM disk. I 
> debugged deep into the source code and found that the RAM disk size was less 
> than the reserved space. If a message had been logged, I would have resolved 
> the problem quickly.
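
A sketch of the requested change to the createRbw() snippet above, with the
previously swallowed exception logged (the message wording is an assumption):

{code:java}
try {
  // First try to place the block on a transient volume.
  ref = volumes.getNextTransientVolume(b.getNumBytes());
  datanode.getMetrics().incrRamDiskBlocksWrite();
} catch (DiskOutOfSpaceException de) {
  // Still fall back to persistent storage, but leave a trace for operators.
  LOG.warn("Insufficient space on transient volume, falling back to "
      + "persistent storage", de);
} finally {
  if (ref == null) {
    cacheManager.release(b.getNumBytes());
  }
}
{code}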



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598140#comment-16598140
 ] 

Hudson commented on HDFS-13634:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDFS-13634. RBF: Configurable value in xml for async connection request (yqlin: 
rev a0ebb6b39f2932d3ea2fb5e287f52b841e108428)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml


> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.2
>
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch, 
> HDFS-13634.2.patch, HDFS-13634.3.patch
>
>
> The constant below in ConnectionManager.java should be configurable via 
> hdfs-site.xml. This is a very critical parameter for routers; admins would 
> like to change it without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}
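
A sketch of how such a constant typically becomes configurable (the key name
and field wiring are assumptions, not the actual patch):

{code:java}
// Hypothetical config key replacing the hard-coded constant.
public static final String DFS_ROUTER_MAX_NEW_CONNECTIONS_KEY =
    "dfs.federation.router.connection.creation.parallelism";
public static final int DFS_ROUTER_MAX_NEW_CONNECTIONS_DEFAULT = 100;

// In ConnectionManager's constructor, read the value from hdfs-site.xml:
this.maxNewConnections = conf.getInt(
    DFS_ROUTER_MAX_NEW_CONNECTIONS_KEY,
    DFS_ROUTER_MAX_NEW_CONNECTIONS_DEFAULT);
{code}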



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-380) Remove synchronization from ChunkGroupOutputStream and ChunkOutputStream

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598139#comment-16598139
 ] 

Hudson commented on HDDS-380:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDDS-380. Remove synchronization from ChunkGroupOutputStream and (nanda: rev 
0bd4217194ae50ec30e386b200fcfa54c069f042)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java


> Remove synchronization from ChunkGroupOutputStream and ChunkOutputStream
> 
>
> Key: HDDS-380
> URL: https://issues.apache.org/jira/browse/HDDS-380
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-380.00.patch
>
>
> In one of the code reviews for HDDS-247, it was suggested that 
> ChunkGroupOutputStream/ChunkOutputStream need not be thread safe, as Java 
> OutputStream subclasses are generally non-thread-safe for performance 
> reasons. If users want thread safety, they can easily synchronize externally 
> themselves. This Jira aims to remove synchronization from 
> ChunkGroupOutputStream and ChunkOutputStream.
> cc [~szetszwo].
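
For callers that do share one stream across threads after this change,
external synchronization is straightforward (variable names here are
illustrative):

{code:java}
// Caller-side locking on the shared stream instance.
synchronized (outputStream) {
  outputStream.write(buffer, 0, length);
}
{code}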



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13027) Handle possible NPEs due to deleted blocks in race condition

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598150#comment-16598150
 ] 

Hudson commented on HDFS-13027:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDFS-13027. Handle possible NPEs due to deleted blocks in race (vinayakumarb: 
rev c36d69a7b30927eaea16335e06cfcc247accde35)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> Handle possible NPEs due to deleted blocks in race condition
> 
>
> Key: HDFS-13027
> URL: https://issues.apache.org/jira/browse/HDFS-13027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13027-01.patch
>
>
> Since file deletions and block removal from the BlocksMap are done under 
> separate locks, there are possibilities of NPEs due to calls of 
> {{blockManager.getBlockCollection(block)}} returning null.
> Handle all possibilities of NPEs due to this.
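
The defensive pattern at each affected call site is presumably along these
lines (a sketch, not the actual patch):

{code:java}
BlockCollection bc = blockManager.getBlockCollection(block);
if (bc == null) {
  // The file was deleted concurrently; the block is orphaned, so skip it
  // instead of dereferencing null.
  return;
}
{code}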



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-365) Implement flushStateMachineData for containerStateMachine

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598136#comment-16598136
 ] 

Hudson commented on HDDS-365:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDDS-365. Implement flushStateMachineData for containerStateMachine. (msingh: 
rev 2651e2c43d0825912669a87afc256bad9f1ea6ed)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) hadoop-project/pom.xml


> Implement flushStateMachineData for containerStateMachine
> -
>
> Key: HDDS-365
> URL: https://issues.apache.org/jira/browse/HDDS-365
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-365.00.patch, HDDS-365.01.patch
>
>
> With RATIS-295, a new StateMachine API called flushStateMachineData has been 
> introduced. This API needs to be implemented in ContainerStateMachine so 
> that when the actual flush happens via Ratis for the log file, the 
> corresponding stateMachineData also gets flushed.
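
A sketch of an implementation under the RATIS-295 API, assuming the state
machine tracks in-flight write futures keyed by log index (the map name and
bookkeeping are assumptions):

{code:java}
// Sketch only (method body, not a full class): assumes a
// ConcurrentHashMap<Long, CompletableFuture<Message>> writeChunkFutureMap.
@Override
public CompletableFuture<Void> flushStateMachineData(long index) {
  List<CompletableFuture<Message>> pending = writeChunkFutureMap.entrySet()
      .stream()
      .filter(e -> e.getKey() <= index)   // writes at or below this log index
      .map(Map.Entry::getValue)
      .collect(Collectors.toList());
  // Complete only when every pending write up to 'index' has flushed.
  return CompletableFuture.allOf(pending.toArray(new CompletableFuture[0]));
}
{code}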



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598151#comment-16598151
 ] 

Hudson commented on HDFS-13838:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
Revert "HDFS-13838. WebHdfsFileSystem.getFileStatus() won't return (weichiu: 
rev 8aa6c4f079fd38a3230bc070c2ce837fefbc5301)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java


> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.
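
The missing client-side step is presumably a check along these lines in
JsonUtilClient.toFileStatus() (the JSON key and flag handling are
assumptions):

{code:java}
// Propagate the bit instead of silently dropping it.
Boolean snapshotEnabled = (Boolean) m.get("snapshotEnabled");
if (snapshotEnabled != null && snapshotEnabled) {
  flags.add(HdfsFileStatus.Flags.SNAPSHOT_ENABLED);
}
{code}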



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598127#comment-16598127
 ] 

Konstantin Shvachko commented on HDFS-13880:


Hey Chen, what "Masync" stands for?

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server state id is not up to date. One case could be the 
> to-be-added new API in HDFS-13749 that requests the current state id. Others 
> may include 
> calls that do not promise real time responses such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.
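
One conceivable shape for such a mechanism is a method-level marker that the
RPC server consults before enforcing the state-id wait (purely illustrative;
the actual design in the patches may differ):

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker: annotated RPC methods are served immediately,
// without waiting for the server state id to catch up.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface BypassStateSync {
}
{code}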



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598123#comment-16598123
 ] 

Konstantin Shvachko commented on HDFS-13779:


Erik, this looks good. Two minor nits from my IDE:
# Unused import of Constructor in ORPP.
# {{TestObserverReadProxyProvider.proxyFactoryInvocations}} is only ever 
incremented. You might want to get rid of it, or check its value somewhere.

I recommend committing this without waiting for Jenkins, to unblock other 
jiras. We can fix any warnings later on if there are any.

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits the {{performFailover()}} 
> method from {{ConfiguredFailoverProxyProvider}}, which simply increments the 
> index and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby (see the sketch below).
> This was discussed in HDFS-12976.
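
As a rough illustration of the selection rule described above (not the actual 
ORPP code), assuming the provider can query each candidate's HA state:

{code:java}
// Abstract sketch of the failover rule: starting after the current index,
// pick the next node reported as an observer; otherwise stay put, since a
// plain SBN disallows reads and the ANN defeats the purpose.
final class ObserverFailoverSketch {
  enum State { ACTIVE, STANDBY, OBSERVER }

  static int nextObserver(State[] nodes, int currentIndex) {
    for (int i = 1; i <= nodes.length; i++) {
      int candidate = (currentIndex + i) % nodes.length;
      if (nodes[candidate] == State.OBSERVER) {
        return candidate;
      }
    }
    return currentIndex; // no observer available
  }
}
{code}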



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598084#comment-16598084
 ] 

Siyao Meng commented on HDFS-13886:
---

Thanks [~jojochuang] for the comment. Added code to close fs at the end of the 
test in rev 002.

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening (see the sketch below).
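
For illustration, a sketch of the class of fix implied here for 
FSOperations.toJsonInner(); the JSON key name is an assumption chosen to match 
WebHDFS, and the existing fields are elided:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.fs.FileStatus;

// Illustrative only: surface the snapshot-enabled bit in the JSON map that
// HttpFS returns, so clients can reconstruct it on their side.
final class ToJsonInnerSketch {
  static Map<String, Object> toJsonInner(FileStatus status) {
    Map<String, Object> json = new LinkedHashMap<>();
    // ... existing fields (pathSuffix, type, length, ...) elided ...
    if (status.isSnapshotEnabled()) {
      json.put("snapshotEnabled", true); // assumed key name
    }
    return json;
  }
}
{code}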



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13886:
--
Attachment: HDFS-13886.002.patch
Status: Patch Available  (was: In Progress)

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13886:
--
Status: In Progress  (was: Patch Available)

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598077#comment-16598077
 ] 

Wei-Chiu Chuang commented on HDFS-13886:


Really good catch. Thanks [~smeng]

One minor issue: please close FileSystem objects at the end of the test. Other 
than that the patch looks good.

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13888) RequestHedgingProxyProvider shows InterruptedException

2018-08-30 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598059#comment-16598059
 ] 

Íñigo Goiri commented on HDFS-13888:


[~msingh], [~LiJinglun], you guys were the last ones working on this.
Do you mind taking a look?

> RequestHedgingProxyProvider shows InterruptedException
> --
>
> Key: HDFS-13888
> URL: https://issues.apache.org/jira/browse/HDFS-13888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Íñigo Goiri
>Priority: Minor
>
> RequestHedgingProxyProvider shows InterruptedException when running:
> {code}
> 2018-08-30 23:52:48,883 WARN ipc.Client: interrupted waiting to send rpc 
> request to server
> java.lang.InterruptedException
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1142)
> at org.apache.hadoop.ipc.Client.call(Client.java:1395)
> at org.apache.hadoop.ipc.Client.call(Client.java:1353)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:135)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> It looks like this is the case of the background request being killed once 
> the main one succeeds. We should not log the full stack trace for this, and 
> maybe just log it at debug level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13888) RequestHedgingProxyProvider shows InterruptedException

2018-08-30 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598056#comment-16598056
 ] 

Íñigo Goiri commented on HDFS-13888:


Found this in 3.1.1, which has the latest code in trunk including HDFS-13388.

> RequestHedgingProxyProvider shows InterruptedException
> --
>
> Key: HDFS-13888
> URL: https://issues.apache.org/jira/browse/HDFS-13888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Íñigo Goiri
>Priority: Minor
>
> RequestHedgingProxyProvider shows InterruptedException when running:
> {code}
> 2018-08-30 23:52:48,883 WARN ipc.Client: interrupted waiting to send rpc 
> request to server
> java.lang.InterruptedException
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1142)
> at org.apache.hadoop.ipc.Client.call(Client.java:1395)
> at org.apache.hadoop.ipc.Client.call(Client.java:1353)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:135)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> It looks like this is the case of the background request being killed once 
> the main one succeeds. We should not log the full stack trace for this, and 
> maybe just log it at debug level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13888) RequestHedgingProxyProvider shows InterruptedException

2018-08-30 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-13888:
--

 Summary: RequestHedgingProxyProvider shows InterruptedException
 Key: HDFS-13888
 URL: https://issues.apache.org/jira/browse/HDFS-13888
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri


RequestHedgingProxyProvider shows InterruptedException when running:
{code}
2018-08-30 23:52:48,883 WARN ipc.Client: interrupted waiting to send rpc 
request to server
java.lang.InterruptedException
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1142)
at org.apache.hadoop.ipc.Client.call(Client.java:1395)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:135)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}

It looks like this is the case of the background request being killed once 
the main one succeeds. We should not log the full stack trace for this, and 
maybe just log it at debug level (see the sketch below).
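
A minimal sketch of the proposed change (not the actual ipc.Client code): 
treat the interrupt of the losing hedged request as expected and keep the 
noise at debug level.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: when the hedged background call is cancelled, log a
// one-line debug entry instead of a WARN with a full stack trace.
final class QuietInterruptSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(QuietInterruptSketch.class);

  static void onSendInterrupted(InterruptedException e) {
    LOG.debug("Interrupted waiting to send rpc request to server", e);
    Thread.currentThread().interrupt(); // preserve the interrupt status
  }
}
{code}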



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13888) RequestHedgingProxyProvider shows InterruptedException

2018-08-30 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13888:
---
Affects Version/s: 3.1.1

> RequestHedgingProxyProvider shows InterruptedException
> --
>
> Key: HDFS-13888
> URL: https://issues.apache.org/jira/browse/HDFS-13888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Íñigo Goiri
>Priority: Minor
>
> RequestHedgingProxyProvider shows InterruptedException when running:
> {code}
> 2018-08-30 23:52:48,883 WARN ipc.Client: interrupted waiting to send rpc 
> request to server
> java.lang.InterruptedException
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1142)
> at org.apache.hadoop.ipc.Client.call(Client.java:1395)
> at org.apache.hadoop.ipc.Client.call(Client.java:1353)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:135)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> It looks like this is the case of the background request being killed once 
> the main one succeeds. We should not log the full stack trace for this, and 
> maybe just log it at debug level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-387:
--
Fix Version/s: 0.2.1

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598054#comment-16598054
 ] 

Anu Engineer commented on HDDS-387:
---

[~msingh] Could you please take a look at this patch when you get a chance?

Thanks

 

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-387:
--
Attachment: (was: HDFS-13887.001.patch)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-387.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-387:
--
Attachment: HDDS-387.001.patch
Status: Patch Available  (was: Open)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-387.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598051#comment-16598051
 ] 

Anu Engineer commented on HDDS-387:
---

Moved to HDDS Jira instead of HDFS. We will need to rename the patch once 
Jenkins is back.

 

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDFS-13887.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-387:
-

Assignee: Namit Maheshwari

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDFS-13887.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598050#comment-16598050
 ] 

Erik Krogen commented on HDFS-13779:


Looks like Konstantin and I hit a race condition :) Comments 2 and 4 were 
fixed in the v001 patch.

I just attached v002 which addresses 1 and 3. Konstantin and I discussed 
offline and realized that the caching done in {{ipc.Client}} will make the two 
ProxyProviders share a single connection per NameNode regardless of the caching 
of the proxies themselves, so I agree that the {{CachingHAProxyFactory}} is not 
necessary. This also makes my changes to the constructor (accepting a class and 
using reflection, instead of directly accepting an instance) unnecessary. 

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits the {{performFailover()}} 
> method from {{ConfiguredFailoverProxyProvider}}, which simply increments the 
> index and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13779:
---
Attachment: HDFS-13779-HDFS-12943.002.patch

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits the {{performFailover()}} 
> method from {{ConfiguredFailoverProxyProvider}}, which simply increments the 
> index and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-387:
-

Assignee: (was: Namit Maheshwari)
Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Key: HDDS-387  (was: HDFS-13887)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: HDFS-13887.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reassigned HDFS-13872:
--

Assignee: Erik Krogen

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13872 started by Erik Krogen.
--
> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13872:
---
Attachment: HDFS-13872-HDFS-12943.000.patch

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598047#comment-16598047
 ] 

Erik Krogen commented on HDFS-13872:


I like the idea of marking only certain methods. I've attached a v000 patch 
which only adds the {{AlignmentContext}} if the provided method is annotated 
with a new annotation, {{AlignmentContext.NeedsAlignment}}. This can't be a 
part of {{ReadOnly}} since that is HDFS-specific and {{AlignmentContext}} is 
generic.

The v000 patch doesn't contain tests and doesn't actually annotate any methods 
with this yet; just posting for comments (a rough sketch of the idea follows 
below).
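
A rough sketch of the idea under the names mentioned here (which may not match 
the final patch): a generic marker annotation in the ipc layer that the server 
consults before applying the msync wait.

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Illustrative only: protocol methods that must wait for state alignment
// carry the marker; everything else bypasses the msync wait.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface NeedsAlignment {
}

final class AlignmentCheckSketch {
  // ipc.Server would call something like this before waiting on state id.
  static boolean needsAlignment(Method rpcMethod) {
    return rpcMethod.isAnnotationPresent(NeedsAlignment.class);
  }
}
{code}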

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13887) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDFS-13887:

Attachment: HDFS-13887.001.patch

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDFS-13887
> URL: https://issues.apache.org/jira/browse/HDFS-13887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDFS-13887.001.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13887) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDFS-13887:
---

Assignee: Namit Maheshwari

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDFS-13887
> URL: https://issues.apache.org/jira/browse/HDFS-13887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This will also cause issues when developing unit tests that try to 
> instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13887) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-13887:
---

 Summary: Remove hadoop-ozone-filesystem dependency on 
hadoop-ozone-integration-test
 Key: HDFS-13887
 URL: https://issues.apache.org/jira/browse/HDFS-13887
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Namit Maheshwari


hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.

Ideally, filesystem modules should not depend on test modules.

This will also cause issues when developing unit tests that try to 
instantiate an OzoneFileSystem object inside hadoop-ozone-integration-test, as 
that would create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598003#comment-16598003
 ] 

Konstantin Shvachko commented on HDFS-13749:


Hey [~csun], this should probably go on top of Erik's changes in HDFS-13779. I 
believe with that you will only need to change the implementation of 
{{getServiceState()}}. Teamwork :-)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976 we currently discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}} (see the sketch below).
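
For illustration, a minimal sketch of the discovery call, assuming an OBSERVER 
value exists in {{HAServiceState}} on the HDFS-12943 branch; the real patch 
plugs this into the proxy provider:

{code:java}
import java.io.IOException;

import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.ha.HAServiceStatus;

// Illustrative only: ask a NameNode for its HA state directly instead of
// probing it with a side-effect call like reportBadBlocks.
final class ObserverDiscoverySketch {
  static boolean isObserver(HAServiceProtocol haProxy) throws IOException {
    HAServiceStatus status = haProxy.getServiceStatus();
    return status.getState() == HAServiceState.OBSERVER;
  }
}
{code}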



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13872:
---
Summary: Only some protocol methods should perform msync wait  (was: Only 
ClientProtocol should perform msync wait)

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Priority: Major
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-263) Add retries in Ozone Client to handle BLOCK_NOT_COMMITTED Exception

2018-08-30 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597982#comment-16597982
 ] 

Tsz Wo Nicholas Sze commented on HDDS-263:
--

+1 the 04 patch looks good.

> Add retries in Ozone Client to handle BLOCK_NOT_COMMITTED Exception
> ---
>
> Key: HDDS-263
> URL: https://issues.apache.org/jira/browse/HDDS-263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-263.00.patch, HDDS-263.01.patch, HDDS-263.02.patch, 
> HDDS-263.03.patch, HDDS-263.04.patch
>
>
> While Ozone client writes are going on, a container on a datanode can get 
> closed because of node failures, disk running out of space, etc. In such 
> situations, the client write will fail with CLOSED_CONTAINER_IO. In this case, 
> the ozone client should try to get the committed block length for the pending 
> open blocks and update the OzoneManager. While trying to get the committed 
> block length, it may fail with a BLOCK_NOT_COMMITTED exception because, as part 
> of the transition from the CLOSING to the CLOSED state, the container commits 
> all open blocks one by one. In such cases, the client needs to retry getting 
> the committed block length for a fixed number of attempts and eventually throw 
> the exception to the application if it is not able to successfully get and 
> update the length in the OzoneManager. This Jira aims to address this (see the 
> sketch below).
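
A minimal sketch of the retry idea, with assumed helper names 
({{getCommittedBlockLength}}, {{BlockNotCommittedException}}) standing in for 
the real client code:

{code:java}
// Illustrative only: bounded retries while the container finishes committing
// its open blocks during the CLOSING -> CLOSED transition.
final class CommittedLengthRetrySketch {
  static final int MAX_ATTEMPTS = 5;         // assumed fixed retry budget
  static final long RETRY_INTERVAL_MS = 100; // assumed backoff interval

  static long fetchCommittedLength(BlockClient client, long blockId)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return client.getCommittedBlockLength(blockId);
      } catch (BlockNotCommittedException e) {
        if (attempt >= MAX_ATTEMPTS) {
          throw e; // give up and surface the failure to the application
        }
        Thread.sleep(RETRY_INTERVAL_MS);
      }
    }
  }

  // Hypothetical stand-ins so the sketch is self-contained.
  interface BlockClient {
    long getCommittedBlockLength(long blockId)
        throws BlockNotCommittedException;
  }

  static final class BlockNotCommittedException extends Exception {
  }
}
{code}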



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-30 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597971#comment-16597971
 ] 

Chen Liang commented on HDFS-13880:
---

Post v002 patch with a minor optimization. [~shv] would you mind taking a look? 
[~csun] I think this might be useful for HDFS-13749.

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real-time responses, such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-30 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13880:
--
Attachment: HDFS-13880-HDFS-12943.002.patch

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real-time responses, such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597970#comment-16597970
 ] 

Anu Engineer commented on HDDS-379:
---

bq. At cost of redundancy, i will re-iterate i am ok if you think this should 
be reverted
Yes, I think this should be reverted. Thanks.


> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand specific and general arguments.
> 2.) It could work based on annotation with very few additional boilerplate 
> code
> 3.) It's very well documented and easy to use
> 4.) It's licensed under the Apache license
> 5.) It supports tab autocompletion for bash and zsh and colorful output
> 6.) Actively maintained project
> 7.) Adopted by other bigger projects (groovy, junit, log4j)
> In this patch I would like to demonstrate how the cli handling could be 
> simplified (see the toy example below). And if it's accepted, we can start to 
> use a similar approach for other ozone cli as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.
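
As a toy example of the annotation-driven style picocli enables (command and 
option names here are made up, using picocli 4.x-style entry points, not the 
actual scmcli layout):

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Illustrative only: a root command with one subcommand; picocli generates
// per-subcommand help from the annotations.
@Command(name = "scmcli", subcommands = ListContainersSketch.class,
    description = "Developer tool to talk to SCM")
final class ScmCliSketch implements Runnable {
  @Override
  public void run() {
    // No subcommand given: print usage, including the subcommand list.
    CommandLine.usage(this, System.out);
  }

  public static void main(String[] args) {
    System.exit(new CommandLine(new ScmCliSketch()).execute(args));
  }
}

@Command(name = "list", description = "List containers")
class ListContainersSketch implements Runnable {
  @Option(names = {"-c", "--count"}, description = "Max containers to list")
  private int count = 20;

  @Override
  public void run() {
    System.out.println("would list up to " + count + " containers");
  }
}
{code}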



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597968#comment-16597968
 ] 

Ajay Kumar commented on HDDS-379:
-

AFAIK wikipedia doesn't follow camelcase, neither does golang, yourdictionary, 
or python. A quick search in git for SubCommand in javascript does show 
instances where it is camel-cased.

{quote}I did look at these and I think the general convention – these days 
seems to favor subcommand{quote}
I will politely disagree :). 
{quote}It is almost like subcommand is becoming an accepted word.{quote}
Again this points more towards a subjective understanding. 
At the cost of redundancy, I will reiterate that I am OK with it if you think 
this should be reverted.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand specific and general arguments.
> 2.) It could work based on annotation with very few additional boilerplate 
> code
> 3.) It's very well documented and easy to use
> 4.) It's licensed under the Apache license
> 5.) It supports tab autocompletion for bash and zsh and colorful output
> 6.) Actively maintained project
> 7.) Adopted by other bigger projects (groovy, junit, log4j)
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone cli as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-30 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597967#comment-16597967
 ] 

Chen Liang commented on HDFS-13880:
---

Post v001 patch. The basic approach is to extend the {{ReadOnly}} annotation 
with one more method, {{isMasync}}. If a method in {{ClientProtocol}} is 
annotated with {{isMasync = true}}, the server will not check its state id, 
and the call will be processed regardless of server state. I'm calling this 
Masync to be consistent with msync and to distinguish it from the general 
notion of async.

I was exploring different ways but found this actually tricky to do. In the 
v001 patch the server checks this based on method name: the server reads the 
method name from the call's rpc header, and if a method of the same name is 
found in {{ClientProtocol}}, the annotation of that method in 
{{ClientProtocol}} is checked. This is done in {{GlobalStateIdContext}} (see 
the sketch below).

Also, in the current patch, two methods, {{getStats}} and 
{{getContentSummary}}, are annotated with {{isMasync = true}}.
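
For illustration, a sketch of that approach using the names from this comment 
(the final patch may differ):

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Illustrative only: the ReadOnly annotation grows an isMasync flag, and the
// server skips the state-id wait for methods that carry it.
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ReadOnly {
  // When true, the server processes the call without waiting for its state
  // id to catch up to the client's.
  boolean isMasync() default false;
}

final class BypassSyncSketch {
  // Server side (cf. GlobalStateIdContext): resolve the method named in the
  // rpc header against ClientProtocol and check its annotation.
  static boolean bypassSync(Class<?> protocol, String methodName) {
    for (Method m : protocol.getMethods()) {
      if (m.getName().equals(methodName)) {
        ReadOnly a = m.getAnnotation(ReadOnly.class);
        return a != null && a.isMasync();
      }
    }
    return false;
  }
}
{code}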

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real-time responses, such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-30 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13880:
--
Attachment: HDFS-13880-HDFS-12943.001.patch

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real-time responses, such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597916#comment-16597916
 ] 

Erik Krogen edited comment on HDFS-13749 at 8/30/18 9:51 PM:
-

I think the idea is sound.

-I hate to be unable to re-use the connections we already create to the 
NameNodes, but it seems there isn't another way around this since everything in 
the class is {{ClientProtocol}}-
edit: I learned that the connections are cached at the {{ipc.Client}} layer, so 
the connection should be shared despite it being a different protocol.

I also find it a little weird to be explicitly creating the PB translator; 
shouldn't we be using {{NameNodeProxies#createNonHAProxy()}}? We also shouldn't 
create a new one every time we do a state fetch; they should be cached (maybe 
in a subclass of {{NNProxyInfo}}).
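
As a rough sketch of the caching idea (all class and helper names below are 
assumptions for illustration, not the actual patch), a per-NameNode 
{{HAServiceProtocol}} proxy could be created once and then reused across state 
fetches:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

class ObserverDiscoverySketch {
  // One cached HAServiceProtocol proxy per NameNode address, so repeated
  // getServiceStatus() calls do not construct a new translator each time.
  private final Map<InetSocketAddress, HAServiceProtocol> serviceProxies =
      new ConcurrentHashMap<>();

  boolean isObserver(InetSocketAddress addr) throws IOException {
    HAServiceProtocol proxy =
        serviceProxies.computeIfAbsent(addr, this::createServiceProxy);
    // On the HDFS-12943 branch an observer reports a dedicated HA state.
    return proxy.getServiceStatus().getState() == HAServiceState.OBSERVER;
  }

  // Assumed helper; could be built on NameNodeProxies#createNonHAProxy()
  // as suggested above.
  private HAServiceProtocol createServiceProxy(InetSocketAddress addr) {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}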


was (Author: xkrogen):
I think the idea is sound. I hate to be unable to re-use the connections we 
already create to the NameNodes, but it seems there isn't another way around 
this since everything in the class is {{ClientProtocol}}... I also find it a 
little weird to be explicitly creating the PB translator; shouldn't we be using 
{{NameNodeProxies#createNonHAProxy()}}? We also need to make sure we don't 
create a new one every time we do a state fetch; they should be cached (maybe 
in a subclass of {{NNProxyInfo}})

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976, we currently discover the NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597949#comment-16597949
 ] 

Anu Engineer commented on HDDS-379:
---

I do agree that there seem to be cases where some people have used it as if 
they were separate words.

But the general usage pattern seems to be -- subcommand.
https://en.wiktionary.org/wiki/subcommand

From golang usage:
https://github.com/google/subcommands

From Javascript:
https://www.npmjs.com/package/subcommand

Some more dictionary links:
http://www.yourdictionary.com/subcommand

There are some usages, like Python's argparse docs, which on the same page use 
both sub-commands and subcommands.
https://docs.python.org/3/library/argparse.html

I did look at these, and I think the general convention these days seems to 
favor subcommand instead of sub-command or subCommand. It is almost as if 
subcommand is becoming an accepted word.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.
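
For a purely illustrative picture of the proposed layout (class, option, and 
command names here are invented for the example, not taken from the patch), a 
picocli-based cli could be structured roughly like this:

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Top-level command with one registered subcommand.
@Command(name = "scmcli", description = "Developer tool for SCM",
    subcommands = {CreateSubcommand.class})
class ScmCliSketch implements Runnable {
  @Override
  public void run() {
    // No subcommand given: print usage, matching the "easy help" goal.
    CommandLine.usage(this, System.out);
  }

  public static void main(String[] args) {
    CommandLine.run(new ScmCliSketch(), args);
  }
}

@Command(name = "create", description = "Create a new container")
class CreateSubcommand implements Runnable {
  @Option(names = {"-o", "--owner"}, description = "Owner of the container")
  private String owner;

  @Override
  public void run() {
    // The business logic would call the SCM client here, kept separate
    // from the argument handling above.
    System.out.println("creating container for owner " + owner);
  }
}
{code}

With such a layout, {{scmcli create --help}} prints help for just the create 
subcommand.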



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597941#comment-16597941
 ] 

Ajay Kumar edited comment on HDDS-379 at 8/30/18 9:33 PM:
--

This is what a quick search about the word convention shows:
{quote}"a way in which something is usually done, especially within a 
particular area or activity."{quote}
The first example about input/output falls in that category, but I don't think 
the same holds for subcommand. A quick search on GitHub will give you many 
examples of subcommand being camel-cased. 

https://github.com/kohsuke/args4j/blob/master/args4j/src/org/kohsuke/args4j/spi/SubCommand.java
 
https://github.com/AndoxynPlugins/SubCommandPluginExample


was (Author: ajayydv):
This is what quick search about word convention shows:
{quote}"a way in which something is usually done, especially within a 
particular area or activity."{quote}
In example you gave first example about input/output falls in that category but 
i don;t think so about the subcommand. I quick search into git will give you 
many examples of subcommand being camel-cased. 

https://github.com/kohsuke/args4j/blob/master/args4j/src/org/kohsuke/args4j/spi/SubCommand.java
 
https://github.com/AndoxynPlugins/SubCommandPluginExample

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597941#comment-16597941
 ] 

Ajay Kumar edited comment on HDDS-379 at 8/30/18 9:32 PM:
--

This is what a quick search about the word convention shows:
{quote}"a way in which something is usually done, especially within a 
particular area or activity."{quote}
In the example you gave, the first one about input/output falls in that 
category, but I don't think the same holds for subcommand. A quick search on 
GitHub will give you many examples of subcommand being camel-cased. 

https://github.com/kohsuke/args4j/blob/master/args4j/src/org/kohsuke/args4j/spi/SubCommand.java
 
https://github.com/AndoxynPlugins/SubCommandPluginExample


was (Author: ajayydv):
This is what quick search about word convention done shows:
{quote}"a way in which something is usually done, especially within a 
particular area or activity."{quote}
In example you gave first example about input/output falls in that category but 
i don;t think so about the subcommand. I quick search into git will give you 
many examples of subcommand being camel-cased. 

https://github.com/kohsuke/args4j/blob/master/args4j/src/org/kohsuke/args4j/spi/SubCommand.java
 
https://github.com/AndoxynPlugins/SubCommandPluginExample

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597941#comment-16597941
 ] 

Ajay Kumar commented on HDDS-379:
-

This is what a quick search about the word convention shows:
{quote}"a way in which something is usually done, especially within a 
particular area or activity."{quote}
In the example you gave, the first one about input/output falls in that 
category, but I don't think the same holds for subcommand. A quick search on 
GitHub will give you many examples of subcommand being camel-cased. 

https://github.com/kohsuke/args4j/blob/master/args4j/src/org/kohsuke/args4j/spi/SubCommand.java
 
https://github.com/AndoxynPlugins/SubCommandPluginExample

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597933#comment-16597933
 ] 

Anu Engineer commented on HDDS-379:
---

{quote}End of the day as i said this is little subjective
{quote}
I hope you mean convention instead of subjective. These compound words are 
written according to a certain convention. If you look at the words in the 
compound-noun list link, you will find words like input and output; by 
convention they are written as input and output. If someone started writing 
them in camelcase as {{inPut}} and {{outPut}}, it would be surprising, because 
input and output are well-established conventions. I was looking around for 
the word subcommand, and it seemed to me that usage in computer terminology 
tends towards subcommand as a single word, with no space or hyphenation; hence 
my suggestion that the initial usage of {{CreateSubcommand}} seems correct, or 
at least in line with the general conventions.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597924#comment-16597924
 ] 

Ajay Kumar commented on HDDS-379:
-

[~anu] 
{quote}unfortunately just because a word is a compound, we cannot camelcase 
them{quote}
Similarly, just because a word is a compound doesn't mean it shouldn't be 
camel-cased. At the end of the day, as I said, this is a little subjective. 
Personally I think it's better in camelcase, but I am totally ok if you want 
to revert it. 

+1 (non-binding) 

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597916#comment-16597916
 ] 

Erik Krogen commented on HDFS-13749:


I think the idea is sound. I hate to be unable to re-use the connections we 
already create to the NameNodes, but it seems there isn't another way around 
this since everything in the class is {{ClientProtocol}}... I also find it a 
little weird to be explicitly creating the PB translator; shouldn't we be using 
{{NameNodeProxies#createNonHAProxy()}}? We also need to make sure we don't 
create a new one every time we do a state fetch; they should be cached (maybe 
in a subclass of {{NNProxyInfo}}).

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976, we currently discover the NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597886#comment-16597886
 ] 

Anu Engineer commented on HDDS-379:
---

Unfortunately, just because a word is a compound, we cannot camelcase it. 
There are several examples here:

[https://www.ef.edu/english-resources/english-grammar/compound-nouns/]

Many of them would look odd if we camel-cased them.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597819#comment-16597819
 ] 

Ajay Kumar edited comment on HDDS-379 at 8/30/18 8:12 PM:
--

Not a language expert either, so this looks like a subjective suggestion. I 
don't feel strongly about it, but it is a compound word. 


was (Author: ajayydv):
Not a language expert either so this looks like subjective suggestion. I don't 
feel strongly about it but it its a compound word. 

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand-specific and general arguments.
> 2.) It can work based on annotations with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache licence.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) Actively maintained project.
> 7.) Adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13886:
--
Attachment: HDFS-13886.001.patch
Status: Patch Available  (was: Open)

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597855#comment-16597855
 ] 

Konstantin Shvachko commented on HDFS-13779:


Looks good, Erik. A few comments from me:
# In ORPP you use raw types: "References to generic type 
AbstractNNFailoverProxyProvider should be parameterized"
# Unused imports (2) in {{TestObserverReadProxyProvider}}.
# For {{CachingHAProxyFactory}} I don't think we achieve much here. It still 
stores the Proxy twice - once in the {{backingFactory}} and once in the 
cache. If we want to cache proxies, we should do it in the RPC engine globally 
for all proxies. I suggest we skip this optimization for now.
# Also, {{TestObserverNode}} fails for me with your patch. I can debug if it is 
only on my box.

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.
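
As a rough illustration of the behavior the description asks for (all names 
below are assumptions for the example, not the actual patch):

{code:java}
import java.util.List;

class ObserverFailoverSketch<T> {
  private final List<T> proxies;
  private int currentIndex = 0;

  ObserverFailoverSketch(List<T> proxies) {
    this.proxies = proxies;
  }

  // Instead of blindly advancing to the next NameNode, keep advancing until
  // an observer is found, so reads never land on a plain SBN or the ANN.
  T nextObserverProxy() {
    for (int attempts = 0; attempts < proxies.size(); attempts++) {
      currentIndex = (currentIndex + 1) % proxies.size();
      T candidate = proxies.get(currentIndex);
      if (isObserver(candidate)) {
        return candidate;
      }
    }
    throw new IllegalStateException("No observer NameNode is available");
  }

  // Assumed check, e.g. via HAServiceProtocol#getServiceStatus (HDFS-13749).
  private boolean isObserver(T proxy) {
    return false; // placeholder for the sketch
  }
}
{code}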



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597855#comment-16597855
 ] 

Konstantin Shvachko edited comment on HDFS-13779 at 8/30/18 7:53 PM:
-

Looks good, Erik. A few comments from me:
# In ORPP you use raw types: "References to generic type 
AbstractNNFailoverProxyProvider should be parameterized"
# Unused imports (2) in {{TestObserverReadProxyProvider}}.
# For {{CachingHAProxyFactory}} I don't think we achieve much here. It still 
stores the Proxy twice - once in the {{backingFactory}} and once in the 
cache. If we want to cache proxies, we should do it in the RPC engine globally 
for all proxies. I suggest we skip this optimization for now.
# Also, {{TestObserverNode}} fails for me with your patch. I can debug if it is 
only on my box.


was (Author: shv):
Looks good, Erik. Two comments from me:
# In ORPP you use raw types: "References to generic type 
AbstractNNFailoverProxyProvider should be parameterized"
# Unused imports (2) in {{TestObserverReadProxyProvider}}.
# For {{CachingHAProxyFactory}} I don't think we achieve much here. It still 
stores the the Proxy twice - one in the {{backingFactory}} and another in the 
cache. If want to cache proxies we should do it in the RPC engine globally for 
all proxies. I suggest we skip this optimization for now.
# Also {{TestObserverNode}} fails for me with your patch. I can debug if it is 
only on my box.

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13885) Improve debugging experience of dfsclient decrypts

2018-08-30 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597834#comment-16597834
 ] 

Xiao Chen commented on HDFS-13885:
--

Thanks [~knanasi] for the improvement.

This would be helpful: from the debug logs we can know how long each decrypt 
call to the KMS took. My only suggestion is that logging the stream id as hex 
seems more intuitive (e.g. {{Integer.toHexString}}). +1 pending
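
As a purely illustrative sketch of that suggestion (the class and method names 
below are assumptions, not the patch itself):

{code:java}
import org.apache.hadoop.util.Time;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class DecryptTimingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DecryptTimingSketch.class);

  // Wraps a decrypt call with a timed debug log; the stream identity is
  // printed as hex via Integer.toHexString, per the suggestion above.
  void timedDecrypt(Object stream, Runnable decryptCall) {
    long start = Time.monotonicNow();
    decryptCall.run();
    LOG.debug("Decrypted EDEK for stream {} in {} ms",
        Integer.toHexString(System.identityHashCode(stream)),
        Time.monotonicNow() - start);
  }
}
{code}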

> Improve debugging experience of dfsclient decrypts
> --
>
> Key: HDFS-13885
> URL: https://issues.apache.org/jira/browse/HDFS-13885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13885.001.patch
>
>
> We want to know from the hdfs client log (e.g. hbase RS logs), for each 
> CryptoOutputStream, approximately when the decrypt happens and when the file 
> read happens, to help us rule out or identify the hdfs NN / kms / DN being 
> slow.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13886:
-

 Summary: HttpFSFileSystem.getFileStatus() doesn't return "snapshot 
enabled" bit
 Key: HDFS-13886
 URL: https://issues.apache.org/jira/browse/HDFS-13886
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.3, 3.1.1
Reporter: Siyao Meng
Assignee: Siyao Meng


FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests in 
BaseTestHttpFSWith will be added to prevent this from happening.
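
For illustration, the kind of serializer fix the description points at could 
look roughly like this (the JSON key and the surrounding method are assumptions 
for the example, not the actual patch):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;

class ToJsonSketch {
  // Sketch of serializing a FileStatus into the JSON map HttpFS returns,
  // including the snapshot-enabled bit that is currently dropped.
  static Map<String, Object> toJsonInner(FileStatus fileStatus) {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("pathSuffix", fileStatus.getPath().getName());
    json.put("type", fileStatus.isDirectory() ? "DIRECTORY" : "FILE");
    if (fileStatus.isSnapshotEnabled()) {
      json.put("snapshotEnabled", true); // assumed key name
    }
    return json;
  }
}
{code}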



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Description: 
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
{code} 
The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


Update:

A further investigation shows that FSOperations.toJsonInner() also doesn't 
check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
addressed in a separate jira HDFS-13886.

  was:
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
{code} 
The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


Update:

A further investigation shows that FSOperations.toJsonInner() also doesn't 
check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests in 
BaseTestHttpFSWith will be added to prevent this from happening.


> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: HDFS-13838.002.patch
Status: Patch Available  (was: Open)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests 
> in BaseTestHttpFSWith will be added to prevent this from happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: (was: HDFS-13838.002.patch)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests 
> in BaseTestHttpFSWith will be added to prevent this from happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: HDFS-13838.002.patch

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests 
> in BaseTestHttpFSWith will be added to prevent this from happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597827#comment-16597827
 ] 

Erik Krogen edited comment on HDFS-13779 at 8/30/18 7:34 PM:
-

Just attached v001 patch addressing the review comments as above, filling out 
the remaining Javadoc, fixing TestObserverNode, and a few other minor fixes. 
Should be ready to go now pending any other review comments.


was (Author: xkrogen):
Just attached v001 patch addressing the review comments as above, filling out 
the remaining Javadoc, fixing TestObserverNode, and a few other minor fixes.

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597827#comment-16597827
 ] 

Erik Krogen commented on HDFS-13779:


Just attached v001 patch addressing the review comments as above, filling out 
the remaining Javadoc, fixing TestObserverNode, and a few other minor fixes.

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13779:
---
Attachment: HDFS-13779-HDFS-12943.001.patch

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Description: 
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
 
The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


Update:

A further investigation shows that FSOperations.toJsonInner() also doesn't 
check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests in 
BaseTestHttpFSWith will be added to prevent this from happening.

  was:
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

 
{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
 

The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests 
> in BaseTestHttpFSWith will be added to prevent this from happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Description: 
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
{code} 
The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


Update:

A further investigation shows that FSOperations.toJsonInner() also doesn't 
check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests in 
BaseTestHttpFSWith will be added to prevent this from happening.

  was:
"Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].

However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
won't return the correct "snapshot enabled" status. The reason is that 
JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
flag to the resulting HdfsFileStatus object.

Proof:

In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following lines 
indicated by prepending "+":

{code:java}
// allow snapshots on /bar using webhdfs
webHdfs.allowSnapshot(bar);
+// check if snapshot status is enabled
+assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
+assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
 
The first assertion will pass, as expected, while the second assertion will 
fail because of the reason above.


Update:

A further investigation shows that FSOperations.toJsonInner() also doesn't 
check the "snapshot enabled" bit. Therefore, 
"fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests in 
BaseTestHttpFSWith will be added to prevent this from happening.


> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. Additional tests 
> in BaseTestHttpFSWith will be added to prevent this from happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597819#comment-16597819
 ] 

Ajay Kumar commented on HDDS-379:
-

Not a language expert either, so this looks like a subjective suggestion. I 
don't feel strongly about it, but it is a compound word.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands, plus subcommand-specific and general arguments.
> 2.) It can work based on annotations, with very little additional boilerplate 
> code
> 3.) It's very well documented and easy to use
> 4.) It's licensed under the Apache License
> 5.) It supports tab autocompletion for bash and zsh, and colorful output
> 6.) Actively maintained project
> 7.) Adopted by other bigger projects (Groovy, JUnit, Log4j)
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> the other Ozone CLIs as well.
> The patch also fixes the cli (the name of the main class was wrong).
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.
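To illustrate the proposed approach, here is a minimal picocli sketch (a
hypothetical layout, not the attached patch; the option and class names are
invented, with {{CreateSubcommand}} echoing the class name discussed in this
thread):

{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Parent command: general options and help handling live here.
@Command(name = "scmcli", mixinStandardHelpOptions = true,
    subcommands = { CreateSubcommand.class })
public class ScmCliSketch implements Callable<Integer> {
  @Override
  public Integer call() {
    // No subcommand given: print the generated usage help.
    CommandLine.usage(this, System.out);
    return 0;
  }

  public static void main(String[] args) {
    System.exit(new CommandLine(new ScmCliSketch()).execute(args));
  }
}

// Subcommand: argument parsing is declared by annotations, so the
// business logic stays separate from the CLI handling.
@Command(name = "create", description = "Create a container")
class CreateSubcommand implements Callable<Integer> {
  @Option(names = {"-o", "--owner"}, description = "owner of the new container")
  private String owner = "hdfs";

  @Override
  public Integer call() {
    System.out.println("would create a container owned by " + owner);
    return 0;
  }
}
{code}

With a layout like this, {{scmcli create --help}} prints subcommand-specific
help, which addresses the help-discoverability problem mentioned above.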



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: (was: HDFS-13838.002.patch)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Status: Open  (was: Patch Available)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13879) FileSystem: Should allowSnapshot() and disallowSnapshot() be part of it?

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13879:
--
Affects Version/s: 3.1.1
 Target Version/s: 3.2.0

> FileSystem: Should allowSnapshot() and disallowSnapshot() be part of it?
> 
>
> Key: HDFS-13879
> URL: https://issues.apache.org/jira/browse/HDFS-13879
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Siyao Meng
>Priority: Major
>
> I wonder whether we should add allowSnapshot() and disallowSnapshot() to the 
> FileSystem abstract class.
> I think we should because createSnapshot(), renameSnapshot() and 
> deleteSnapshot() are already part of it.
> Any reason why we don't want to do this?
> Thanks!
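If they were added, a hypothetical sketch (assumption: mirroring how the base
class handles createSnapshot() today, i.e. throwing until a concrete file
system such as DistributedFileSystem overrides it) could look like:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not an actual patch: an abstract subclass standing in
// for the proposed change to FileSystem itself.
public abstract class SnapshotAdminFileSystem extends FileSystem {
  public void allowSnapshot(Path path) throws IOException {
    // Default behavior mirrors createSnapshot(): unsupported unless overridden.
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " doesn't support allowSnapshot");
  }

  public void disallowSnapshot(Path path) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " doesn't support disallowSnapshot");
  }
}
{code}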



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: HDFS-13838.002.patch
Status: Patch Available  (was: Reopened)

Changed the WebHDFS snapshotEnabledBit JSON key to "seBit" in rev 002, to keep 
it consistent with HttpFS SNAPSHOT_BIT_JSON = "seBit".

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597775#comment-16597775
 ] 

Siyao Meng edited comment on HDFS-13838 at 8/30/18 6:48 PM:


Reopening this Jira because the JSON key introduced in WebHDFS 
("snapshotEnabled") is not consistent with the HttpFS key ("seBit").


was (Author: smeng):
Reopening this Jira because the JSON key introduced in WebHDFS is not 
consistent with HttpFS.

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597775#comment-16597775
 ] 

Siyao Meng commented on HDFS-13838:
---

Reopening this Jira because the JSON key introduced in WebHDFS is not 
consistent with HttpFS.

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13838:
---
Fix Version/s: (was: 3.1.2)
   (was: 3.0.4)
   (was: 3.2.0)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reopened HDFS-13838:
---

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597751#comment-16597751
 ] 

Anu Engineer commented on HDDS-379:
---

Not a language lawyer nor a native speaker, but it looks like "subcommand" is 
conventionally written as a single word. If we agree, then the initial usage of 
{{CreateSubcommand}} seems to be the correct CamelCasing.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands, plus subcommand-specific and general arguments.
> 2.) It can work based on annotations, with very little additional boilerplate 
> code
> 3.) It's very well documented and easy to use
> 4.) It's licensed under the Apache License
> 5.) It supports tab autocompletion for bash and zsh, and colorful output
> 6.) Actively maintained project
> 7.) Adopted by other bigger projects (Groovy, JUnit, Log4j)
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> the other Ozone CLIs as well.
> The patch also fixes the cli (the name of the main class was wrong).
> It also requires HDDS-377 to be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597751#comment-16597751
 ] 

Anu Engineer edited comment on HDDS-379 at 8/30/18 6:24 PM:


Not a language lawyer nor a native speaker, but it looks like "subcommand" is 
conventionally written as a single word. If we agree, then the initial usage of 
{{CreateSubcommand}} seems to be the correct CamelCasing.

 

Ref: https://en.wiktionary.org/wiki/subcommand


was (Author: anu):
Not a language lawyer nor a native speaker, but it looks like "subcommand" is 
conventionally written as a single word. If we agree, then the initial usage of 
{{CreateSubcommand}} seems to be the correct CamelCasing.

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is the argument handling which 
> is mixed with the business logic.
> I propose to use a more modern argument handler library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands and subcommand specific and general arguments.
> 2.) It could work based on annotation with very few additional boilerplate 
> code
> 3.) It's very well documented and easy to use
> 4.) It's licenced under Apache licence
> 5.) It supports tab autocompletion for bash and zsh and colorful output
> 6.) Actively maintainer project
> 7.) Adopter by other bigger projects (groovy, junit, log4j)
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use similar approach for 
> other ozone cli as well.
> The patch also fixes the cli (the name of the main class was wrong). 
> It also requires HDDS-377 for the be compiled.
> I also deleted the TestSCMCli. It was turned off with an annotation and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list

2018-08-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597747#comment-16597747
 ] 

Siyao Meng commented on HDFS-13830:
---

Thanks [~jojochuang] for the review. And thanks [~templedf] for the comment.

I believe this patch is ready for commit. :D

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting 
> snapshottable directory list
> 
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, 
> HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch, 
> HDFS-13830.branch-3.0.004.patch
>
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support 
> to branch-3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reopened HDFS-13830:
---

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting 
> snapshottable directory list
> 
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, 
> HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch, 
> HDFS-13830.branch-3.0.004.patch
>
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support 
> to branch-3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13830:
--
Comment: was deleted

(was: [~jojochuang] Yeah. Thanks anyway!)

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting 
> snapshottable directory list
> 
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, 
> HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch, 
> HDFS-13830.branch-3.0.004.patch
>
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support 
> to branch-3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-351) Add chill mode state to SCM

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597728#comment-16597728
 ] 

Ajay Kumar edited comment on HDDS-351 at 8/30/18 6:02 PM:
--

[~elek] thanks for the review.

{quote}I don't understand why we need a Map for exitRules if we use just the 
one element everywhere. I think one single variable should be enough, or I 
would expect at least some loops.
For example, why don't we clean up the other exitRules, not just 
exitRules.get(CONT_EXIT_RULE).cleanup()?

And why do we need CONT_EXIT_RULE? Are the keys of exitRules used 
somewhere? (Maybe I missed something...){quote}
To make it more generic, so that more rules can be added down the line. 
CONT_EXIT_RULE is added to avoid hardcoding the string and to increase 
readability.

{quote}Until now we have tried to follow a practice where all the event wiring 
is defined in StorageContainerManager. The only exception was the EventWatcher, 
but we also discussed with Lokesh that the wiring logic could be moved out of 
the EventWatcher constructor. To be honest I have no preference; the only thing 
I would like is to handle all the subscription logic the same way: either wire 
everything in the SCM class, or move all the eventPublisher subscription logic 
into constructors.{quote}
Moved the wiring to the SCM constructor in patch v4.
{quote}It would be great to write a unit test for ContainerChillModeRule. I am 
not sure, but IMHO containerWithMinReplicas should be incremented only if 
containerReplicaMap.get(c.getContainerID()) was empty before. A unit test 
would convince me...
{quote}
Removed containerReplicaMap to simplify the patch. The test case in TestSCM 
tests the exit logic.
{quote}Not clear why you do containers.remove(c.getContainerID()); at 
SCMChillModeManager:L171{quote}
We want to track only the containers whose replicas have not been reported yet.
{quote}Why did you remove eventQueue.addHandler(SCMEvents.START_REPLICATION, 
replicationStatus); from SCM?{quote}
That was a mistake; I mistook it for the emitting of the START_REPLICATION 
event. Reverted this in the new patch.


was (Author: ajayydv):
[~elek] thanks for the review.

{quote}I don't understand why we need a Map for exitRules if we use just the 
one element everywhere. I think one single variable should be enough, or I 
would expect at least some loops.
For example, why don't we clean up the other exitRules, not just 
exitRules.get(CONT_EXIT_RULE).cleanup()?

And why do we need CONT_EXIT_RULE? Are the keys of exitRules used 
somewhere? (Maybe I missed something...){quote}
To make it more generic, so that more rules can be added down the line. 
CONT_EXIT_RULE is added to avoid hardcoding the string and to increase 
readability.

{quote}Until now we have tried to follow a practice where all the event wiring 
is defined in StorageContainerManager. The only exception was the EventWatcher, 
but we also discussed with Lokesh that the wiring logic could be moved out of 
the EventWatcher constructor. To be honest I have no preference; the only thing 
I would like is to handle all the subscription logic the same way: either wire 
everything in the SCM class, or move all the eventPublisher subscription logic 
into constructors.{quote}
Moved the wiring to the SCM constructor in patch v4.
{quote}It would be great to write a unit test for ContainerChillModeRule. I am 
not sure, but IMHO containerWithMinReplicas should be incremented only if 
containerReplicaMap.get(c.getContainerID()) was empty before. A unit test 
would convince me...
{quote}
Removed containerReplicaMap to simplify the patch. The test case in TestSCM 
tests the exit logic.
{quote}Not clear why you do containers.remove(c.getContainerID()); at 
SCMChillModeManager:L171{quote}
We want to track only the containers whose replicas have not been reported yet.
{quote}Why did you remove eventQueue.addHandler(SCMEvents.START_REPLICATION, 
replicationStatus); from SCM?{quote}
That was a mistake; I mistook it for the emitting of the START_REPLICATION 
event.

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597735#comment-16597735
 ] 

Chao Sun commented on HDFS-13749:
-

Attached patch v0. IMO it's a little rough and I'm not sure if directly calling 
{{HAServiceProtocolClientSideTranslatorPB}} is a good idea here. I'll work on an 
improvement. Meanwhile, comments are welcome.

Sorry for being late!
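For reference, a rough sketch of such a state probe (assumptions: the
HAServiceProtocolClientSideTranslatorPB(InetSocketAddress, Configuration)
constructor, and an OBSERVER value in HAServiceState as introduced on the
HDFS-12943 branch; this is not the attached patch):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceStatus;
import org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB;

// Sketch: ask one NameNode for its HA status and report whether it is an
// observer. A real client would cache the result and handle failover.
public class ObserverProbe {
  public static boolean isObserver(InetSocketAddress rpcAddr,
      Configuration conf) throws IOException {
    HAServiceProtocol proxy =
        new HAServiceProtocolClientSideTranslatorPB(rpcAddr, conf);
    HAServiceStatus status = proxy.getServiceStatus();
    return status.getState() == HAServiceProtocol.HAServiceState.OBSERVER;
  }
}
{code}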

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: HDFS-13749-HDFS-12943.000.patch

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: (was: HDFS-13749-HDFS-12943.000.patch)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Description: In HDFS-12976 currently we discover NameNode state by calling 
{{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
this by using {{HAServiceProtocol#getServiceStatus}}.  (was: Currently 
{{HAServiceProtocol#getServiceStatus}} requires super user privilege. 
Therefore, as a temporary solution, in HDFS-12976 we discover NameNode state by 
calling {{reportBadBlocks}}. Here, we'll properly implement this by adding a 
new method in client protocol to get the NameNode state.)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-30 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Summary: Use getServiceStatus to discover observer namenodes  (was: 
Implement a new client protocol method to get NameNode state)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> Currently {{HAServiceProtocol#getServiceStatus}} requires super user 
> privilege. Therefore, as a temporary solution, in HDFS-12976 we discover 
> NameNode state by calling {{reportBadBlocks}}. Here, we'll properly implement 
> this by adding a new method in client protocol to get the NameNode state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-08-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597728#comment-16597728
 ] 

Ajay Kumar commented on HDDS-351:
-

[~elek] thanks for the review.

{quote}I don't understand why we need a Map for exitRules if we use just the 
one element everywhere. I think one single variable should be enough, or I 
would expect at least some loops.
For example, why don't we clean up the other exitRules, not just 
exitRules.get(CONT_EXIT_RULE).cleanup()?

And why do we need CONT_EXIT_RULE? Are the keys of exitRules used 
somewhere? (Maybe I missed something...){quote}
To make it more generic, so that more rules can be added down the line. 
CONT_EXIT_RULE is added to avoid hardcoding the string and to increase 
readability.

{quote}Until now we have tried to follow a practice where all the event wiring 
is defined in StorageContainerManager. The only exception was the EventWatcher, 
but we also discussed with Lokesh that the wiring logic could be moved out of 
the EventWatcher constructor. To be honest I have no preference; the only thing 
I would like is to handle all the subscription logic the same way: either wire 
everything in the SCM class, or move all the eventPublisher subscription logic 
into constructors.{quote}
Moved the wiring to the SCM constructor in patch v4.
{quote}It would be great to write a unit test for ContainerChillModeRule. I am 
not sure, but IMHO containerWithMinReplicas should be incremented only if 
containerReplicaMap.get(c.getContainerID()) was empty before. A unit test 
would convince me...
{quote}
Removed containerReplicaMap to simplify the patch. The test case in TestSCM 
tests the exit logic.
{quote}Not clear why you do containers.remove(c.getContainerID()); at 
SCMChillModeManager:L171{quote}
We want to track only the containers whose replicas have not been reported yet.
{quote}Why did you remove eventQueue.addHandler(SCMEvents.START_REPLICATION, 
replicationStatus); from SCM?{quote}
That was a mistake; I mistook it for the emitting of the START_REPLICATION 
event.
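To make the map-of-rules argument concrete, a small illustrative sketch (all
names are hypothetical except exitRules and CONT_EXIT_RULE, which come from the
patch discussion):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not the HDDS-351 patch: keeping the exit rules in a
// map keyed by constants lets new rules be registered down the line and
// checked in a loop.
public class ChillModeRulesSketch {
  interface ExitRule {
    boolean validate(); // has this rule been satisfied?
    void cleanup();     // drop bookkeeping once chill mode is exited
  }

  // Constant avoids hard-coding the rule name at every call site.
  static final String CONT_EXIT_RULE = "ContainerChillModeRule";

  private final Map<String, ExitRule> exitRules = new HashMap<>();

  void register(String name, ExitRule rule) {
    exitRules.put(name, rule);
  }

  boolean canExitChillMode() {
    // With one rule this is a single lookup, but the loop is what makes
    // adding more rules cheap.
    return exitRules.values().stream().allMatch(ExitRule::validate);
  }

  void exitChillMode() {
    exitRules.values().forEach(ExitRule::cleanup);
  }
}
{code}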

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-08-30 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: (was: HDDS-351.04.patch)

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597729#comment-16597729
 ] 

Anu Engineer commented on HDDS-98:
--

+1, I will wait for Jenkins to come back before committing this. We might have 
to resubmit this patch on Monday to get a Jenkins run.

[~ajayydv] There is a temporary fix that builds the Success and Error messages 
until we move that into the structure as you suggested. I am OK with committing 
this, but due to Jenkins issues I can only commit it by Tuesday/Wednesday, so 
we may be able to fix it properly in the meantime. Please let me know if you 
have any comments on this.

[~jnp] The format seems to be the way you have suggested. Please comment if you 
see any issues, otherwise, I will get this committed once Jenkins is back.



> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


