[jira] [Commented] (HDFS-14129) RBF : Create new policy provider for router

2018-12-05 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711046#comment-16711046
 ] 

Yiqun Lin commented on HDFS-14129:
--

+1 for this. The new policy provider will be a sub-policy of 
{{HDFSPolicyProvider}}, if I understand this correctly.

> RBF : Create new policy provider for router
> ---
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
>
> Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to this 
> class for the Router; it's better to create a new policy provider for the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I hit this issue when verifying HDFS-14079 on a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}
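
For illustration, a minimal sketch of what such a dedicated provider could 
look like; the class name, the {{security.router.admin.protocol.acl}} key, and 
the choice to extend {{HDFSPolicyProvider}} (so the Router inherits the 
existing HDFS services, per the comment above) are assumptions, not the 
committed fix:

{code:java}
package org.apache.hadoop.hdfs.server.federation.router;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hdfs.HDFSPolicyProvider;
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
import org.apache.hadoop.security.authorize.Service;

// Sketch: register RouterAdminProtocol alongside the inherited HDFS
// services so service-level authorization can resolve it.
public class RouterPolicyProvider extends HDFSPolicyProvider {

  // Hypothetical ACL key, following the security.*.protocol.acl naming.
  private static final String SECURITY_ROUTER_ADMIN_PROTOCOL_ACL =
      "security.router.admin.protocol.acl";

  @Override
  public Service[] getServices() {
    List<Service> services =
        new ArrayList<>(Arrays.asList(super.getServices()));
    services.add(new Service(SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
        RouterAdminProtocol.class));
    return services.toArray(new Service[0]);
  }
}
{code}

The Router admin server would then call {{refreshServiceAcl(conf, new 
RouterPolicyProvider())}} instead of instantiating {{HDFSPolicyProvider}} 
directly.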



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14129) RBF : Create new policy provider for router

2018-12-05 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14129:


Assignee: Ranith Sardar

> RBF : Create new policy provider for router
> ---
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
>
> Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to this 
> class for the Router; it's better to create a new policy provider for the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I hit this issue when verifying HDFS-14079 on a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14129) RBF : Create new policy provider for router

2018-12-05 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14129:
-

 Summary: RBF : Create new policy provider for router
 Key: HDFS-14129
 URL: https://issues.apache.org/jira/browse/HDFS-14129
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-13532
Reporter: Surendra Singh Lilhore


Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to this 
class for the Router; it's better to create a new policy provider for the Router.
{code:java}
// Set service-level authorization security policy
if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
}
{code}
I hit this issue when verifying HDFS-14079 on a secure cluster.
{noformat}
./bin/hdfs dfsrouteradmin -ls /
ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14124) EC : Support Directory Level EC Command (set/get/unset EC policy) through REST API

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710979#comment-16710979
 ] 

Hadoop QA commented on HDFS-14124:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
237 unchanged - 0 fixed = 238 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950782/HDFS-14124-03.patch |
| Optional Tests |  dupname  asflicense  compile  

[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710953#comment-16710953
 ] 

Hadoop QA commented on HDFS-14116:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
241 unchanged - 1 fixed = 243 total (was 242) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14116 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950779/HDFS-14116-HDFS-12943.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3d8fbfcdd0bc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 47d7260 |
| maven 

[jira] [Issue Comment Deleted] (HDFS-13923) Add a configuration to turn on/off observer reads

2018-12-05 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng updated HDFS-13923:
-
Comment: was deleted

(was: Hey [~csun], that's an interesting idea; I'm studying this question. If 
you like, please assign it to me and I will try to solve this problem. Thank 
you very much.)

> Add a configuration to turn on/off observer reads
> -
>
> Key: HDFS-13923
> URL: https://issues.apache.org/jira/browse/HDFS-13923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Priority: Major
>
> In some situations, having a config to turn observer reads on/off dynamically 
> may be useful. For instance, some applications may be sensitive to data 
> freshness and always want to go directly to the active NN. In a more complex 
> scenario, services such as Presto may want to apply observer reads only to 
> certain types of queries. In that case, simply changing 
> {{dfs.client.failover.proxy.provider.}} may not be enough, since the 
> FileSystem cache (which is usually turned on) will ignore the change and 
> still use the same FileSystem object.
> Here I'm proposing to add a flag in {{HdfsClientConfigKeys}}, such as 
> {{dfs.client.observer.reads.enabled}}, that can be used to dynamically turn 
> observer reads on/off. The FileSystem cache key should also take this flag 
> into account in its {{hashCode}} and {{equals}} implementations, so that 
> different FileSystem objects are used depending on the flag.
>  
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.
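
A minimal sketch of the cache-key idea above; this is not the real 
{{FileSystem.Cache.Key}} (which also keys on the UGI, among other things), and 
the class and flag names are illustrative only:

{code:java}
import java.util.Objects;

// Hedged sketch: a cache key that includes a hypothetical observer-reads
// flag, so toggling the flag yields a different cached FileSystem instance.
final class CacheKeySketch {
  private final String scheme;
  private final String authority;
  private final boolean observerReadsEnabled; // hypothetical flag

  CacheKeySketch(String scheme, String authority,
      boolean observerReadsEnabled) {
    this.scheme = scheme;
    this.authority = authority;
    this.observerReadsEnabled = observerReadsEnabled;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof CacheKeySketch)) {
      return false;
    }
    CacheKeySketch k = (CacheKeySketch) o;
    return scheme.equals(k.scheme)
        && authority.equals(k.authority)
        && observerReadsEnabled == k.observerReadsEnabled;
  }

  @Override
  public int hashCode() {
    return Objects.hash(scheme, authority, observerReadsEnabled);
  }
}
{code}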



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14124) EC : Support Directory Level EC Command (set/get/unset EC policy) through REST API

2018-12-05 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14124:

Attachment: HDFS-14124-03.patch

> EC : Support Directory Level EC Command (set/get/unset EC policy) through 
> REST API
> --
>
> Key: HDFS-14124
> URL: https://issues.apache.org/jira/browse/HDFS-14124
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, httpfs, webhdfs
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14124-01.patch, HDFS-14124-02.patch, 
> HDFS-14124-03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14128) .Trash location

2018-12-05 Thread George Huang (JIRA)
George Huang created HDFS-14128:
---

 Summary: .Trash location
 Key: HDFS-14128
 URL: https://issues.apache.org/jira/browse/HDFS-14128
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.0.0
Reporter: George Huang
Assignee: George Huang


Currently, some customers have user accounts that are functional IDs (FIDs) 
used to manage applications and application data under the path /data/FID. 
These FIDs also get a home directory under the /user path. The users' home 
directories are limited by a 60 GB space quota. When these FIDs delete data, 
the customer's deletion policy places it in the /user//.Trash location, and 
the accounts run over quota.

For now they are increasing quotas for these functional users, but given 
growing applications they would like the .Trash location to be configurable, 
or something like /trash/\{userid} that is owned by the user.

What should the configurable path look like? For example, one consideration is 
whether to configure it per user or per cluster, etc.

Here is the current behavior:

fs.TrashPolicyDefault: Moved: 'hdfs://ns1/user/hdfs/test/1.txt' to trash at: 
hdfs://ns1/user/hdfs/.Trash/Current/user/hdfs/test/1.txt

and for a path under an encryption zone:

fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/2.txt' to trash at: 
hdfs://ns1/scale/.Trash/hdfs/Current/scale/2.txt
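
One heavily hedged sketch of a possible direction: Hadoop already lets a 
deployment swap the trash policy via the {{fs.trash.classname}} key, so a 
per-user {{/trash/\{userid}}} root could be prototyped by overriding the 
trash-directory resolution. The class name and the fixed {{/trash}} root below 
are assumptions for discussion only:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.TrashPolicyDefault;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical sketch: resolve trash under /trash/{userid} instead of the
// user's home directory. Checkpoint and emptier behavior is inherited from
// TrashPolicyDefault and would also need adjusting in a real patch.
public class SharedRootTrashPolicy extends TrashPolicyDefault {
  @Override
  public Path getCurrentTrashDir() {
    try {
      String user = UserGroupInformation.getCurrentUser().getShortUserName();
      return new Path("/trash/" + user + "/Current");
    } catch (IOException e) {
      throw new RuntimeException("Cannot determine current user", e);
    }
  }
}
{code}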

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710832#comment-16710832
 ] 

Chao Sun commented on HDFS-14116:
-

Attached the patch v0 for the first two issues. [~shv]: if you are OK, I'll 
open a JIRA for the last issue. It seems more involved than the first two.

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14116-HDFS-12943.000.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).
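
A sketch of option 1 above, written as a Java 8 default method so that 
{{NameNodeHAProxyFactory}} keeps compiling unchanged; the exact placement and 
signature are assumptions, not the committed change:

{code:java}
import org.apache.hadoop.ipc.AlignmentContext;

// Sketch: declare setAlignmentContext on the common parent interface as a
// no-op default. ClientHAProxyFactory overrides it, and the constructor can
// then call factory.setAlignmentContext(alignmentContext) without any cast.
public interface HAProxyFactory<T> {
  default void setAlignmentContext(AlignmentContext alignmentContext) {
    // No-op for factories (e.g. NameNodeHAProxyFactory) that do not
    // support client-side state alignment.
  }
}
{code}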



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14116:

Attachment: HDFS-14116-HDFS-12943.000.patch

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14116-HDFS-12943.000.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14116:

Status: Patch Available  (was: Open)

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14116-HDFS-12943.000.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12943) Consistent Reads from Standby Node

2018-12-05 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12943:
---
Target Version/s: 3.3.0
  Status: Patch Available  (was: Open)

Submitting a unified patch for the HDFS-12943 branch for review and a Jenkins 
run.

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710795#comment-16710795
 ] 

Chao Sun edited comment on HDFS-14116 at 12/6/18 12:36 AM:
---

Agree the last issue seems a bit different, and I think it is not restricted to 
{{ObserverReadProxyProvider}}. If you change the above command to use 
{{RequestHedgingProxyProvider}} it will run into the same problem.


was (Author: csun):
Agree this seems like a separate issue, and I think this problem is not 
restricted to {{ObserverReadProxyProvider}}. If you change the above command to 
use {{RequestHedgingProxyProvider}}, it will run into the same issue.

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12943) Consistent Reads from Standby Node

2018-12-05 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12943:
---
Attachment: HDFS-12943-001.patch

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710795#comment-16710795
 ] 

Chao Sun commented on HDFS-14116:
-

Agree this seems like a separate issue, and I think this problem is not 
restricted to {{ObserverReadProxyProvider}}. If you change the above command to 
use {{RequestHedgingProxyProvider}}, it will run into the same issue.

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710791#comment-16710791
 ] 

Chen Liang commented on HDFS-14116:
---

Thanks for the finding [~shv]. This error seems different, though, because it 
triggers a different code path and causes a different class cast. For fsck to 
work, as a current workaround, we can override configs to use the configured 
proxy provider in the fsck command. For example, if we have fs.defaultFS=ns1, 
we can call fsck as:
{code}
hdfs fsck \
  -Ddfs.client.failover.proxy.provider.ns1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
  hdfs://ns1/
{code}
Still need to fix this, though.

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because the factory may not be 
> castable here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}. 
> This happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in {{HAProxyFactory}}, which is the parent 
> of both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a 
> {{ClientHAProxyFactory}}, by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710786#comment-16710786
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Please note that the hadoop-ozone and hadoop-hdds in the current trunk 
don't depend on the in-tree hadoop-hdfs/common projects. They depend on 
hadoop-3.2-SNAPSHOT as of now and we would like to switch to a stable release 
as soon as possible.

That won't always be the case unless Ozone becomes its own project.  There's no 
value in creating technical debt here.

bq. Technically it's possible

Yup: HDDS-146, for example, changed start-build-env.sh (and introduced a bug).  
So clearly there is still some dependence despite everything said above.

bq. b): I like the idea and I tried to implement it. Would you be so kind as 
to review the v2 patch of YETUS-631?

YETUS-631 isn't an implementation of b at all.  [and, FWIW, I'm going to reject 
that patch.  I've got a better fix as part of  YETUS-723.]

bq. I checked the last two commits: If I understood well there was an 
additional property in the root pom.xml for ozone version (low risk) and with 
the last commit it was removed, so the parent pom.xml shouldn't be modified any 
more. 

Irrelevant.  A change is a change.  There is no way to guarantee that further 
changes won't leak outside these two modules short of not having any other code 
in the branch/repo.

My binding vote remains -1.

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre-commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain per-build customization.
> The hadoop personality tries to identify which projects should be built and 
> uses partial builds to build only the required subprojects, because a full 
> build is very time-consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast.
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test).
> 3.) We prefer to do a full build and full unit test run for the hadoop-ozone 
> and hadoop-hdds subprojects (for example, the hadoop-ozone integration tests 
> should always be executed, as they contain many generic unit tests).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710755#comment-16710755
 ] 

Hadoop QA commented on HDDS-115:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDDS-4 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} root: The patch generated 15 new + 3 unchanged - 
0 fixed = 18 total (was 3) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
11s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.om.TestOmMetrics |
|   | hadoop.ozone.container.metrics.TestContainerMetrics |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerWithTLS |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950760/HDDS-115-HDDS-4.006.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 32ab14d72037 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | HDDS-4 / 5227f75 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1881/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1881/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1881/testReport/ |
| Max. process+thread count | 1133 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-ozone hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
| 

[jira] [Work started] (HDDS-902) MultipartUpload: S3 API for uploading a part file

2018-12-05 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-902 started by Bharat Viswanadham.
---
> MultipartUpload: S3 API for uploading a part file
> -
>
> Key: HDDS-902
> URL: https://issues.apache.org/jira/browse/HDDS-902
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is created to track the work required for uploading a part.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS

2018-12-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710705#comment-16710705
 ] 

Xiaoyu Yao commented on HDDS-115:
-

Fixed a bunch of unit test failures; some are not specific to this patch.

> GRPC: Support secure gRPC endpoint with mTLS 
> -
>
> Key: HDDS-115
> URL: https://issues.apache.org/jira/browse/HDDS-115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-115-HDDS-4.001.patch, HDDS-115-HDDS-4.002.patch, 
> HDDS-115-HDDS-4.003.patch, HDDS-115-HDDS-4.004.patch, 
> HDDS-115-HDDS-4.005.patch, HDDS-115-HDDS-4.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS

2018-12-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-115:

Attachment: HDDS-115-HDDS-4.006.patch

> GRPC: Support secure gRPC endpoint with mTLS 
> -
>
> Key: HDDS-115
> URL: https://issues.apache.org/jira/browse/HDDS-115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-115-HDDS-4.001.patch, HDDS-115-HDDS-4.002.patch, 
> HDDS-115-HDDS-4.003.patch, HDDS-115-HDDS-4.004.patch, 
> HDDS-115-HDDS-4.005.patch, HDDS-115-HDDS-4.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710699#comment-16710699
 ] 

Jitendra Nath Pandey commented on HDDS-870:
---

There seem to be some inconsistencies between int and long. In some places we 
use {{long}} to track the size of buffered data, while some methods treat it 
as {{int}}. For example, {{computeBufferData}} returns an {{int}}.
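
For illustration only (not code from this patch): keeping sizes as {{long}} 
end to end and failing fast on any narrowing makes such mismatches visible 
instead of silently truncating:

{code:java}
// Hedged sketch; the method and variable names are illustrative.
public final class SizeNarrowingSketch {
  static int toIntSize(long size) {
    // Throws ArithmeticException instead of silently truncating.
    return Math.toIntExact(size);
  }

  public static void main(String[] args) {
    long bufferedDataSize = 128L * 1024 * 1024; // fits in an int
    System.out.println(toIntSize(bufferedDataSize));
  }
}
{code}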

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer for caching data. 
> This can be replaced with an array of buffers of the configured flush buffer 
> size, for handling 2-node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710668#comment-16710668
 ] 

Bharat Viswanadham commented on HDDS-864:
-

{quote}As this patch is big enough I propose to do it in a next (smaller) patch.
{quote}
I am fine with doing it for S3Table in further Jiras.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables 
> (a typed Table instead of a raw byte[]-based Table).
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.
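
As a hedged sketch of the typed-table usage in question (the {{getTable(name, 
keyClass, valueClass)}} overload and the table name follow the HDDS-748 
pattern, but the exact signatures here are assumptions):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
import org.apache.hadoop.utils.db.DBStore;
import org.apache.hadoop.utils.db.Table;

// With a typed table, callers read and write domain objects; the codecs
// registered for the key/value types do the byte[] conversion internally.
public class TypedVolumeTableSketch {
  static OmVolumeArgs readVolume(DBStore store, String volumeKey)
      throws IOException {
    Table<String, OmVolumeArgs> volumeTable =
        store.getTable("volumeTable", String.class, OmVolumeArgs.class);
    return volumeTable.get(volumeKey);
  }
}
{code}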



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710667#comment-16710667
 ] 

Bharat Viswanadham edited comment on HDDS-864 at 12/5/18 10:02 PM:
---

Thank you [~elek] for the updated patch. The patch almost LGTM.

A few comments:

1. VolumeManagerImpl.java:
Line 246: {{VolumeInfo newVolumeInfo = newVolumeArgs.getProtobuf();}} can be 
removed, as it is not used.
Line 247: {{metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs);}} 
should be updated to 
{{metadataManager.getVolumeTable().put(dbVolumeKey, newVolumeArgs);}}.
Also, in createVolume() we write {{args}} into the volume table with 
{{metadataManager.getVolumeTable().putWithBatch(batch, dbVolumeKey, args);}} 
but the creation time is never set, since the original {{args}} does not have 
it. That is the reason for some of the test failures in TestOzoneRpcClient.

2. For OmKeyInfoCodec and the other codecs, in toPersistedFormat(), as 
discussed offline, we can add a Precondition check for null, since technically 
we should not get null there (this will also help identify bugs where null 
values are passed); see the sketch after this comment.

3. I think once 1 and 2 are fixed, some of the failing unit tests will pass. 
Please have a look at the other tests to check whether those are real issues.
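
A sketch of the null check suggested in point 2 above; the helper shape is an 
assumption mirroring the HDDS-748-style codecs, not the actual OmKeyInfoCodec:

{code:java}
import com.google.common.base.Preconditions;
import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;

// Hedged sketch: fail fast on null input in toPersistedFormat() so that
// callers passing null surface immediately instead of as NPEs deeper down.
public class OmKeyInfoSerializationSketch {
  public static byte[] toPersistedFormat(OmKeyInfo object) {
    Preconditions.checkNotNull(object,
        "Null object can't be converted to byte array.");
    return object.getProtobuf().toByteArray();
  }
}
{code}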


was (Author: bharatviswa):
Thank you [~elek] for the updated patch. The patch almost LGTM.

A few comments:

1. VolumeManagerImpl.java:
Line 246: {{VolumeInfo newVolumeInfo = newVolumeArgs.getProtobuf();}} can be 
removed, as it is not used.
Line 247: {{metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs);}} 
should be updated to 
{{metadataManager.getVolumeTable().put(dbVolumeKey, newVolumeArgs);}}.

2. In createVolume(), we write {{args}} into the volume table with 
{{metadataManager.getVolumeTable().putWithBatch(batch, dbVolumeKey, args);}} 
but the creation time is never set, since the original {{args}} does not have 
it. That is the reason for some of the test failures in TestOzoneRpcClient.

3. For OmKeyInfoCodec and the other codecs, in toPersistedFormat(), as 
discussed offline, we can add a Precondition check for null, since technically 
we should not get null there (this will also help identify bugs where null 
values are passed).

4. I think once 1 and 2 are fixed, some of the failing unit tests will pass. 
Please have a look at the other tests to check whether those are real issues.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables 
> (a typed Table instead of a raw byte[]-based Table).
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710667#comment-16710667
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Thank you [~elek] for the updated patch. The patch almost LGTM.

A few comments:

1. VolumeManagerImpl.java:
Line 246: {{VolumeInfo newVolumeInfo = newVolumeArgs.getProtobuf();}} can be 
removed, as it is not used.
Line 247: {{metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs);}} 
should be updated to 
{{metadataManager.getVolumeTable().put(dbVolumeKey, newVolumeArgs);}}.

2. In createVolume(), we write {{args}} into the volume table with 
{{metadataManager.getVolumeTable().putWithBatch(batch, dbVolumeKey, args);}} 
but the creation time is never set, since the original {{args}} does not have 
it. That is the reason for some of the test failures in TestOzoneRpcClient.

3. For OmKeyInfoCodec and the other codecs, in toPersistedFormat(), as 
discussed offline, we can add a Precondition check for null, since technically 
we should not get null there (this will also help identify bugs where null 
values are passed).

4. I think once 1 and 2 are fixed, some of the failing unit tests will pass. 
Please have a look at the other tests to check whether those are real issues.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables 
> (a typed Table instead of a raw byte[]-based Table).
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-05 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-14059.

Resolution: Done

Thanks [~zero45]! Some cool tests there with multiple observers. Great that 
there were no problems with DTs and failover.
We still have an outstanding issue to support automatic failover with ZKFC; 
will create a jira for that.
Closing this one as done.

> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-12-05 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710648#comment-16710648
 ] 

Hanisha Koneru commented on HDDS-858:
-

Pinging [~anu], [~msingh] for review please.

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM, as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated 
> Ratis state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when OM starts. The client-OM 
> communication and OM state would not be changed in this Jira.
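>
> A minimal sketch of what that could look like (hypothetical id, port, and 
> class name; assuming the Ratis RaftServer builder API and its no-op 
> BaseStateMachine):
> {code:java}
> import java.io.IOException;
> import java.util.UUID;
>
> import org.apache.ratis.conf.RaftProperties;
> import org.apache.ratis.protocol.RaftGroup;
> import org.apache.ratis.protocol.RaftGroupId;
> import org.apache.ratis.protocol.RaftPeer;
> import org.apache.ratis.protocol.RaftPeerId;
> import org.apache.ratis.server.RaftServer;
> import org.apache.ratis.statemachine.impl.BaseStateMachine;
>
> public final class StandaloneOmRatisServerSketch {
>   public static RaftServer start() throws IOException {
>     // Single-peer (standalone) group: no replication yet.
>     RaftPeer peer = new RaftPeer(RaftPeerId.valueOf("om1"), "127.0.0.1:9872");
>     RaftGroup group = RaftGroup.valueOf(
>         RaftGroupId.valueOf(UUID.randomUUID()), peer);
>     RaftServer server = RaftServer.newBuilder()
>         .setServerId(peer.getId())
>         .setGroup(group)
>         .setProperties(new RaftProperties())
>         .setStateMachine(new BaseStateMachine()) // no-op for this step
>         .build();
>     server.start();
>     return server;
>   }
> }
> {code}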



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14058) Test reads from standby on a secure cluster with IP failover

2018-12-05 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-14058.

Resolution: Done

Thanks, [~vagarychen]!
This was a lot of testing. With all related issues resolved and retested I 
think we can close this one.
Load testing and performance tuning should go into the next step.

> Test reads from standby on a secure cluster with IP failover
> 
>
> Key: HDFS-14058
> URL: https://issues.apache.org/jira/browse/HDFS-14058
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: dfsio_crs.no-crs.txt, dfsio_crs.with-crs.txt
>
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{IPFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-872) Add Dockerfile and skaffold config to deploy ozone dev build to k8s

2018-12-05 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710611#comment-16710611
 ] 

Anu Engineer commented on HDDS-872:
---

+1, thanks for the addition of this feature. I appreciate it.

 

> Add Dockerfile and skaffold config to deploy ozone dev build to k8s
> ---
>
> Key: HDDS-872
> URL: https://issues.apache.org/jira/browse/HDDS-872
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-872.001.patch, HDDS-872.002.patch, 
> HDDS-872.003.patch
>
>
> Skaffold is a simple helper tool to
> 1) Build docker image
> 2) publish the image to a docker repo 
> 3) update the k8s resource files to use the uploaded image
> I propose to upload some example k8s resources + skaffold.yaml + dockerfile 
> to make it easier to deploy the dev server to a k8s cluster.
> As a first step I would add it only to the hadoop-ozone/dist/src folders 
> (which is not part of the distribution). After some experimental dev usage we 
> can also provide k8s resources as part of the dev build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2018-12-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710602#comment-16710602
 ] 

Elek, Marton commented on HDDS-851:
---

[~jnpandey] Would you be so kind as to help create the repository?

> Provide official apache docker image for Ozone
> --
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz, ozonedocker.png
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images I propose to 
> provide apache/ozone docker images which include the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone:<version> images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easily as
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710596#comment-16710596
 ] 

Konstantin Shvachko commented on HDFS-14116:


Noticed the same problem with fsck
{code}
Exception in thread "main" java.lang.ClassCastException: 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler
 cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:666)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:449)
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:667)
at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:650)
at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:268)
at 
org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:266)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:332)
at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:159)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:155)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:402)
{code}

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line 
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because it is possible that the 
> factory cannot be cast here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor will be called, and there are two paths that could call into this:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to 
> {{ClientHAProxyFactory}}; this happens when, for example, running 
> NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of 
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory 
> by, say, having an if check with reflection (a sketch using instanceof 
> follows below),
> depending on whether it makes sense to have an alignment context for the 
> case (1) calling code paths.
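>
> A minimal sketch of option 2, using an instanceof guard rather than 
> reflection (illustration only, not the committed fix):
> {code:java}
> // Only the client-side factory knows about alignment context, so guard
> // the cast; NameNodeHAProxyFactory simply skips it.
> if (factory instanceof ClientHAProxyFactory) {
>   ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> }
> {code}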



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710528#comment-16710528
 ] 

Hadoop QA commented on HDDS-864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} root: The patch generated 0 new + 1 unchanged - 3 
fixed = 1 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 37s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
47s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rest.TestOzoneRestClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950741/HDDS-864.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 8b0382b9ae60 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/testReport/ |
| Max. process+thread count | 1128 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: 

[jira] [Created] (HDDS-903) Compile OzoneManager Ratis protobuf files with proto3 compiler

2018-12-05 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-903:
---

 Summary: Compile OzoneManager Ratis protobuf files with proto3 
compiler
 Key: HDDS-903
 URL: https://issues.apache.org/jira/browse/HDDS-903
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Ratis requires the payload to be a proto3 ByteString. If the OM Ratis protos are 
compiled using proto2, we would have to make the conversion to proto3 for every 
request submitted to Ratis. Instead, we can compile OM's Ratis protocol 
files using the proto3 compiler.
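
A minimal sketch of the per-request copy this avoids (hypothetical message 
name; assuming Ratis's shaded protobuf package from the 0.3 line):
{code:java}
import org.apache.ratis.shaded.com.google.protobuf.ByteString;

public final class Proto2ToProto3CopySketch {
  // 'omRequest' stands for any proto2-compiled OM message (hypothetical).
  static ByteString toRatisPayload(com.google.protobuf.Message omRequest) {
    // The proto2 runtime's ByteString is a different class from the
    // shaded proto3 ByteString Ratis expects, so the payload has to be
    // serialized and copied once per request. Messages compiled with the
    // proto3 compiler against the shaded runtime can skip this copy.
    return ByteString.copyFrom(omRequest.toByteArray());
  }
}
{code}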



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710480#comment-16710480
 ] 

Elek, Marton commented on HDDS-891:
---

Thanks for the feedback, [~aw].

Please note that hadoop-ozone and hadoop-hdds in the current trunk don't 
depend on the in-tree hadoop-hdfs/common projects. They depend on 
hadoop-3.2-SNAPSHOT as of now, and we would like to switch to a stable release 
as soon as possible.

Therefore it doesn't work to modify both hdds/ozone AND hadoop in the same 
patch. Technically it's possible, but the ozone/hdds code won't see the new 
changes in the hdfs/common projects, as those are 3.3-SNAPSHOT.

We can add an additional check for HDDS issues to ensure that the patches 
modify only the ozone/hdds subprojects and not the core hadoop/hdfs/common 
projects. With this modification I think it's safe to do a full ozone/hdds-only 
build for the hdds precommit hook. What do you think about it?

About b): I like the idea and I tried to implement it. Would you be so kind as 
to review the v2 patch of YETUS-631?

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone precommit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain the personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed, as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710489#comment-16710489
 ] 

Elek, Marton commented on HDDS-891:
---

bq.  (e.g., the last two changes to /pom.xml came from HDDS!).

I checked the last two commits: If I understood well there was an additional 
property in the root pom.xml for ozone version (low risk) and with the last 
commit it was removed, so the parent pom.xml shouldn't be modified any more. 

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone precommit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain the personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed, as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14116) Fix a potential class cast error in ObserverReadProxyProvider

2018-12-05 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HDFS-14116:
---

Assignee: Chao Sun

> Fix a potential class cast error in ObserverReadProxyProvider
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line 
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because it is possible that the 
> factory cannot be cast here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor will be called, and there are two paths that could call into this:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to 
> {{ClientHAProxyFactory}}; this happens when, for example, running 
> NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of 
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory 
> by, say, having an if check with reflection,
> depending on whether it makes sense to have an alignment context for the 
> case (1) calling code paths.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-05 Thread Plamen Jeliazkov (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710454#comment-16710454
 ] 

Plamen Jeliazkov commented on HDFS-14059:
-

Apologies for the delay on this -- I ran into some difficulty in setup -- 
anyway, here is my testing experience:

I tested in GCP with 6 nodes, all CentOS 7, 3 NameNodes (active + standby + 
obs), 3 DNs/NMs, in secure mode with 1 RM, 1 ZK, and (initially) ZKFCs on each 
NN that I eventually terminated.

I tested initially as the hdfs user and then switched over to my own user and 
principal once things were stable. I ran the following simple scenarios:

TestDFSIO - writing 3 x 5GB files, with a manual failover between the active 
and standby. 
TestDFSIO - appending 3 x 1GB files, with a manual failover between the active 
and standby. 
TestDFSIO - reading 3 x 5GB files, with 2 standby NNs and only the observer 
active during the read mappers -- the reads completed, the job went to the 
reduce phase, and it needed the active NN to complete.
TeraGen/TeraSort/TeraValidate - no failover; during the TeraValidate phase I 
ran with 2 observers and no active.

So from my end things look good as far as ConfiguredFailoverProxy is concerned. 
I never experienced any problems around DelegationTokens and could confirm 
seeing them being generated and used by jobs.

> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-05 Thread Plamen Jeliazkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14059 started by Plamen Jeliazkov.
---
> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-864:
--
Attachment: HDDS-864.003.patch

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710428#comment-16710428
 ] 

Íñigo Goiri commented on HDFS-13443:


[^HDFS-13443-014.patch] pretty much looks like it.

* {{RouterAdminServer}}: we could extract 
{{this.router.getSubclusterResolver()}} into a variable.
* {{RouterAdminServer#refreshMountTableEntries}}: the big comment shouldn't be a 
javadoc but a regular comment.
* The javadoc in {{MountTableRefresherService#routerClientsCache}} will 
probably give issues with javadoc using {{>}}.
* {{MountTableRefresherService#getClientCacheCleaner}} shouldn't have a javadoc 
but a regular comment inside. This would actually be a place to use a lambda 
for the runnable instead of a function.
* {{MountTableRefresherService#refresh}} could use {{else if}} instead of the 
continue all the time. You could also put the variable and the add inside the 
try and avoid having to do a continue in the catch.
* Not sure if {{MountTableRefresherService#invokeRefresh}} needs 
{{allReqCompleted}}, as inside logResult you go through all the results. That 
would also clean up the code a little by keeping the try isolated.
* {{MountTableRefresherThread#adminAddress}}: the comment should be a javadoc 
{{/** Admin server on which refresh is to be invoked. */}}.
* In {{Router}}, the log line for starting the service should start in capitals.
* Avoid the import change for VisibleForTesting in {{RouterHeartbeatService}}; 
avoid churn, basically.
* {{testRouterClientConnectionExpiration}}: after the waitFor, there is no need 
to assert, as the waitFor would already fail. We should use a lambda too (see 
the sketch after this list).
* You could make {{clientCacheTime}} an integer and save the int cast.
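
A minimal sketch of the waitFor-with-lambda suggestion (the condition and the 
cache handle are hypothetical; assuming Hadoop's GenericTestUtils.waitFor):
{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Poll every second, for up to 10 seconds. waitFor itself throws a
// TimeoutException and fails the test if the condition never becomes
// true, so no extra assert is needed afterwards.
GenericTestUtils.waitFor(
    () -> routerClientsCache.size() == 0, // hypothetical cache handle
    1000, 10000);
{code}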

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their cache*. For 
> example, if there are three routers R1,R2,R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions (sketched after this list):
>  ## user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.
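>
> A minimal, self-contained sketch of steps 3-5 (all names hypothetical, for 
> illustration only; the real patch would go through the router admin protocol):
> {code:java}
> import java.io.IOException;
> import java.util.List;
>
> interface MountTableRefresher {
>   void refreshMountTableEntries() throws IOException;
> }
>
> class RouterAdminSketch {
>   private final MountTableRefresher localCache;
>   private final List<MountTableRefresher> otherRouters;
>
>   RouterAdminSketch(MountTableRefresher localCache,
>       List<MountTableRefresher> otherRouters) {
>     this.localCache = localCache;
>     this.otherRouters = otherRouters;
>   }
>
>   /** Called after the new entry is committed to the state store. */
>   void refreshAll() throws IOException {
>     for (MountTableRefresher other : otherRouters) {
>       other.refreshMountTableEntries(); // steps 3 and 4: remote refresh
>     }
>     localCache.refreshMountTableEntries(); // step 5: refresh own cache
>   }
> }
> {code}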



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710384#comment-16710384
 ] 

Allen Wittenauer commented on HDDS-891:
---

-1

I'm going to stop you from shooting yourself in the foot here.  

bq. But in Ozone:

That list may be true for patches that only modify the hadoop-ozone or 
hadoop-hdds maven modules, but patches uploaded to the HDDS project sometimes 
hit more than just those modules (e.g., the last two changes to /pom.xml came 
from HDDS!). Plus the union between those two modules is the root of the tree, 
which means it really is building everything.

It's important to also remember that hadoop-ozone and hadoop-hdds are fair game 
to be modified by other JIRA projects as well.  

Consider some other things:

a) Re-arrange the source so that there aren't two children off of /, so that 
modifying both modules won't trigger a full build.
b) Modify the existing personality to slap on -Phdds when the changed files 
list includes hadoop-ozone and hadoop-hdds. 
c) It should probably also trigger a native library build. (There's a 
tremendous amount of inconsistency with the test runs presently.)
d) Modify the personality to skip shaded if the patch ONLY modifies 
hadoop-ozone/hadoop-hdds.

With the exception of the first, these are all pretty trivial to do and have a 
way higher chance of success.

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone precommit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain the personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed, as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710364#comment-16710364
 ] 

Hadoop QA commented on HDDS-870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 27s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
46s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950726/HDDS-870.007.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 0ceaec6a1d47 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1879/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1879/testReport/ |
| Max. process+thread count | 1061 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1879/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: 

[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710361#comment-16710361
 ] 

Elek, Marton commented on HDDS-864:
---

Thanks for the detailed review, [~bharatviswa].

bq. We have not converted getS3Table(); is this intentional, or is there any 
reason to leave it?

No exact reason. I started with converting only the key/volume/bucket tables in 
the first phase. After a while I realized that I can't do it without converting 
a few additional tables (e.g. the open key table). The S3 table was independent. 

As this patch is big enough, I propose to do it in a next (smaller) patch.

All the others will be part of the next patch. Right now I am fixing the 
remaining items; an updated patch will be uploaded soon...

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710301#comment-16710301
 ] 

Mukul Kumar Singh commented on HDDS-870:


Patch v7 fixes the review comments.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer in order to cache 
> data. This can be replaced with an array of buffers of the configured flush 
> buffer size, for handling 2 node failures as well.
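>
> A minimal sketch of the buffer-list idea (hypothetical class and names; 
> assuming the block size is a multiple of the flush buffer size):
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.List;
>
> // Instead of one block-sized buffer per key, allocate flush-sized
> // chunks on demand, so memory grows with the data actually written.
> class FlushSizedBufferList {
>   private final int flushBufferSize;
>   private final int maxBuffers; // blockSize / flushBufferSize
>   private final List<ByteBuffer> buffers = new ArrayList<>();
>
>   FlushSizedBufferList(long blockSize, int flushBufferSize) {
>     this.flushBufferSize = flushBufferSize;
>     this.maxBuffers = (int) (blockSize / flushBufferSize);
>   }
>
>   ByteBuffer getBuffer(int index) {
>     while (buffers.size() <= index && buffers.size() < maxBuffers) {
>       buffers.add(ByteBuffer.allocate(flushBufferSize));
>     }
>     return buffers.get(index);
>   }
> }
> {code}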



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-870:
---
Attachment: HDDS-870.007.patch

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer in order to cache 
> data. This can be replaced with an array of buffers of the configured flush 
> buffer size, for handling 2 node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-870:
---
Attachment: (was: HDDS-876.003.patch)

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer in order to cache 
> data. This can be replaced with an array of buffers of the configured flush 
> buffer size, for handling 2 node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-870:
---
Attachment: HDDS-876.003.patch

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer in order to cache 
> data. This can be replaced with an array of buffers of the configured flush 
> buffer size, for handling 2 node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710289#comment-16710289
 ] 

Hadoop QA commented on HDDS-870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} root: The patch generated 3 new + 3 unchanged - 
0 fixed = 6 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
22s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950719/HDDS-870.006.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 62016ae9316c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1878/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1878/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1878/testReport/ |
| Max. process+thread count | 1056 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1878/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  

[jira] [Updated] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-05 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-870:
---
Attachment: HDDS-870.006.patch

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer in order to cache 
> data. This can be replaced with an array of buffers of the configured flush 
> buffer size, for handling 2 node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14127) Add a description about the observer read configuration

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709995#comment-16709995
 ] 

Hadoop QA commented on HDFS-14127:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14127 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950671/HDFS-123.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 683dc4bccc74 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25719/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25719/testReport/ |
| Max. process+thread count | 4665 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25719/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add a description about the observer read configuration
> ---
>
> Key: HDFS-14127
> URL: https://issues.apache.org/jira/browse/HDFS-14127
> Project: Hadoop HDFS
> 

[jira] [Commented] (HDFS-14124) EC : Support Directory Level EC Command (set/get/unset EC policy) through REST API

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709936#comment-16709936
 ] 

Hadoop QA commented on HDFS-14124:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
237 unchanged - 0 fixed = 238 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950664/HDFS-14124-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6ba077769864 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 228156c |
| maven | version: Apache 

[jira] [Commented] (HDDS-896) Handle over replicated containers in SCM

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709919#comment-16709919
 ] 

Hadoop QA commented on HDDS-896:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManagerRestInterface |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-896 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950672/HDDS-896.000.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux faca78d85f31 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1877/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1877/testReport/ |
| Max. process+thread count | 1085 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1877/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.

[jira] [Updated] (HDDS-896) Handle over replicated containers in SCM

2018-12-05 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-896:
-
Attachment: HDDS-896.000.patch

> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.
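
A minimal sketch of the selection rule described above, with all type and 
method names invented for illustration (the real SCM replica APIs are not 
quoted in this thread):

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

/** Illustrative only; not the actual SCM API. */
class OverReplicationSketch {

  enum ContainerState { OPEN, QUASI_CLOSED, CLOSED }

  interface Replica {
    UUID getOriginNodeId();
  }

  /**
   * Picks the excess replicas to delete. For QUASI_CLOSED containers, only
   * replicas whose originNodeId is already represented by a kept replica are
   * candidates, so every distinct origin survives the deletion.
   */
  static List<Replica> chooseReplicasToDelete(ContainerState state,
      List<Replica> replicas, int requiredCount) {
    int excess = replicas.size() - requiredCount;
    List<Replica> toDelete = new ArrayList<>();
    if (excess <= 0) {
      return toDelete; // not over-replicated, nothing to do
    }
    if (state == ContainerState.QUASI_CLOSED) {
      Set<UUID> seenOrigins = new HashSet<>();
      for (Replica replica : replicas) {
        // add() returns false when this origin was already seen; such a
        // replica duplicates an origin we keep, so it is safe to drop.
        if (!seenOrigins.add(replica.getOriginNodeId())
            && toDelete.size() < excess) {
          toDelete.add(replica);
        }
      }
      // If there are fewer duplicate-origin replicas than the excess, this
      // sketch deliberately deletes fewer, preserving all distinct origins.
    } else {
      // For other states any excess replica can be removed.
      toDelete.addAll(replicas.subList(0, excess));
    }
    return toDelete;
  }
}
{code}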



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709857#comment-16709857
 ] 

Hadoop QA commented on HDFS-13443:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
49s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950652/HDFS-13443-014.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux 41796a37a915 

[jira] [Assigned] (HDDS-893) pipeline status is ALLOCATED in scmcli listPipelines command

2018-12-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-893:


Assignee: Lokesh Jain

> pipeline status is ALLOCATED in scmcli listPipelines command
> 
>
> Key: HDDS-893
> URL: https://issues.apache.org/jira/browse/HDDS-893
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: all-node-ozone-logs-1543837271.tar.gz
>
>
> Pipeline status should not be ALLOCATED; it should be either OPEN or CLOSING.
> {noformat}
> [root@ctr-e139-1542663976389-11261-01-05 test_files]# ozone scmcli 
> listPipelines
> Pipeline[ Id: 202f7208-6977-4f65-b070-c1e7e57cb2ed, Nodes: 
> 06e074f7-67b4-4dde-8f20-a437ca60b7a1{ip: 172.27.20.97, host: 
> ctr-e139-1542663976389-11261-01-07.hwx.site}c5bf9a9f-d471-4cef-aae4-61cb387ea9e3{ip:
>  172.27.79.145, host: 
> ctr-e139-1542663976389-11261-01-06.hwx.site}96c18fe3-5520-4941-844b-ff7186a146a6{ip:
>  172.27.55.132, host: ctr-e139-1542663976389-11261-01-03.hwx.site}, 
> Type:RATIS, Factor:THREE, State:ALLOCATED]{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14127) Add a description about the observer read configuration

2018-12-05 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng updated HDFS-14127:
-
Affects Version/s: 3.3.0
   Status: Patch Available  (was: Open)

> Add a description about the observer read configuration
> ---
>
> Key: HDFS-14127
> URL: https://issues.apache.org/jira/browse/HDFS-14127
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: xiangheng
>Priority: Minor
> Attachments: HDFS-123.000.patch
>
>
> hdfs-default.xml lacks a description of the observer read configuration, 
> which can easily lead users to configure observer read mode as if it were a 
> normal HA setup.
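
For context, the setting the description refers to is the client failover 
proxy provider. A minimal sketch, assuming a placeholder nameservice name 
"mycluster", of how an observer-read client differs from a plain HA client:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ObserverReadClientConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Plain HA: reads and writes both go to the active NameNode.
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    // Observer reads: same key, different provider, so reads may be served
    // by an Observer NameNode. Without a description in hdfs-default.xml the
    // distinction is easy to miss, which is the gap this issue points out.
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
  }
}
{code}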



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14127) Add a description about the observer read configuration

2018-12-05 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng updated HDFS-14127:
-
Attachment: HDFS-123.000.patch

> Add a description about the observer read configuration
> ---
>
> Key: HDFS-14127
> URL: https://issues.apache.org/jira/browse/HDFS-14127
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xiangheng
>Priority: Minor
> Attachments: HDFS-123.000.patch
>
>
> hdfs-default.xml lacks a description of the observer read configuration, 
> which can easily lead users to configure observer read mode as if it were a 
> normal HA setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-896) Handle over replicated containers in SCM

2018-12-05 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-896:
-
Status: Patch Available  (was: Open)

> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14127) Add a description about the observer read configuration

2018-12-05 Thread xiangheng (JIRA)
xiangheng created HDFS-14127:


 Summary: Add a description about the observer read configuration
 Key: HDFS-14127
 URL: https://issues.apache.org/jira/browse/HDFS-14127
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: xiangheng


hdfs-default.xml lacks a description of the observer read configuration, which 
can easily lead users to configure observer read mode as if it were a normal 
HA setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-876) add blockade tests for flaky network

2018-12-05 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-876:
--

Assignee: Nilotpal Nandi  (was: Mukul Kumar Singh)

> add blockade tests for flaky network
> 
>
> Key: HDDS-876
> URL: https://issues.apache.org/jira/browse/HDDS-876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-876.001.patch, HDDS-876.002.patch, 
> HDDS-876.003.patch
>
>
> Blockade is a container utility to simulate network and node failures and 
> network partitions. https://blockade.readthedocs.io/en/latest/guide.html.
> This jira proposes to add a simple test to test freon with a flaky network.
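
For readers unfamiliar with the tool, a typical blockade session looks roughly 
like the following (container names are placeholders; see the guide linked 
above for the full command set):

{noformat}
# start the containers defined in blockade.yaml
blockade up
# inject packet loss/latency on one datanode container
blockade flaky datanode1
# ... run freon against the cluster while the network is flaky ...
# restore normal networking and tear everything down
blockade fast datanode1
blockade destroy
{noformat}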



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14124) EC : Support Directory Level EC Command (set/get/unset EC policy) through REST API

2018-12-05 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14124:

Attachment: HDFS-14124-02.patch

> EC : Support Directory Level EC Command (set/get/unset EC policy) through 
> REST API
> --
>
> Key: HDFS-14124
> URL: https://issues.apache.org/jira/browse/HDFS-14124
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, httpfs, webhdfs
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14124-01.patch, HDFS-14124-02.patch
>
>
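
The description is empty, so as a rough illustration only: directory-level EC 
policy operations over WebHDFS would be expected to look something like the 
calls below. The op names and HTTP methods are assumptions based on the 
summary, not confirmed by this thread; <NN> is a NameNode host placeholder.

{noformat}
curl -X PUT  "http://<NN>:9870/webhdfs/v1/dir?op=SETECPOLICY&ecpolicy=RS-6-3-1024k"
curl -X GET  "http://<NN>:9870/webhdfs/v1/dir?op=GETECPOLICY"
curl -X POST "http://<NN>:9870/webhdfs/v1/dir?op=UNSETECPOLICY"
{noformat}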




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org