[jira] [Commented] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-19 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585458#comment-16585458
 ] 

Vinayakumar B commented on HDFS-13772:
--

bq. For practicality, I think we can consider this jira a bug fix - which makes 
things more accurate. As far as audits are concerned, because previously we 
always logged an event with success==true, IMO if we fix it the same way as 
renameTo, so that only succeeded ops log an event (with success==true), we 
should be fine. Logging success==false would be a new introduction and should 
be avoided while we still can.
Right. I think we can handle the {{enable/disableErasureCodingPolicy()}} RPCs in 
this jira. The remaining ones (especially admin commands) can be discussed in a 
separate jira.

bq. But compat is compat. So if you disagree with the above approach and think 
we should be 100% consistent, I'm ok with always logging the audit with true 
regardless of the actual return value.
Here is the current procedure for audit logging (based on a review of the user 
RPCs in {{FSNamesystem}}):
1. If an {{AccessControlException}} is thrown, log the audit with allowed=false.
2. If no {{AccessControlException}} is thrown and the operation succeeds, log 
the audit with allowed=true. If the operation does NOT succeed, skip the audit 
log.

The code pattern will be as below:
{code}
boolean success = false;
try {
  // ... perform the operation; any failure throws
  success = true;
} catch (AccessControlException ace) {
  logAudit(false);
  throw ace; // rethrow so the client still sees the denial
} finally {
  if (success) {
    logAudit(true);
  }
}
{code}
Most of the RPCs where {{logAudit(true)}} sits outside the {{finally}} block 
will throw an exception if the operation fails, so the audit will not get 
logged in case of failure.

For this jira, you are right: we can catch {{AccessControlException}} and 
{{logAudit(false)}}, and in {{finally}}, based on {{success}}, {{logAudit(true)}}.
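
For illustration, a minimal self-contained sketch of that pattern 
({{enablePolicy}} and {{logAudit}} below are placeholders for this example, not 
the actual {{FSNamesystem}} methods):
{code:java}
import java.io.IOException;

import org.apache.hadoop.security.AccessControlException;

public class AuditPatternDemo {

  // Stand-in for the real audit logger.
  static void logAudit(boolean allowed) {
    System.out.println("audit: allowed=" + allowed);
  }

  static void enablePolicy(String policy, boolean denied) throws IOException {
    boolean success = false;
    try {
      if (denied) {
        throw new AccessControlException("user is not an administrator");
      }
      // ... perform the actual operation; any failure throws ...
      success = true;
    } catch (AccessControlException ace) {
      logAudit(false); // denied ops are audited with allowed=false
      throw ace;       // the client still sees the failure
    } finally {
      if (success) {
        logAudit(true); // only succeeded ops are audited with allowed=true
      }
    }
  }

  public static void main(String[] args) throws IOException {
    enablePolicy("RS-LEGACY-6-3-1024k", false); // prints "audit: allowed=true"
  }
}
{code}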

[~ayushtkn], Please update the patch as mentioned.

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch, HDFS-13772-04.patch, 
> HDFS-13772-05.patch, HDFS-13772-06.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times: "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  Instead of throwing an error message such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, in the NameNode log, "policy enabled" messages appear multiple times 
> unnecessarily even though the policy is already enabled, like this:
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> logs also appears multiple times even though the policy is already 
>  disabled. It should throw an error message such as "policy is already 
> disabled" for an already disabled policy.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585444#comment-16585444
 ] 

genericqa commented on HDFS-13821:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
4s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936215/HDFS-13821.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7dd4d7aef87e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4aacbff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24810/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24810/testReport/ |
| Max. process+thread count | 953 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Updated] (HDDS-356) Support ColumnFamily based RocksDBStore and TableStore

2018-08-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-356:
--
Attachment: HDDS-356.001.patch

> Support ColumnFamily based RocksDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-356.001.patch
>
>
> This is to minimize the performance impact of the expensive RocksDB table 
> scans from the background services that were disabled by HDDS-355.






[jira] [Updated] (HDDS-356) Support ColumnFamily based RocksDBStore and TableStore

2018-08-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-356:
--
Status: Patch Available  (was: Open)

> Support ColumnFamily based RocksDBStore and TableStore
> -
>
> Key: HDDS-356
> URL: https://issues.apache.org/jira/browse/HDDS-356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-356.001.patch
>
>
> This is to minimize the performance impact of the expensive RocksDB table 
> scans from the background services that were disabled by HDDS-355.






[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13772:

Attachment: HDFS-13772-06.patch

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch, HDFS-13772-04.patch, 
> HDFS-13772-05.patch, HDFS-13772-06.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times: "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  Instead of throwing an error message such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, in the NameNode log, "policy enabled" messages appear multiple times 
> unnecessarily even though the policy is already enabled, like this:
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> logs also appears multiple times even though the policy is already 
>  disabled. It should throw an error message such as "policy is already 
> disabled" for an already disabled policy.






[jira] [Commented] (HDFS-13817) RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't work properly

2018-08-19 Thread Harshakiran Reddy (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585443#comment-16585443
 ] 

Harshakiran Reddy commented on HDFS-13817:
--

[~linyiqun], thanks for watching this issue. I haven't seen any other error 
log at that time.

> RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't 
> work properly 
> -
>
> Key: HDFS-13817
> URL: https://issues.apache.org/jira/browse/HDFS-13817
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {{Scenario:-}} 
> # Create a mount point with RANDOM policy and with 2 Nameservices.
> # List the target mount path of the Global path.
> Actual Output: 
> === 
> {{ls: `/apps5': No such file or directory}}
> Expected Output: 
> =
> {{if the files are available, list those files; or if it's empty, it will 
> display nothing}}
> {noformat} 
> bin> ./hdfs dfsrouteradmin -add /apps5 hacluster,ns2 /tmp10 -order RANDOM 
> -owner securedn -group hadoop
> Successfully added mount point /apps5
> bin> ./hdfs dfs -ls /apps5
> ls: `/apps5': No such file or directory
> bin> ./hdfs dfs -ls /apps3
> Found 2 items
> drwxrwxrwx   - user group 0 2018-08-09 19:55 /apps3/apps1
> -rw-r--r--   3   - user group  4 2018-08-10 11:55 /apps3/ttt
>  {noformat}
> {{please refer to the below image for mount information}}
> {{/apps3 tagged with HASH policy}}
> {{/apps5 tagged with RANDOM policy}}
> {noformat}
> /bin> ./hdfs dfsrouteradmin -ls
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage
> /apps3hacluster->/tmp3,ns2->/tmp4 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /apps5hacluster->/tmp5,ns2->/tmp5 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {noformat}






[jira] [Commented] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-19 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585430#comment-16585430
 ] 

Xiao Chen commented on HDFS-13772:
--

Good question, and I agree the broader audit consistency topic should be 
handled in a separate jira.

For practicality, I think we can consider this jira a bug fix - which makes 
things more accurate. As far as audits are concerned, because previously we 
always logged an event with success==true, IMO if we fix it the same way as 
{{renameTo}}, so that only succeeded ops log an event (with success==true), we 
should be fine. Logging success==false would be a new introduction and should 
be avoided while we still can.

But compat is compat. So if you disagree with the above approach and think we 
should be 100% consistent, I'm ok with always logging the audit with {{true}} 
regardless of the actual return value.

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch, HDFS-13772-04.patch, 
> HDFS-13772-05.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times: "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  Instead of throwing an error message such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, in the NameNode log, "policy enabled" messages appear multiple times 
> unnecessarily even though the policy is already enabled, like this:
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> logs also appears multiple times even though the policy is already 
>  disabled. It should throw an error message such as "policy is already 
> disabled" for an already disabled policy.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585401#comment-16585401
 ] 

Fei Hui commented on HDFS-13821:


Uploaded v005 to fix checkstyle.

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, HDFS-13821.005.patch, 
> LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> I found that ProxyAvgTime from Ganglia was very high, so I ran jstack on the 
> Router and got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime went down, as shown below:
>  !image-2018-08-13-11-27-49-023.png! 
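
As an illustration of the proposal, a minimal sketch of such a switch (the key 
name comes from this jira's title; the class and method names below are 
hypothetical, not the actual {{MountTableResolver}} code):
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class MountTableLookupSketch {
  /** Key proposed by this jira. */
  public static final String CACHE_ENABLE_KEY =
      "dfs.federation.router.mount-table.cache.enable";

  private final boolean cacheEnabled;
  private final Cache<String, String> locationCache =
      CacheBuilder.newBuilder().maximumSize(10000).build();

  public MountTableLookupSketch(boolean cacheEnabled) {
    this.cacheEnabled = cacheEnabled;
  }

  public String getDestinationForPath(final String path)
      throws ExecutionException {
    if (!cacheEnabled) {
      // Bypass Guava's LocalCache entirely, avoiding the segment-lock
      // contention visible in the stack trace above.
      return resolve(path);
    }
    return locationCache.get(path, new Callable<String>() {
      @Override
      public String call() {
        return resolve(path);
      }
    });
  }

  private String resolve(String path) {
    // Placeholder for the real mount-table resolution.
    return path;
  }
}
{code}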






[jira] [Updated] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13821:
---
Attachment: HDFS-13821.005.patch

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, HDFS-13821.005.patch, 
> LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> I found that ProxyAvgTime from Ganglia was very high, so I ran jstack on the 
> Router and got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime went down, as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Commented] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-19 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585396#comment-16585396
 ] 

Vinayakumar B commented on HDFS-13772:
--

bq. For consistency among NN RPCs, the success==false in audits should mean 
AccessControlException. So I suggest we move the logAuditEvent line into the if 
(success) block, to be consistent with other calls.
That's a good catch, [~xiaochen]. I am seeing similar code in many other RPC 
calls (logging the op result as the 'allowed' flag for the audit). I think a 
jira is required to bring everything to consistency. Do you think this will be 
an incompatible change?

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch, HDFS-13772-04.patch, 
> HDFS-13772-05.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times: "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  Instead of throwing an error message such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, in the NameNode log, "policy enabled" messages appear multiple times 
> unnecessarily even though the policy is already enabled, like this:
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> logs also appears multiple times even though the policy is already 
>  disabled. It should throw an error message such as "policy is already 
> disabled" for an already disabled policy.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585386#comment-16585386
 ] 

genericqa commented on HDFS-13821:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
26s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936210/HDFS-13821.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux abf33119fcec 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4aacbff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24807/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24807/testReport/ |
| Max. process+thread count | 966 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Updated] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13821:
---
Attachment: HDFS-13821.004.patch

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, LocalCacheTest.java, 
> image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> I found that ProxyAvgTime from Ganglia was very high, so I ran jstack on the 
> Router and got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime went down, as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-19 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585348#comment-16585348
 ] 

Fei Hui commented on HDFS-13821:


[~elgoiri] Thanks. Changed the default value and added a unit test according to 
your suggestions.
Uploaded the v004 patch.

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> I found that ProxyAvgTime from Ganglia was very high, so I ran jstack on the 
> Router and got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime went down, as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Commented] (HDFS-13805) Journal Nodes should allow to format non-empty directories with "-force" option

2018-08-19 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585341#comment-16585341
 ] 

Surendra Singh Lilhore commented on HDFS-13805:
---

The failed test is not related to this jira.

> Journal Nodes should allow to format non-empty directories with "-force" 
> option
> ---
>
> Key: HDFS-13805
> URL: https://issues.apache.org/jira/browse/HDFS-13805
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha4
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13805.001.patch, HDFS-13805.002.patch
>
>
> HDFS-2 completely restricted re-formatting the journalnode, but it should be 
> allowed when the *"-force"* option is given. If users feel the force option 
> can accidentally delete data, they can disable it by configuring 
> "*dfs.reformat.disabled*".






[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-08-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585340#comment-16585340
 ] 

genericqa commented on HDFS-13834:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
44s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936202/HDFS-13834.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c8b72338c99 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4aacbff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24806/testReport/ |
| Max. process+thread count | 967 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24806/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Connection creator thread should 

[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-08-19 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585316#comment-16585316
 ] 

CR Hota commented on HDFS-13834:


Uploaded a new patch with the checkstyle issue corrected.

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834.0.patch, HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for 
> creating all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems where the thread died, leaving the 
> router process in a bad state.
> The thread should also catch a generic error/exception.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}
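
A minimal sketch of the fix being discussed, i.e. also catching {{Throwable}} 
so the loop survives (this fragment reuses the fields of the class quoted 
above; the actual patch may differ):
{code:java}
@Override
public void run() {
  while (this.running) {
    try {
      ConnectionPool pool = this.queue.take();
      try {
        // Same pool-growing logic as in the snippet above.
        ConnectionContext conn = pool.newConnection();
        pool.addConnection(conn);
      } catch (IOException e) {
        LOG.error("Cannot create a new connection", e);
      }
    } catch (InterruptedException e) {
      LOG.error("The connection creator was interrupted");
      this.running = false;
    } catch (Throwable t) {
      // Keep this critical singleton thread alive on any unexpected
      // error instead of letting it die silently.
      LOG.error("Unexpected error in the connection creator", t);
    }
  }
}
{code}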






[jira] [Updated] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-08-19 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13834:
---
Attachment: HDFS-13834.1.patch

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834.0.patch, HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for 
> creating all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems where the thread died, leaving the 
> router process in a bad state.
> The thread should also catch a generic error/exception.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}






[jira] [Commented] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-19 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585247#comment-16585247
 ] 

CR Hota commented on HDFS-13634:


[~elgoiri] Thanks for reviewing this.

I too believe there is no unit test needed for this.

The class field is needed to help log the error scenario when more connection 
requests are getting into the queue than its capacity, via the line below:

LOG.error("Cannot add more than {} connections at the same time",
 MAX_NEW_CONNECTIONS);

 

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13634.0.patch, HDFS-13634.1.patch
>
>
> The constant below in ConnectionManager.java should be configurable via 
> hdfs-site.xml. This is a very critical parameter for routers; admins would 
> like to change it without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}
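
A minimal sketch of the change being asked for (the key name and default below 
are illustrative assumptions, not the committed names):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectionManagerConfigSketch {
  /** Hypothetical key; a final name would live in the RBF config keys. */
  public static final String MAX_NEW_CONNECTIONS_KEY =
      "dfs.federation.router.connection.creator.queue-size";
  public static final int MAX_NEW_CONNECTIONS_DEFAULT = 100;

  private final int maxNewConnections;

  public ConnectionManagerConfigSketch(Configuration conf) {
    // Read from hdfs-site.xml so admins can tune it without a rebuild.
    this.maxNewConnections =
        conf.getInt(MAX_NEW_CONNECTIONS_KEY, MAX_NEW_CONNECTIONS_DEFAULT);
  }

  public int getMaxNewConnections() {
    return maxNewConnections;
  }
}
{code}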






[jira] [Commented] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585132#comment-16585132
 ] 

genericqa commented on HDDS-353:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.freon.TestFreon |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestReplicateContainerHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-353 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585105#comment-16585105
 ] 

Lokesh Jain commented on HDDS-353:
--

v2 patch fixes the test failure. I have also changed 
HddsServerUtil#getScmHeartbeatInterval to return the time duration in 
milliseconds. This is required because some of the block deletion tests set 
the heartbeat interval in milliseconds. This change brought in a few related 
changes as well.
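
A sketch of the kind of change described (the ScmConfigKeys constant names are 
assumptions for illustration; Configuration#getTimeDuration is a standard 
Hadoop accessor):
{code:java}
// Sketch: return the heartbeat interval in milliseconds instead of seconds,
// so tests that configure millisecond-level intervals are honored.
public static long getScmHeartbeatInterval(Configuration conf) {
  return conf.getTimeDuration(
      ScmConfigKeys.OZONE_SCM_HEARTBEAT_INTERVAL,         // assumed key constant
      ScmConfigKeys.OZONE_SCM_HEARTBEAT_INTERVAL_DEFAULT, // assumed default
      TimeUnit.MILLISECONDS);
}
{code}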

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch, HDDS-353.002.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> the following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-353:
-
Attachment: HDDS-353.002.patch

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch, HDDS-353.002.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> the following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-08-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585050#comment-16585050
 ] 

Ranith Sardar commented on HDFS-13815:
--

[~linyiqun], thank you for your suggestions. I was thinking of raising a new 
jira for -add, since the same problem exists there as well.

Okay, per your comments, I will update it in the next patch.

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-13815-001.patch
>
>
> No check is done on the order option.
> The command reports that the mount table was successfully updated even when 
> the order option is missing or misspelled, and the mount table is not 
> actually updated.
> Execute the dfsrouteradmin update command with the scenarios below:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>
> The console message says "Successfully updated mount point", but the mount 
> table is not updated.
>
> Expected Result:
> An exception on the console, since the order option is missing/not written 
> properly.
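> 
> A hypothetical sketch of the expected validation in RouterAdmin (the names 
> and structure are illustrative assumptions, not the actual fix): reject an 
> unknown or misspelled order option instead of reporting success.
> {code:java}
> // Illustration only: validate the trailing option before updating the
> // mount table, so typos such as "-ord" fail loudly.
> if (i < argv.length) {
>   if ("-order".equals(argv[i])) {
>     order = DestinationOrder.valueOf(argv[++i]);
>   } else {
>     throw new IllegalArgumentException(
>         "Unknown option for -update: " + argv[i]);
>   }
> }
> {code}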



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585038#comment-16585038
 ] 

Lokesh Jain edited comment on HDDS-325 at 8/19/18 6:42 AM:
---

[~elek] Thanks for reviewing the patch! I have uploaded the v4 patch based on 
our discussion. The v4 patch can be applied after applying the patch in 
HDDS-353. The event-related changes can be found in the classes 
RetriableDatanodeEventWatcher, SCMEvents and StorageContainerManager.

I had to make changes in EventWatcher as well. The changes add the capability 
to watch multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events which need to be retried. If we created multiple event watchers 
for the same start event and different completion events, we would be adding 
multiple handlers (via multiple event watchers) for the same event type, which 
would lead to multiple handlers retrying the start event.
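
A schematic sketch of the idea (hypothetical types; this is not the actual 
Ozone EventWatcher API): a single watcher tracks one start event type but 
accepts several completion event types, so only one handler ever retries the 
start event.
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical illustration; names do not match Ozone's classes.
class MultiCompletionWatcher<P> {
  private final Set<String> completionEvents;
  private final Map<Long, P> pending = new ConcurrentHashMap<>();

  MultiCompletionWatcher(Set<String> completionEvents) {
    this.completionEvents = completionEvents;
  }

  // Track a start event (e.g. RETRIABLE_DATANODE_COMMAND) until completion.
  void onStartEvent(long id, P payload) {
    pending.put(id, payload);
  }

  // Any one of the registered completion events resolves the wait.
  void onCompletionEvent(String eventType, long id) {
    if (completionEvents.contains(eventType)) {
      pending.remove(id);
    }
  }

  // One retry handler for all still-pending start events, so no duplicate
  // handlers are registered for the same event type.
  void retryPending(Consumer<P> retrier) {
    pending.values().forEach(retrier);
  }
}
{code}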


was (Author: ljain):
[~elek] Thanks for reviewing the patch! I have uploaded the v4 patch based on 
our discussion. The event-related changes can be found in the classes 
RetriableDatanodeEventWatcher, SCMEvents and StorageContainerManager.

I had to make changes in EventWatcher as well. The changes add the capability 
to watch multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events which need to be retried. If we created multiple event watchers 
for the same start event and different completion events, we would be adding 
multiple handlers (via multiple event watchers) for the same event type, which 
would lead to multiple handlers retrying the start event.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585038#comment-16585038
 ] 

Lokesh Jain commented on HDDS-325:
--

[~elek] Thanks for reviewing the patch! I have uploaded the v4 patch based on 
our discussion. The event-related changes can be found in the classes 
RetriableDatanodeEventWatcher, SCMEvents and StorageContainerManager.

I had to make changes in EventWatcher as well. The changes add the capability 
to watch multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events which need to be retried. If we created multiple event watchers 
for the same start event and different completion events, we would be adding 
multiple handlers (via multiple event watchers) for the same event type, which 
would lead to multiple handlers retrying the start event.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.004.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Status: Open  (was: Patch Available)

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org