[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2017-12-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
Attachment: HDFS-12934.001.patch

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12934.001.patch, RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder 
> separately via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. No subcluster is allowed to 
> exceed the maximum quota value.
> # Construct a cache map that stores the summed quota usage of these 
> subclusters under each federation folder. Every time we do a WRITE operation 
> under a specified folder, we get its quota usage from the cache and verify 
> the quota. If the quota is exceeded, an exception is thrown; otherwise the 
> quota usage in the cache is updated when the operation finishes.
> The quota will be stored as a new field in the mount table. The set/unset 
> commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}
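
The verify-then-update flow described above can be sketched as follows; this is 
an editor's illustration under assumed names, not code from the attached patch.
{code}
// Editor's sketch of the proposed global-quota check on WRITE operations.
// All class, field, and method names here are hypothetical.
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class GlobalQuotaSketch {
  private final Map<String, Long> usedBytes = new ConcurrentHashMap<>();
  private final Map<String, Long> quotaBytes = new ConcurrentHashMap<>();

  /** Called before a WRITE under the given mount-table path. */
  void preWrite(String mountPath, long newBytes) throws IOException {
    long quota = quotaBytes.getOrDefault(mountPath, Long.MAX_VALUE);
    long used = usedBytes.getOrDefault(mountPath, 0L);
    if (used + newBytes > quota) {
      // Reject the write when the cached global usage would exceed the quota.
      throw new IOException("Space quota exceeded for " + mountPath);
    }
  }

  /** Called after the WRITE finishes to keep the cached usage up to date. */
  void postWrite(String mountPath, long writtenBytes) {
    usedBytes.merge(mountPath, writtenBytes, Long::sum);
  }
}
{code}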



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2017-12-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
Attachment: (was: HDFS-12934.001.patch)

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder 
> separately via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. No subcluster is allowed to 
> exceed the maximum quota value.
> # Construct a cache map that stores the summed quota usage of these 
> subclusters under each federation folder. Every time we do a WRITE operation 
> under a specified folder, we get its quota usage from the cache and verify 
> the quota. If the quota is exceeded, an exception is thrown; otherwise the 
> quota usage in the cache is updated when the operation finishes.
> The quota will be stored as a new field in the mount table. The set/unset 
> commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2017-12-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
Attachment: HDFS-12934.001.patch

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12934.001.patch, RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder 
> separately via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. No subcluster is allowed to 
> exceed the maximum quota value.
> # Construct a cache map that stores the summed quota usage of these 
> subclusters under each federation folder. Every time we do a WRITE operation 
> under a specified folder, we get its quota usage from the cache and verify 
> the quota. If the quota is exceeded, an exception is thrown; otherwise the 
> quota usage in the cache is updated when the operation finishes.
> The quota will be stored as a new field in the mount table. The set/unset 
> commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2017-12-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303123#comment-16303123
 ] 

Yiqun Lin commented on HDFS-12934:
--

[~elgoiri], the behavior in {{QuotaUpdateCacheService}} will be a little 
different from {{NamenodeHeartbeatService}}. It will additionally do some mount 
table path parsing and remove stale paths from the cache. It doesn't seem easy 
to fold this logic into the same service, so I'd like to use a new service for 
this work in the first phase.

I have implemented the main work and attached the first patch; it is a little 
big. I plan to file other JIRAs for the documentation and for showing quotas in 
the web UI.
Unit tests for the {{setQuota}} and {{getQuotaUsage}} APIs will be added in the 
next patch, since these two APIs have completely different implementations in 
RBF.
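
To make the cache-update idea concrete, here is a minimal, editor-added sketch 
of what such a periodic refresh could do; all names below are illustrative 
assumptions, not the contents of HDFS-12934.001.patch.
{code}
// Hypothetical sketch of a periodic quota-usage refresh; not the actual
// QuotaUpdateCacheService from the patch.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class QuotaUpdateSketch {
  /** Cached global usage per mount-table path. */
  private final Map<String, Long> usageCache = new ConcurrentHashMap<>();

  /** One refresh cycle: aggregate usage per mount point and drop stale paths. */
  void refresh(Set<String> mountPaths) {
    for (String path : mountPaths) {
      long sum = 0;
      for (String ns : subclustersOf(path)) {   // placeholder mount-table lookup
        sum += usageInSubcluster(ns, path);     // placeholder per-subcluster RPC
      }
      usageCache.put(path, sum);
    }
    // Remove cache entries whose mount-table path no longer exists.
    usageCache.keySet().retainAll(mountPaths);
  }

  private Set<String> subclustersOf(String path) { return Set.of(); }
  private long usageInSubcluster(String ns, String path) { return 0L; }
}
{code}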

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: RBF support  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each folder. 
> The quota is applied to each subcluster under the specified folder 
> separately via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. No subcluster is allowed to 
> exceed the maximum quota value.
> # Construct a cache map that stores the summed quota usage of these 
> subclusters under each federation folder. Every time we do a WRITE operation 
> under a specified folder, we get its quota usage from the cache and verify 
> the quota. If the quota is exceeded, an exception is thrown; otherwise the 
> quota usage in the cache is updated when the operation finishes.
> The quota will be stored as a new field in the mount table. The set/unset 
> commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns <nsQuota> -ss <ssQuota> <path>
>  hdfs dfsrouteradmin -clrQuota <path>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12960) The audit log recorded the wrong result when the delete API return false

2017-12-24 Thread hu xiaodong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302994#comment-16302994
 ] 

hu xiaodong edited comment on HDFS-12960 at 12/25/17 2:29 AM:
--

Hello [~jojochuang], thank you for your reply.
But why record false only when there is an AccessControlException, and record 
nothing for other exceptions?


was (Author: xiaodong.hu):
Hello, [~jojochuang] thank you for your reply.
But  why record false only when there is an AccessControlException , true 
when other exception?

> The audit log recorded the wrong result when the delete API return false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API return false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12960) The audit log recorded the wrong result when the delete API return false

2017-12-24 Thread hu xiaodong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302994#comment-16302994
 ] 

hu xiaodong edited comment on HDFS-12960 at 12/25/17 2:15 AM:
--

Hello, [~jojochuang] thank you for your reply.
But  why record false only when there is an AccessControlException , true 
when other exception?


was (Author: xiaodong.hu):
Hollo, [~jojochuang] thank you for your reply.
But  why record false only when there is an AccessControlException , true 
when other exception?

> The audit log recorded the wrong result when the delete API return false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API return false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12960) The audit log recorded the wrong result when the delete API return false

2017-12-24 Thread hu xiaodong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302994#comment-16302994
 ] 

hu xiaodong commented on HDFS-12960:


Hollo, [~jojochuang] thank you for your reply.
But  why record false only when there is an AccessControlException , true 
when other exception?

> The audit log recorded the wrong result when the delete API return false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API return false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9023) When NN is not able to identify DN for replication, reason behind it can be logged

2017-12-24 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302920#comment-16302920
 ] 

Surendra Singh Lilhore commented on HDFS-9023:
--

Thanks [~xiaochen] for the patch. The v2 patch looks almost good to me.
Some minor comments:
# No need to put the {{if (LOG.isDebugEnabled() && builder != null)}} check in 
the else part; the {{LOG.isDebugEnabled()}} check is already done in the first 
{{if}}. So it should be like this.
{code}
  if (LOG.isDebugEnabled() && builder != null) {
detail = builder.toString();
if (badTarget) {
  builder.setLength(0);
} else {
  if (detail.length() > 1) {
// only log if there's more than "[", which is always appended at
// the beginning of this method.
LOG.debug(builder.toString());
  }
  detail = "";
}
  }
{code}
# Just give the HashMap its generic parameters here.
{code}+  HashMap reasonMap = CHOOSE_RANDOM_REASONS.get();{code}
# The log message should be at {{warn}} level:
{code}+LOG.info("Not enough replicas was chosen. Reason:{}", 
reasonMap);{code}
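
Taken together, comments 2 and 3 amount to something like the following 
self-contained illustration; the map's type arguments and the ThreadLocal shape 
are assumptions made for the example, not taken from the patch.
{code}
// Minimal illustration of review comments 2 and 3; not the actual
// BlockPlacementPolicy code, and the generic types are placeholders.
import java.util.HashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ReviewSuggestionDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReviewSuggestionDemo.class);

  // Comment 2: declare the map with generic parameters instead of the raw type.
  private static final ThreadLocal<HashMap<String, Integer>> CHOOSE_RANDOM_REASONS =
      ThreadLocal.withInitial(HashMap::new);

  void logNotEnoughReplicas() {
    HashMap<String, Integer> reasonMap = CHOOSE_RANDOM_REASONS.get();
    // Comment 3: log at warn level rather than info.
    LOG.warn("Not enough replicas was chosen. Reason: {}", reasonMap);
  }
}
{code}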

> When NN is not able to identify DN for replication, reason behind it can be 
> logged
> --
>
> Key: HDFS-9023
> URL: https://issues.apache.org/jira/browse/HDFS-9023
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-9023.01.patch, HDFS-9023.02.patch
>
>
> When the NN is not able to identify a DN for replication, the reason behind 
> it can be logged (at least critical information about why DNs were not 
> chosen, e.g. the disk is full). At present, one is expected to enable the 
> debug log.
> For example, the reason for the error below appears to be that all 7 DNs are 
> busy with data writes, but no hint is given in the log message on the client 
> or NN side.
> {noformat}
> File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp 
> could only be replicated to 0 nodes instead of minReplication (=1).  There 
> are 7 datanode(s) running and no node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553)
>  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12870) Ozone: Service Discovery: REST endpoint in KSM for getServiceList

2017-12-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302913#comment-16302913
 ] 

genericqa commented on HDFS-12870:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
14s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 23 new 
+ 1 unchanged - 0 fixed = 24 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}161m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.container.TestContainerStateManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903564/HDFS-12870-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Updated] (HDFS-5750) JHLogAnalyzer#parseLogFile() should close stm upon return

2017-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-5750:
-
Description: 
stm is assigned to in.

But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.

  was:
stm is assigned to in
But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.


> JHLogAnalyzer#parseLogFile() should close stm upon return
> -
>
> Key: HDFS-5750
> URL: https://issues.apache.org/jira/browse/HDFS-5750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> stm is assigned to in.
> But stm may point to another InputStream :
> {code}
> if(compressionClass != null) {
>   CompressionCodec codec = (CompressionCodec)
> ReflectionUtils.newInstance(compressionClass, new 
> Configuration());
>   in = codec.createInputStream(stm);
> {code}
> stm should be closed in the finally block.
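
A minimal sketch of the suggested fix, assuming a simplified parseLogFile 
shape (editor's illustration, not the actual JHLogAnalyzer code):
{code}
// Close stm in a finally block so the underlying stream is released even if
// wrapping it in a decompression stream (or parsing) fails.
import java.io.IOException;
import java.io.InputStream;

class CloseStmSketch {
  static void parseLogFile(InputStream stm, boolean compressed) throws IOException {
    InputStream in = stm;
    try {
      if (compressed) {
        in = wrapWithCodec(stm);  // stands in for codec.createInputStream(stm)
      }
      // ... read and parse lines from 'in' ...
    } finally {
      stm.close();  // what the JIRA asks for: always close stm upon return
    }
  }

  private static InputStream wrapWithCodec(InputStream raw) { return raw; }
}
{code}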



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2017-12-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302861#comment-16302861
 ] 

Ted Yu commented on HDFS-7101:
--

The TestFailureToReadEdits failure was not related to the patch.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.
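
For illustration, a null-safe version of that logic could look like the sketch 
below (editor's example; the names mirror the snippet but this is not the 
DFSck code):
{code}
import java.io.BufferedReader;
import java.io.IOException;

class LastLineSketch {
  static int exitCode(BufferedReader input, String healthyMarker) throws IOException {
    String line;
    String lastLine = null;
    int errCode = -1;
    while ((line = input.readLine()) != null) {
      lastLine = line;
    }
    // Guard against lastLine being null (e.g. no output was read) before
    // dereferencing it, which avoids the NPE described above.
    if (lastLine != null && lastLine.endsWith(healthyMarker)) {
      errCode = 0;
    }
    return errCode;
  }
}
{code}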



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2017-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6092:
-
Status: Open  (was: Patch Available)

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, haosdent-HDFS-6092-v2.patch, 
> haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical service name string contains the default port (8020), but the 
> URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brando Li who helped debug this.
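
One defensive workaround for callers, sketched here as an editor's example (the 
8020 default is an assumption for illustration), is to fall back to a default 
port when the URI omits it:
{code}
import java.net.InetSocketAddress;
import java.net.URI;

class PortFallbackSketch {
  static final int DEFAULT_NN_PORT = 8020;  // assumed default NN RPC port

  static InetSocketAddress toSocketAddress(URI fsUri) {
    // getUri() may return -1 for the port even though the canonical service
    // name includes it; normalize before building the socket address.
    int port = fsUri.getPort() == -1 ? DEFAULT_NN_PORT : fsUri.getPort();
    return new InetSocketAddress(fsUri.getHost(), port);
  }
}
{code}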



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12962) Ozone: SCM: ContainerStateManager#updateContainerState updates incorrect AllocatedBytes to container info.

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12962:
---
Summary: Ozone: SCM: ContainerStateManager#updateContainerState updates 
incorrect AllocatedBytes to container info.  (was: Ozone: SCM: 
ContainerStateManager: updateContainerState updates incorrect AllocatedBytes to 
container info.)

> Ozone: SCM: ContainerStateManager#updateContainerState updates incorrect 
> AllocatedBytes to container info.
> --
>
> Key: HDFS-12962
> URL: https://issues.apache.org/jira/browse/HDFS-12962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> While updating container state through 
> {{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
> {{ContainerStateManager}} should be used, not the one from 
> {{ContainerMapping}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12962) Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect AllocatedBytes to container info.

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12962:
---
Summary: Ozone: SCM: ContainerStateManager: updateContainerState updates 
incorrect AllocatedBytes to container info.  (was: Ozone: SCM: Updating 
container state {{FULL_CONTAINER}} while processing container report doesn't 
update SCM's {{container.db}})

> Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect 
> AllocatedBytes to container info.
> ---
>
> Key: HDFS-12962
> URL: https://issues.apache.org/jira/browse/HDFS-12962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> While processing container reports, SCM will move containers to the 
> {{FULL_CONTAINER}} state if {{containerUsedPercentage >= 
> containerCloseThreshold}}; this should be updated in {{container.db}} as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12962) Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect AllocatedBytes to container info.

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12962:
---
Description: While updating container state through 
{{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
{{ContainerStateManager}} should be used, not the one from 
{{ContainerMapping}}.  (was: While updating container state through 
{{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
{{ContainerStateManager}} should be used, not the one from {{ContainerMapping}})

> Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect 
> AllocatedBytes to container info.
> ---
>
> Key: HDFS-12962
> URL: https://issues.apache.org/jira/browse/HDFS-12962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> While updating container state through 
> {{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
> {{ContainerStateManager}} should be used, not the one from 
> {{ContainerMapping}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12962) Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect AllocatedBytes to container info.

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12962:
---
Description: While updating container state through 
{{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
{{ContainerStateManager}} should be used, not the one from {{ContainerMapping}} 
 (was: While processing container report SCM will move the containers to 
{{FULL_CONTAINER}} state if {{containerUsedPercentage >= 
containerCloseThreshold}}, this should be updated in {{container.db}} as well.)

> Ozone: SCM: ContainerStateManager: updateContainerState updates incorrect 
> AllocatedBytes to container info.
> ---
>
> Key: HDFS-12962
> URL: https://issues.apache.org/jira/browse/HDFS-12962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> While updating container state through 
> {{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
> {{ContainerStateManager}} should be used, not the one from 
> {{ContainerMapping}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12962) Ozone: SCM: Updating container state {{FULL_CONTAINER}} while processing container report doesn't update SCM's {{container.db}}

2017-12-24 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12962:
--

 Summary: Ozone: SCM: Updating container state {{FULL_CONTAINER}} 
while processing container report doesn't update SCM's {{container.db}}
 Key: HDFS-12962
 URL: https://issues.apache.org/jira/browse/HDFS-12962
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


While processing container reports, SCM will move containers to the 
{{FULL_CONTAINER}} state if {{containerUsedPercentage >= 
containerCloseThreshold}}; this should be updated in {{container.db}} as well.
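
The threshold check described above, written out as a small editor's sketch 
with placeholder names (not SCM's actual API):
{code}
class ContainerCloseCheck {
  static boolean shouldMarkFull(long usedBytes, long capacityBytes,
                                double containerCloseThreshold) {
    double containerUsedPercentage = (double) usedBytes / capacityBytes;
    // When usage crosses the threshold the container moves to FULL_CONTAINER;
    // per this JIRA, the new state must also be persisted to container.db.
    return containerUsedPercentage >= containerCloseThreshold;
  }
}
{code}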



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12958) Ozone: remove setAllocatedBytes method in ContainerInfo

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12958:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Ozone: remove setAllocatedBytes method in ContainerInfo
> ---
>
> Key: HDFS-12958
> URL: https://issues.apache.org/jira/browse/HDFS-12958
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-12958-HDFS-7240.001.patch
>
>
> We may want to remove the {{setAllocatedBytes}} method from {{ContainerInfo}} 
> and keep all fields of {{ContainerInfo}} immutable, so that clients won't 
> accidentally change a {{ContainerInfo}} and rely on the changed instance.
> An alternative to having {{setAllocatedBytes}} is to always create a new 
> {{ContainerInfo}} instance whenever it needs to be changed.
> This is based on [this 
> comment|https://issues.apache.org/jira/browse/HDFS-12751?focusedCommentId=16299750=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16299750]
>  from HDFS-12751.
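
The immutable alternative mentioned above could look roughly like this sketch 
(editor's illustration; not the real ContainerInfo class):
{code}
final class ContainerInfoSketch {
  private final long containerId;
  private final long allocatedBytes;

  ContainerInfoSketch(long containerId, long allocatedBytes) {
    this.containerId = containerId;
    this.allocatedBytes = allocatedBytes;
  }

  /** Instead of a setter, return a modified copy so shared state never mutates. */
  ContainerInfoSketch withAllocatedBytes(long newAllocatedBytes) {
    return new ContainerInfoSketch(containerId, newAllocatedBytes);
  }

  long getAllocatedBytes() { return allocatedBytes; }
}
{code}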



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12958) Ozone: remove setAllocatedBytes method in ContainerInfo

2017-12-24 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302845#comment-16302845
 ] 

Nanda kumar commented on HDFS-12958:


I have committed it to the feature branch. Thanks [~vagarychen] for the 
contribution.

> Ozone: remove setAllocatedBytes method in ContainerInfo
> ---
>
> Key: HDFS-12958
> URL: https://issues.apache.org/jira/browse/HDFS-12958
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-12958-HDFS-7240.001.patch
>
>
> We may want to remove the {{setAllocatedBytes}} method from {{ContainerInfo}} 
> and keep all fields of {{ContainerInfo}} immutable, so that clients won't 
> accidentally change a {{ContainerInfo}} and rely on the changed instance.
> An alternative to having {{setAllocatedBytes}} is to always create a new 
> {{ContainerInfo}} instance whenever it needs to be changed.
> This is based on [this 
> comment|https://issues.apache.org/jira/browse/HDFS-12751?focusedCommentId=16299750=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16299750]
>  from HDFS-12751.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12958) Ozone: remove setAllocatedBytes method in ContainerInfo

2017-12-24 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302840#comment-16302840
 ] 

Nanda kumar commented on HDFS-12958:


Thanks [~vagarychen] for filing and working on this JIRA. +1, the change looks 
good to me.
The test failures and findbugs warning are not related; I will commit this 
shortly.

> Ozone: remove setAllocatedBytes method in ContainerInfo
> ---
>
> Key: HDFS-12958
> URL: https://issues.apache.org/jira/browse/HDFS-12958
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HDFS-12958-HDFS-7240.001.patch
>
>
> We may want to remove the {{setAllocatedBytes}} method from {{ContainerInfo}} 
> and keep all fields of {{ContainerInfo}} immutable, so that clients won't 
> accidentally change a {{ContainerInfo}} and rely on the changed instance.
> An alternative to having {{setAllocatedBytes}} is to always create a new 
> {{ContainerInfo}} instance whenever it needs to be changed.
> This is based on [this 
> comment|https://issues.apache.org/jira/browse/HDFS-12751?focusedCommentId=16299750=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16299750]
>  from HDFS-12751.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12870) Ozone: Service Discovery: REST endpoint in KSM for getServiceList

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12870:
---
Status: Patch Available  (was: Open)

> Ozone: Service Discovery: REST endpoint in KSM for getServiceList
> -
>
> Key: HDFS-12870
> URL: https://issues.apache.org/jira/browse/HDFS-12870
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12870-HDFS-7240.000.patch
>
>
> A new REST call should be added in KSM that returns the list of services in 
> the Ozone cluster; this will be used by OzoneClient to establish the 
> connection.
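
For a sense of how a client could consume such an endpoint, here is a 
hypothetical sketch; the service types, fields, and method names are 
assumptions, not the KSM implementation:
{code}
import java.util.List;

class ServiceDiscoverySketch {
  enum ServiceType { KSM, SCM, DATANODE }

  static final class ServiceInfo {
    final ServiceType type;
    final String hostname;
    final int port;
    ServiceInfo(ServiceType type, String hostname, int port) {
      this.type = type; this.hostname = hostname; this.port = port;
    }
  }

  /** A client calls the REST endpoint once, then picks the address it needs. */
  static String resolve(List<ServiceInfo> services, ServiceType wanted) {
    return services.stream()
        .filter(s -> s.type == wanted)
        .map(s -> s.hostname + ":" + s.port)
        .findFirst()
        .orElseThrow(() -> new IllegalStateException("service not found: " + wanted));
  }
}
{code}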



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12870) Ozone: Service Discovery: REST endpoint in KSM for getServiceList

2017-12-24 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12870:
---
Attachment: HDFS-12870-HDFS-7240.000.patch

> Ozone: Service Discovery: REST endpoint in KSM for getServiceList
> -
>
> Key: HDFS-12870
> URL: https://issues.apache.org/jira/browse/HDFS-12870
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12870-HDFS-7240.000.patch
>
>
> A new REST call should be added in KSM that returns the list of services in 
> the Ozone cluster; this will be used by OzoneClient to establish the 
> connection.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org