[jira] [Commented] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation

2017-10-19 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212240#comment-16212240
 ] 

Mohammad Kamrul Islam commented on HDFS-12621:
--

Thanks [~xkrogen]

So we agree that option #2 is the last resort.

However, I want to pursue your proposal that was adopted in router-based 
federation. Can you please provide a more concrete example (such as a 
code commit) of it? I found this JIRA (HDFS-12284), which was created to address 
delegation tokens for the router. 

> Inconsistency/confusion around ViewFileSystem.getDelagation 
> 
>
> Key: HDFS-12621
> URL: https://issues.apache.org/jira/browse/HDFS-12621
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> *Symptom*: 
> When a user invokes ViewFileSystem.getDelegationToken(String renewer), she 
> gets "null". However, for any other file system, it returns a valid 
> delegation token. For a normal user, this is very confusing, and it takes 
> substantial time to debug and find an alternative.
> *Root Cause:*
>  ViewFileSystem inherits the basic implementation from 
> FileSystem.getDelegationToken(), which returns "_null_". The comments in the 
> source code indicate not to use it and to use addDelegationTokens() instead. 
> However, it works fine for DistributedFileSystem. 
> In short, the same client call works for hdfs:// but not for viewfs://, and 
> there is no way for an end-user to identify the root cause. This also creates 
> a lot of confusion for any service that is supposed to work for both viewfs 
> and hdfs.
> *Possible Solution*:
> _Option 1:_ Add a LOG.warn() with the reason and the alternative before returning 
> "null" in the base class.
> _Option 2:_ As done for other filesystems, ViewFileSystem can override the method with 
> an implementation that returns the token related to fs.defaultFS. In this 
> case, the defaultFS is something like "viewfs://..". We need to find out the 
> actual namenode and use that to retrieve the delegation token.
> _Option 3:_ Open for suggestions.
> *Last note:* My hunch is that there are very few users who may be using 
> viewfs:// with Kerberos; therefore, this was not exposed earlier.
> I'm working on a good solution. Please add your suggestions.
>  
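A minimal sketch of what Option 2 could look like (an illustration only, built on the existing FileSystem.getChildFileSystems() hook; picking the first child is a simplification, not the actual fix):

{code}
// Sketch of Option 2 (hypothetical): override getDelegationToken() in
// ViewFileSystem and delegate to a filesystem behind the mount table.
@Override
public Token<?> getDelegationToken(String renewer) throws IOException {
  // getChildFileSystems() already backs addDelegationTokens(); choosing the
  // first child stands in for resolving "the" namenode behind fs.defaultFS.
  FileSystem[] children = getChildFileSystems();
  if (children.length > 0) {
    return children[0].getDelegationToken(renewer);
  }
  return null;
}
{code}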






[jira] [Commented] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation

2017-10-12 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202997#comment-16202997
 ] 

Mohammad Kamrul Islam commented on HDFS-12621:
--

Thanks [~sureshms] for re-adding me.

[~xkrogen] : thanks for your comments. Follow up comments:
_addDelegationTokens_ for ViewFileSystem works fine and collects the 
appropriate tokens from the child filesystem(s). The confusion is that 
*getDelegationToken()* works for most filesystems but not for ViewFileSystem.

Which option do you think would be a good idea? I think option #1 could be less 
risky and would at least give the caller a message to use 
_addDelegationTokens_ instead. 
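For reference, the path that does work today is addDelegationTokens(); a minimal usage sketch (standard FileSystem/Credentials API, with "yarn" as a placeholder renewer):

{code}
// Illustrative: collecting tokens in a way that works for both hdfs:// and
// viewfs://, since it walks the child filesystems behind the mount table.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);          // may be a ViewFileSystem
Credentials creds = new Credentials();
Token<?>[] tokens = fs.addDelegationTokens("yarn", creds);
// By contrast, fs.getDelegationToken("yarn") currently returns null for viewfs://.
{code}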



> Inconsistency/confusion around ViewFileSystem.getDelagation 
> 
>
> Key: HDFS-12621
> URL: https://issues.apache.org/jira/browse/HDFS-12621
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> *Symptom*: 
> When a user invokes ViewFileSystem.getDelegationToken(String renewer), she 
> gets "null". However, for any other file system, it returns a valid 
> delegation token. For a normal user, this is very confusing, and it takes 
> substantial time to debug and find an alternative.
> *Root Cause:*
>  ViewFileSystem inherits the basic implementation from 
> FileSystem.getDelegationToken(), which returns "_null_". The comments in the 
> source code indicate not to use it and to use addDelegationTokens() instead. 
> However, it works fine for DistributedFileSystem. 
> In short, the same client call works for hdfs:// but not for viewfs://, and 
> there is no way for an end-user to identify the root cause. This also creates 
> a lot of confusion for any service that is supposed to work for both viewfs 
> and hdfs.
> *Possible Solution*:
> _Option 1:_ Add a LOG.warn() with the reason and the alternative before returning 
> "null" in the base class.
> _Option 2:_ As done for other filesystems, ViewFileSystem can override the method with 
> an implementation that returns the token related to fs.defaultFS. In this 
> case, the defaultFS is something like "viewfs://..". We need to find out the 
> actual namenode and use that to retrieve the delegation token.
> _Option 3:_ Open for suggestions.
> *Last note:* My hunch is that there are very few users who may be using 
> viewfs:// with Kerberos; therefore, this was not exposed earlier.
> I'm working on a good solution. Please add your suggestions.
>  






[jira] [Created] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation

2017-10-09 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HDFS-12621:


 Summary: Inconsistency/confusion around 
ViewFileSystem.getDelagation 
 Key: HDFS-12621
 URL: https://issues.apache.org/jira/browse/HDFS-12621
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.3
Reporter: Mohammad Kamrul Islam


*Symptom*: 
When a user invokes ViewFileSystem.getDelegationToken(String renewer), she gets 
"null". However, for any other file system, it returns a valid delegation 
token. For a normal user, this is very confusing, and it takes substantial time to 
debug and find an alternative.

*Root Cause:*
 ViewFileSystem inherits the basic implementation from 
FileSystem.getDelegationToken(), which returns "_null_". The comments in the 
source code indicate not to use it and to use addDelegationTokens() instead. 
However, it works fine for DistributedFileSystem. 

In short, the same client call works for hdfs:// but not for viewfs://, and 
there is no way for an end-user to identify the root cause. This also creates a 
lot of confusion for any service that is supposed to work for both viewfs and 
hdfs.

*Possible Solution*:

_Option 1:_ Add a LOG.warn() with the reason and the alternative before returning "null" 
in the base class.

_Option 2:_ As done for other filesystems, ViewFileSystem can override the method with an 
implementation that returns the token related to fs.defaultFS. In this case, 
the defaultFS is something like "viewfs://..". We need to find out the actual 
namenode and use that to retrieve the delegation token.

_Option 3:_ Open for suggestions.

*Last note:* My hunch is that there are very few users who may be using viewfs:// 
with Kerberos; therefore, this was not exposed earlier.

I'm working on a good solution. Please add your suggestions.
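As a rough illustration of Option 1 (a sketch, not a proposed patch), the base-class method would only gain a warning:

{code}
// Option 1 sketch (hypothetical): in FileSystem.getDelegationToken(), point
// callers at the working alternative before returning null.
public Token<?> getDelegationToken(String renewer) throws IOException {
  LOG.warn(getClass().getSimpleName() + " does not implement"
      + " getDelegationToken(); returning null."
      + " Use addDelegationTokens() to collect tokens instead.");
  return null;
}
{code}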



 






[jira] [Commented] (HDFS-7422) TestEncryptionZonesWithKMS fails against Java 8

2014-12-04 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234918#comment-14234918
 ] 

Mohammad Kamrul Islam commented on HDFS-7422:
-

[~tedyu] I ran it against JDK 1.8.0.5; it worked fine on Linux.

Did it fail multiple times?



> TestEncryptionZonesWithKMS fails against Java 8
> ---
>
> Key: HDFS-7422
> URL: https://issues.apache.org/jira/browse/HDFS-7422
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/12/ :
> {code}
> REGRESSION:  
> org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWriteUsingWebHdfs
> Error Message:
> Stream closed.
> Stack Trace:
> java.io.IOException: Stream closed.
> at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:385)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:91)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:656)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:622)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:458)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:487)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:483)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$UnresolvedUrlOpener.connect(WebHdfsFileSystem.java:1204)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:120)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.(ByteRangeInputStream.java:89)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$OffsetUrlInputStream.(WebHdfsFileSystem.java:1261)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1175)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.verifyFilesEqual(DFSTestUtil.java:1399)
> at 
> org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:634)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException: Stream closed.
> at 
> org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:165)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:353)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:91)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:608)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:458)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:487)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.au

[jira] [Commented] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2014-10-01 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155970#comment-14155970
 ] 

Mohammad Kamrul Islam commented on HDFS-7175:
-

Patch looks good to me.

Can you please address the test case failure?


> Client-side SocketTimeoutException during Fsck
> --
>
> Key: HDFS-7175
> URL: https://issues.apache.org/jira/browse/HDFS-7175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Carl Steinbach
>Assignee: Akira AJISAKA
> Attachments: HDFS-7175.patch
>
>
> HDFS-2538 disabled status reporting for the fsck command (it can optionally 
> be enabled with the -showprogress option). We have observed that without 
> status reporting the client will abort with a read timeout:
> {noformat}
> [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
> Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
> 14/09/30 06:03:41 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
> cause:java.net.SocketTimeoutException: Read timed out
> Exception in thread "main" java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.read(SocketInputStream.java:152)
>   at java.net.SocketInputStream.read(SocketInputStream.java:122)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
>   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
>   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
>   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
>   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
> {noformat}
> Since there's nothing for the client to read, it will abort if the time 
> required to complete the fsck operation is longer than the client's read 
> timeout setting.
> I can think of a couple of ways to fix this:
> # Set an infinite read timeout on the client side (not a good idea!).
> # Have the server side write (and flush) zeros to the wire and instruct the 
> client to ignore these characters instead of echoing them (see the sketch 
> below).
> # It's possible that flushing an empty buffer on the server side will trigger 
> an HTTP response with a zero-length payload. This may be enough to keep the 
> client from hanging up.
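A minimal sketch of idea #2, assuming the fsck servlet's PrintWriter is in scope; the interval, the filler character, and runFsckScan() are illustrative, not the actual fsck code:

{code}
// Sketch (hypothetical): a server-side keep-alive ticker that writes and
// flushes a filler byte while the scan runs, so the client's read timeout
// never fires; the client would be taught to skip these bytes.
ScheduledExecutorService keepAlive = Executors.newSingleThreadScheduledExecutor();
keepAlive.scheduleAtFixedRate(() -> {
  synchronized (out) {        // 'out' is the servlet's PrintWriter
    out.print('\0');          // filler the client is told to ignore
    out.flush();
  }
}, 10, 10, TimeUnit.SECONDS);
try {
  runFsckScan(out);           // stand-in for the actual fsck traversal
} finally {
  keepAlive.shutdownNow();
}
{code}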





[jira] [Commented] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-19 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14001470#comment-14001470
 ] 

Mohammad Kamrul Islam commented on HDFS-6397:
-

Looks like the test case failure is not related to this patch: 
I ran the test locally and it passed.

There is an existing JIRA with a patch available to address this transient 
failure: https://issues.apache.org/jira/browse/HDFS-6308.



> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch, HDFS-6397.3.patch
>
>
> Context: 
> When the NN is started without any live datanodes, but there are nodes in 
> dfs.includes, the NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the deadnodes link (which shows the count as 0), it will 
> display the list of deadnodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of the JMX response explain it further.
> Look at the value of "NumDeadDataNodes":
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at "DeadNodes":
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}





[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6397:


Attachment: HDFS-6397.3.patch

Thanks again [~kihwal] for your quick response.

Uploaded the patch to address the test case failure.


> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch, HDFS-6397.3.patch
>
>
> Context: 
> When the NN is started without any live datanodes, but there are nodes in 
> dfs.includes, the NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the deadnodes link (which shows the count as 0), it will 
> display the list of deadnodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of the JMX response explain it further.
> Look at the value of "NumDeadDataNodes":
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at "DeadNodes":
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}





[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6397:


Attachment: HDFS-6397.2.patch

Thanks [~kihwal] for the review and suggestion.
Uploaded the new patch.

I would also prefer that we get this into 2.4.1.


> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When the NN is started without any live datanodes, but there are nodes in 
> dfs.includes, the NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the deadnodes link (which shows the count as 0), it will 
> display the list of deadnodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of the JMX response explain it further.
> Look at the value of "NumDeadDataNodes":
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at "DeadNodes":
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}





[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-15 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6397:


Attachment: HDFS-6397.1.patch

Patch uploaded

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Attachments: HDFS-6397.1.patch
>
>
> Context: 
> When the NN is started without any live datanodes, but there are nodes in 
> dfs.includes, the NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the deadnodes link (which shows the count as 0), it will 
> display the list of deadnodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of the JMX response explain it further.
> Look at the value of "NumDeadDataNodes":
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at "DeadNodes":
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}





[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-15 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6397:


Status: Patch Available  (was: Open)

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Attachments: HDFS-6397.1.patch
>
>
> Context: 
> When the NN is started without any live datanodes, but there are nodes in 
> dfs.includes, the NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the deadnodes link (which shows the count as 0), it will 
> display the list of deadnodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of the JMX response explain it further.
> Look at the value of "NumDeadDataNodes":
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at "DeadNodes":
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}





[jira] [Created] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-14 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HDFS-6397:
---

 Summary: NN shows inconsistent value in deadnode count 
 Key: HDFS-6397
 URL: https://issues.apache.org/jira/browse/HDFS-6397
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


Context: 
When the NN is started without any live datanodes, but there are nodes in 
dfs.includes, the NN shows the dead count as '0'.

There are two inconsistencies:
1. If you click on the deadnodes link (which shows the count as 0), it will display 
the list of deadnodes correctly.
2. Hadoop 1.x used to display the count correctly.

The following snippets of the JMX response explain it further.
Look at the value of "NumDeadDataNodes":
{noformat}
 {
"name" : "Hadoop:service=NameNode,name=FSNamesystemState",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
"CapacityTotal" : 0,
"CapacityUsed" : 0,
... 
   "NumLiveDataNodes" : 0,
"NumDeadDataNodes" : 0,
"NumDecomLiveDataNodes" : 0,
"NumDecomDeadDataNodes" : 0,
"NumDecommissioningDataNodes" : 0,
"NumStaleDataNodes" : 0
  },
{noformat}
Look at "DeadNodes":
{noformat}
{
"name" : "Hadoop:service=NameNode,name=NameNodeInfo",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",


"TotalBlocks" : 70,
"TotalFiles" : 129,
"NumberOfMissingBlocks" : 0,
"LiveNodes" : "{}",
"DeadNodes" : 
"{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
"DecomNodes" : "{}",
   .
  }
{noformat}
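One way to remove the inconsistency (a sketch of the general idea, not the committed patch) is to derive the JMX count from the same DEAD datanode report that backs the "DeadNodes" JSON, so the count and the list cannot diverge:

{code}
// Sketch (hypothetical), inside FSNamesystem: compute the dead-node count
// from the DEAD report, the same source used to render "DeadNodes".
@Override // FSNamesystemMBean
public int getNumDeadDataNodes() {
  return getBlockManager().getDatanodeManager()
      .getDatanodeListForReport(DatanodeReportType.DEAD).size();
}
{code}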








[jira] [Commented] (HDFS-4368) Backport HDFS-3553 (hftp proxy tokens) to branch-1

2014-04-21 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976073#comment-13976073
 ] 

Mohammad Kamrul Islam commented on HDFS-4368:
-

Is anyone working on this so it can be closed?

> Backport HDFS-3553 (hftp proxy tokens) to branch-1
> --
>
> Key: HDFS-4368
> URL: https://issues.apache.org/jira/browse/HDFS-4368
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.2
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Critical
> Attachments: HDFS-4368-branch-1.1.patch
>
>
> Proxy tokens are broken for hftp.  The impact is systems using proxy tokens, 
> such as oozie jobs, cannot use hftp.





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-11 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967118#comment-13967118
 ] 

Mohammad Kamrul Islam commented on HDFS-6180:
-

bq. The changes turn out to be a lot bigger than I anticipated. It might be 
risky to put it in at the very last moment. Moving it to a blocker of 2.5.0.


What about release 2.4.1? It could be coming soon.


> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.5.0
>
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, HDFS-6180.004.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab. Some nodes are 
> also being double-reported in the deadnode and in-service sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959193#comment-13959193
 ] 

Mohammad Kamrul Islam commented on HDFS-6180:
-

This is the right Apache way!
I'm fine with [~wheat9] working on this; I will stop working on it.

Looking forward to the patch!


> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
> Attachments: dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab. Some nodes are 
> also being double-reported in the deadnode and in-service sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.
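Since hosts appearing in both lists are the visible symptom, an external check can confirm it quickly; a sketch assuming Jackson on the classpath and the default 2.x NameNode HTTP port (the class name and argument handling are illustrative):

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sanity check (not part of any patch): fetch the NameNode /jmx
// servlet and flag hosts that appear in both LiveNodes and DeadNodes.
public class DeadNodeOverlapCheck {
  public static void main(String[] args) throws Exception {
    String nn = args.length > 0 ? args[0] : "http://localhost:50070";
    ObjectMapper mapper = new ObjectMapper();
    JsonNode bean = mapper
        .readTree(new URL(nn + "/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"))
        .get("beans").get(0);
    // LiveNodes/DeadNodes are JSON maps serialized as strings inside the bean.
    JsonNode live = mapper.readTree(bean.get("LiveNodes").asText());
    JsonNode dead = mapper.readTree(bean.get("DeadNodes").asText());
    Set<String> liveHosts = new HashSet<>();
    live.fieldNames().forEachRemaining(liveHosts::add);
    dead.fieldNames().forEachRemaining(host -> {
      if (liveHosts.contains(host)) {
        System.out.println("Host reported both live and dead: " + host);
      }
    });
  }
}
{code}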





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959190#comment-13959190
 ] 

Mohammad Kamrul Islam commented on HDFS-6180:
-

Yes, we ([~alluri] and I) also found the same code to be buggy.
We are working on a solution; I have a couple of possible options that I will 
propose soon for comments.


> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
> Attachments: dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab. Some nodes are 
> also being double-reported in the deadnode and in-service sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Assigned] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-02 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HDFS-6180:
---

Assignee: Mohammad Kamrul Islam

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Mohammad Kamrul Islam
> Attachments: dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab. Some nodes are 
> also being double-reported in the deadnode and in-service sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.







[jira] [Updated] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-02 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6180:


Assignee: (was: Mohammad Kamrul Islam)

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab. Some nodes are 
> also being double-reported in the deadnode and in-service sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-27 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-2538:


Attachment: HDFS-2538.3.patch

Uploaded a patch based on the consensus approach.
This is for trunk only and is backward-incompatible.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch, HDFS-2538.3.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.
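For illustration, the change amounts to gating the per-file dot on a flag (a sketch; the flag wiring is an assumption, though -showprogress is the option the fsck client exposes):

{code}
// Sketch (hypothetical): in the fsck output path, only emit the per-file
// progress dot when the operator asked for it via -showprogress.
if (showprogress) {
  out.print('.');
  out.flush();
}
{code}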





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-24 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-2538:


Status: Patch Available  (was: Open)

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-24 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-2538:


Attachment: HDFS-2538.2.patch

Addressed Jakob's comment.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-21 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-2538:


Affects Version/s: (was: 1.0.0)
                   (was: 0.20.204.0)
                   2.2.0
           Status: Patch Available  (was: Open)

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-21 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-2538:


Attachment: HDFS-2538.1.patch

This is based on the original patch provided by [~aw].
It is for Hadoop 2.3+ only.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.





[jira] [Assigned] (HDFS-2538) option to disable fsck dots

2014-02-20 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HDFS-2538:
---

Assignee: Mohammad Kamrul Islam

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.20.204.0, 1.0.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch
>
>
> This patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal. I haven't done any benchmarks, but I suspect fsck is now 
> 300% faster to boot.


