[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2018-09-26 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629503#comment-16629503
 ] 

Ted Yu commented on HDFS-6092:
--

Test failure was not related.

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, 
> haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, 
> hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.
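For illustration, a minimal defensive sketch (not the attached patch) of how a downstream caller could normalize the missing port before building an InetSocketAddress; the helper class name and the 8020 fallback are assumptions made here for the example:
{code}
import java.net.InetSocketAddress;
import java.net.URI;

// Hypothetical helper, not part of the attached patch: normalize a URI that
// lacks an explicit port so that -1 never reaches new InetSocketAddress(host, port).
public class NNAddressHelper {
  // 8020 is the historical NameNode RPC default; treated as an assumption here.
  private static final int DEFAULT_NN_PORT = 8020;

  public static InetSocketAddress toSocketAddress(URI fsUri) {
    int port = fsUri.getPort();      // returns -1 when the URI has no explicit port
    if (port == -1) {
      port = DEFAULT_NN_PORT;        // fall back instead of passing -1 through
    }
    return new InetSocketAddress(fsUri.getHost(), port);
  }
}
{code}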



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2018-09-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16618259#comment-16618259
 ] 

Ted Yu commented on HDFS-6092:
--

Test failure was not related.

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, 
> haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, 
> hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-09-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu edited comment on HDFS-13515 at 9/17/18 11:01 PM:
-

Can you log the remote address in case of exception ?

Thanks


was (Author: yuzhih...@gmail.com):
Can you log the remote address in case of exception?

Thanks

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .
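For illustration, a minimal sketch of the kind of change requested: catch the NoRouteToHostException and rethrow it with the remote endpoint in the message. This is not the actual NetUtils code; the helper class and method names are made up for the example:
{code}
import java.io.IOException;
import java.net.NoRouteToHostException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;

// Hypothetical helper showing the requested behavior: surface the remote
// endpoint instead of the bare "No route to host" message.
final class ConnectWithContext {
  static void connect(SocketChannel channel, SocketAddress remote, int timeoutMs)
      throws IOException {
    try {
      channel.socket().connect(remote, timeoutMs);
    } catch (NoRouteToHostException e) {
      NoRouteToHostException wrapped =
          new NoRouteToHostException("No route to host: " + remote);
      wrapped.initCause(e);   // keep the original stack trace as the cause
      throw wrapped;
    }
  }
}
{code}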



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-08-02 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu edited comment on HDFS-13515 at 8/2/18 8:27 PM:
---

Can you log the remote address in case of exception?

Thanks


was (Author: yuzhih...@gmail.com):
Can you log the remote address in case of exception ?

Thanks

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-06-12 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509677#comment-16509677
 ] 

Ted Yu commented on HDFS-13511:
---

Can this be merged into the 3.1 branch?

thanks

> Provide specialized exception when block length cannot be obtained
> --
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, 
> HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && 
> e.getMessage().toLowerCase()
> .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployment.
> After discussing with [~ste...@apache.org], a better approach is to introduce 
> a specialized IOException, e.g. CannotObtainBlockLengthException, so that 
> downstream projects don't have to rely on string matching.
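For illustration, a sketch of the suggested approach, assuming a plain IOException subclass; the class eventually added by the patch may differ in package and constructors:
{code}
import java.io.IOException;

// Sketch of the suggested specialized exception.
public class CannotObtainBlockLengthException extends IOException {
  public CannotObtainBlockLengthException(String locatedBlock) {
    super("Cannot obtain block length for " + locatedBlock);
  }
}
{code}
A downstream caller could then replace the startsWith("cannot obtain block length for") check with a catch (CannotObtainBlockLengthException e) clause.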



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2018-05-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482132#comment-16482132
 ] 

Ted Yu commented on HDFS-6092:
--

Test failure was not related.

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, 
> haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, 
> hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-05-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu edited comment on HDFS-13515 at 5/21/18 2:42 AM:


Can you log the remote address in case of exception ?

Thanks


was (Author: yuzhih...@gmail.com):
Can you log the remote address in case of exception ?

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-05-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu commented on HDFS-13515:
---

Can you log the remote address in case of exception ?

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-04-28 Thread Ted Yu (JIRA)
Ted Yu created HDFS-13515:
-

 Summary: NetUtils#connect should log remote address for 
NoRouteToHostException
 Key: HDFS-13515
 URL: https://issues.apache.org/jira/browse/HDFS-13515
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu


{code}
hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
{code}
In the above stack trace, the remote host was not logged.
This makes troubleshooting a bit hard.

NetUtils#connect should log remote address for NoRouteToHostException .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-04-27 Thread Ted Yu (JIRA)
Ted Yu created HDFS-13511:
-

 Summary: Provide specialized exception when block length cannot be 
obtained
 Key: HDFS-13511
 URL: https://issues.apache.org/jira/browse/HDFS-13511
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu


In a downstream project, I saw the following code:
{code}
FSDataInputStream inputStream = hdfs.open(new Path(path));
...
if (options.getRecoverFailedOpen() && dfs != null && 
e.getMessage().toLowerCase()
.startsWith("cannot obtain block length for")) {
{code}
The above tightly depends on the following in DFSInputStream#readBlockLength
{code}
throw new IOException("Cannot obtain block length for " + locatedblock);
{code}
The check based on string matching is brittle in production deployment.

After discussing with [~ste...@apache.org], a better approach is to introduce 
a specialized IOException, e.g. CannotObtainBlockLengthException, so that 
downstream projects don't have to rely on string matching.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-04-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368441#comment-16368441
 ] 

Ted Yu edited comment on HDFS-7101 at 4/10/18 1:03 AM:
---

More review, please .


was (Author: yuzhih...@gmail.com):
More review, please.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.
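For illustration, a null-safe sketch of the quoted loop; the surrounding DFSck code (input, NamenodeFsck) is assumed from the snippet above, and only the guard on lastLine is the point:
{code}
String line;
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
    lastLine = line;               // remember the last line actually read
    // ... existing per-line processing ...
  }
} catch (IOException e) {
  // readLine() failed; lastLine may still be null at this point
}
if (lastLine != null && lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;                     // dereference lastLine only when it was set
}
{code}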



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2018-03-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408652#comment-16408652
 ] 

Ted Yu commented on HDFS-12574:
---

See if you have a recommendation on how the following code can be formulated 
using Public APIs:

https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java#L229

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, 
> HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, 
> HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, 
> HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, 
> HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, 
> HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, 
> HDFS-12574.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2018-03-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408628#comment-16408628
 ] 

Ted Yu commented on HDFS-12574:
---

Is it possible to expose decryptEncryptedDataEncryptionKey in a 
@InterfaceAudience.Public class so that downstream project(s) can use it?

thanks

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, 
> HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, 
> HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, 
> HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, 
> HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, 
> HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, 
> HDFS-12574.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2018-03-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408590#comment-16408590
 ] 

Ted Yu commented on HDFS-12574:
---

HBase 2.0 needs to call HdfsKMSUtil#decryptEncryptedDataEncryptionKey (through 
reflection).

Is it possible to add an annotation / comment to the method so that the method is 
stable for future Hadoop releases?

Thanks
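For illustration, a hedged sketch of such a reflection call; the fully qualified class name and the parameter list of decryptEncryptedDataEncryptionKey are assumptions here, not taken from the Hadoop source:
{code}
import java.lang.reflect.Method;

// Hypothetical illustration of calling a non-public Hadoop helper via reflection.
final class ReflectiveKmsCall {
  static Object decrypt(Object feInfo, Object keyProvider) throws Exception {
    Class<?> util = Class.forName("org.apache.hadoop.hdfs.HdfsKMSUtil");  // assumed package
    Method target = null;
    for (Method m : util.getDeclaredMethods()) {
      if (m.getName().equals("decryptEncryptedDataEncryptionKey")) {
        target = m;                 // look up by name to avoid hard-coding parameter types
        break;
      }
    }
    if (target == null) {
      throw new NoSuchMethodException("decryptEncryptedDataEncryptionKey");
    }
    target.setAccessible(true);     // the method is not part of a public API
    return target.invoke(null, feInfo, keyProvider);  // assumed static with two arguments
  }
}
{code}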

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, 
> HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, 
> HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, 
> HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, 
> HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, 
> HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, 
> HDFS-12574.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13206) IllegalStateException: Unable to finalize edits file

2018-02-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380902#comment-16380902
 ] 

Ted Yu commented on HDFS-13206:
---

{code}
devtmpfs   65906460 0  65906460   0% /dev
tmpfs  65914268 0  65914268   0% /dev/shm
{code}

> IllegalStateException: Unable to finalize edits file
> 
>
> Key: HDFS-13206
> URL: https://issues.apache.org/jira/browse/HDFS-13206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Priority: Minor
> Attachments: testFavoredNodeTableImport-output.txt
>
>
> I noticed the following in HBase test output running against Hadoop 3:
> {code}
> 2018-02-28 18:40:18,491 ERROR [Time-limited test] namenode.JournalSet(402): 
> Error: finalize log segment 1, 658 failed for (journal 
> JournalAndStream(mgr=FileJournalManager(root=/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1),
>  stream=null))
> java.lang.IllegalStateException: Unable to finalize edits file 
> /mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1/current/edits_inprogress_001
>   at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:153)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:224)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1427)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:398)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:110)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1320)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1909)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:1013)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:2047)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1987)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1958)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1951)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:767)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1109)
>   at 
> org.apache.hadoop.hbase.master.balancer.TestFavoredNodeTableImport.stopCluster(TestFavoredNodeTableImport.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13206) IllegalStateException: Unable to finalize edits file

2018-02-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380829#comment-16380829
 ] 

Ted Yu commented on HDFS-13206:
---

I don't think that was the case.

I used the {{df}} command on the Linux host where the test was run.
No partition was even close to half used.

> IllegalStateException: Unable to finalize edits file
> 
>
> Key: HDFS-13206
> URL: https://issues.apache.org/jira/browse/HDFS-13206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Priority: Minor
> Attachments: testFavoredNodeTableImport-output.txt
>
>
> I noticed the following in HBase test output running against Hadoop 3:
> {code}
> 2018-02-28 18:40:18,491 ERROR [Time-limited test] namenode.JournalSet(402): 
> Error: finalize log segment 1, 658 failed for (journal 
> JournalAndStream(mgr=FileJournalManager(root=/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1),
>  stream=null))
> java.lang.IllegalStateException: Unable to finalize edits file 
> /mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1/current/edits_inprogress_001
>   at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:153)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:224)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1427)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:398)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:110)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1320)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1909)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:1013)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:2047)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1987)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1958)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1951)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:767)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1109)
>   at 
> org.apache.hadoop.hbase.master.balancer.TestFavoredNodeTableImport.stopCluster(TestFavoredNodeTableImport.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13206) IllegalStateException: Unable to finalize edits file

2018-02-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-13206:
--
Attachment: testFavoredNodeTableImport-output.txt

> IllegalStateException: Unable to finalize edits file
> 
>
> Key: HDFS-13206
> URL: https://issues.apache.org/jira/browse/HDFS-13206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Priority: Minor
> Attachments: testFavoredNodeTableImport-output.txt
>
>
> I noticed the following in HBase test output running against Hadoop 3:
> {code}
> 2018-02-28 18:40:18,491 ERROR [Time-limited test] namenode.JournalSet(402): 
> Error: finalize log segment 1, 658 failed for (journal 
> JournalAndStream(mgr=FileJournalManager(root=/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1),
>  stream=null))
> java.lang.IllegalStateException: Unable to finalize edits file 
> /mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1/current/edits_inprogress_001
>   at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:153)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:224)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1427)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:398)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:110)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1320)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1909)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:1013)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:2047)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1987)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1958)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1951)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:767)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1109)
>   at 
> org.apache.hadoop.hbase.master.balancer.TestFavoredNodeTableImport.stopCluster(TestFavoredNodeTableImport.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13206) IllegalStateException: Unable to finalize edits file

2018-02-28 Thread Ted Yu (JIRA)
Ted Yu created HDFS-13206:
-

 Summary: IllegalStateException: Unable to finalize edits file
 Key: HDFS-13206
 URL: https://issues.apache.org/jira/browse/HDFS-13206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ted Yu


I noticed the following in HBase test output running against Hadoop 3:
{code}
2018-02-28 18:40:18,491 ERROR [Time-limited test] namenode.JournalSet(402): 
Error: finalize log segment 1, 658 failed for (journal 
JournalAndStream(mgr=FileJournalManager(root=/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1),
 stream=null))
java.lang.IllegalStateException: Unable to finalize edits file 
/mnt/disk2/a/2-hbase/hbase-server/target/test-data/5670112c-31f1-43b0-af31-c1182e142e63/cluster_8f993609-c3a1-4fb4-8b3d-0e642261deb1/dfs/name-0-1/current/edits_inprogress_001
  at 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:153)
  at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:224)
  at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
  at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:219)
  at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1427)
  at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:398)
  at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:110)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1320)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1909)
  at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:1013)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:2047)
  at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1987)
  at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1958)
  at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1951)
  at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:767)
  at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:1109)
  at 
org.apache.hadoop.hbase.master.balancer.TestFavoredNodeTableImport.stopCluster(TestFavoredNodeTableImport.java:71)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2018-02-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6092:
-
Attachment: HDFS-6092-v5.patch

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, 
> haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, 
> hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2018-02-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6092:
-
Assignee: Ted Yu
  Status: Patch Available  (was: Open)

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, 
> haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, 
> hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-02-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368441#comment-16368441
 ] 

Ted Yu commented on HDFS-7101:
--

More review, please.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-5750) JHLogAnalyzer#parseLogFile() should close stm upon return

2017-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-5750:
-
Description: 
stm is assigned to in.

But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.

  was:
stm is assigned to in
But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.


> JHLogAnalyzer#parseLogFile() should close stm upon return
> -
>
> Key: HDFS-5750
> URL: https://issues.apache.org/jira/browse/HDFS-5750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> stm is assigned to in.
> But stm may point to another InputStream :
> {code}
> if(compressionClass != null) {
>   CompressionCodec codec = (CompressionCodec)
> ReflectionUtils.newInstance(compressionClass, new 
> Configuration());
>   in = codec.createInputStream(stm);
> {code}
> stm should be closed in the finally block.
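For illustration, a minimal sketch of the requested cleanup, with simplified surrounding code; only the finally-block close is the point:
{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical, simplified version of the parse loop; the real JHLogAnalyzer
// method has more logic, only the stream cleanup matters here.
final class ParseSketch {
  static void parseLogFile(FileSystem fs, Path logFile,
                           Class<? extends CompressionCodec> compressionClass)
      throws IOException {
    InputStream stm = fs.open(logFile);
    InputStream in = stm;
    try {
      if (compressionClass != null) {
        CompressionCodec codec =
            ReflectionUtils.newInstance(compressionClass, new Configuration());
        in = codec.createInputStream(stm);   // in now wraps stm
      }
      // ... read and parse lines from in ...
    } finally {
      IOUtils.closeStream(in);    // close the wrapper first
      IOUtils.closeStream(stm);   // make sure the underlying stream is closed too
    }
  }
}
{code}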



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2017-12-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302861#comment-16302861
 ] 

Ted Yu commented on HDFS-7101:
--

The TestFailureToReadEdits failure was not related to the patch.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2017-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6092:
-
Status: Open  (was: Patch Available)

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, haosdent-HDFS-6092-v2.patch, 
> haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> The canonical name string contains the default port (8020),
> but the URI doesn't contain a port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brandon Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7101) Potential null dereference in DFSck#doWork()

2017-12-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124972#comment-15124972
 ] 

Ted Yu edited comment on HDFS-7101 at 12/14/17 1:53 AM:


Agreed.


was (Author: yuzhih...@gmail.com):
Agree.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5012) replica.getGenerationStamp() may be >= recoveryId

2017-11-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-5012.
--
Resolution: Cannot Reproduce

> replica.getGenerationStamp() may be >= recoveryId
> -
>
> Key: HDFS-5012
> URL: https://issues.apache.org/jira/browse/HDFS-5012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Ted Yu
> Attachments: testReplicationQueueFailover.txt
>
>
> The following was first observed by [~jdcryans] in 
> TestReplicationQueueFailover running against 2.0.5-alpha:
> {code}
> 2013-07-16 17:14:33,340 ERROR [IPC Server handler 7 on 35081] 
> security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user 
> (auth:SIMPLE) cause:java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> 2013-07-16 17:14:33,341 WARN  
> [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] 
> datanode.DataNode(1894): Failed to obtain replica info for block 
> (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from 
> datanode (=127.0.0.1:47006)
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-5750) JHLogAnalyzer#parseLogFile() should close stm upon return

2017-11-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-5750:
-
Description: 
stm is assigned to in
But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.

  was:
stm is assigned to in
But stm may point to another InputStream :

{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.


> JHLogAnalyzer#parseLogFile() should close stm upon return
> -
>
> Key: HDFS-5750
> URL: https://issues.apache.org/jira/browse/HDFS-5750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> stm is assigned to in
> But stm may point to another InputStream :
> {code}
> if(compressionClass != null) {
>   CompressionCodec codec = (CompressionCodec)
> ReflectionUtils.newInstance(compressionClass, new 
> Configuration());
>   in = codec.createInputStream(stm);
> {code}
> stm should be closed in the finally block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-5750) JHLogAnalyzer#parseLogFile() should close stm upon return

2017-10-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-5750:
-
Description: 
stm is assigned to in
But stm may point to another InputStream :

{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.

  was:
stm is assigned to in
But stm may point to another InputStream :
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.


> JHLogAnalyzer#parseLogFile() should close stm upon return
> -
>
> Key: HDFS-5750
> URL: https://issues.apache.org/jira/browse/HDFS-5750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> stm is assigned to in
> But stm may point to another InputStream :
> {code}
> if(compressionClass != null) {
>   CompressionCodec codec = (CompressionCodec)
> ReflectionUtils.newInstance(compressionClass, new 
> Configuration());
>   in = codec.createInputStream(stm);
> {code}
> stm should be closed in the finally block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5012) replica.getGenerationStamp() may be >= recoveryId

2017-10-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214331#comment-16214331
 ] 

Ted Yu commented on HDFS-5012:
--

Planning to resolve this since there has been no repro.

> replica.getGenerationStamp() may be >= recoveryId
> -
>
> Key: HDFS-5012
> URL: https://issues.apache.org/jira/browse/HDFS-5012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Ted Yu
> Attachments: testReplicationQueueFailover.txt
>
>
> The following was first observed by [~jdcryans] in 
> TestReplicationQueueFailover running against 2.0.5-alpha:
> {code}
> 2013-07-16 17:14:33,340 ERROR [IPC Server handler 7 on 35081] 
> security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user 
> (auth:SIMPLE) cause:java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> 2013-07-16 17:14:33,341 WARN  
> [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] 
> datanode.DataNode(1894): Failed to obtain replica info for block 
> (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from 
> datanode (=127.0.0.1:47006)
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-4796) Port HDFS-4721 'Speed up lease/block recovery when DN fails and a block goes into recovery' to branch 1

2017-10-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-4796.
--
Resolution: Won't Fix

> Port HDFS-4721 'Speed up lease/block recovery when DN fails and a block goes 
> into recovery' to branch 1
> ---
>
> Key: HDFS-4796
> URL: https://issues.apache.org/jira/browse/HDFS-4796
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> This was observed while doing HBase WAL recovery. HBase uses append to write 
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also hosts the HBase 
> regionserver that owns the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose 
> DN1 as the primary DN to do the recovery even though DN1 has failed and is 
> not heartbeating any more.
> To speed up lease/block recovery, we always choose the datanode with the most 
> recent heartbeat.
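For illustration, a minimal sketch of the selection rule described above, using a stand-in type rather than the real DatanodeInfo class:
{code}
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: pick the replica whose datanode heartbeated most recently
// as the primary for block recovery. DatanodeInfoSketch is a stand-in type.
final class PrimaryDatanodeChooser {
  static final class DatanodeInfoSketch {
    final String name;
    final long lastHeartbeatMillis;
    DatanodeInfoSketch(String name, long lastHeartbeatMillis) {
      this.name = name;
      this.lastHeartbeatMillis = lastHeartbeatMillis;
    }
  }

  static DatanodeInfoSketch choosePrimary(List<DatanodeInfoSketch> replicas) {
    // The node that heartbeated most recently is most likely still alive.
    return replicas.stream()
        .max(Comparator.comparingLong((DatanodeInfoSketch d) -> d.lastHeartbeatMillis))
        .orElseThrow(() -> new IllegalArgumentException("no replicas"));
  }
}
{code}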



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5834) TestCheckpoint#testCheckpoint may fail due to Bad value assertion

2017-10-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-5834.
--
Resolution: Cannot Reproduce

> TestCheckpoint#testCheckpoint may fail due to Bad value assertion
> -
>
> Key: HDFS-5834
> URL: https://issues.apache.org/jira/browse/HDFS-5834
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> I saw the following when running test suite on Linux:
> {code}
> testCheckpoint(org.apache.hadoop.hdfs.server.namenode.TestCheckpoint)  Time 
> elapsed: 3.058 sec  <<< FAILURE!
> java.lang.AssertionError: Bad value for metric GetImageNumOps
> Expected: gt(0)
>  got: <0L>
> at org.junit.Assert.assertThat(Assert.java:780)
> at 
> org.apache.hadoop.test.MetricsAsserts.assertCounterGt(MetricsAsserts.java:318)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.testCheckpoint(TestCheckpoint.java:1058)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5831) TestAuditLogs#testAuditAllowedStat sometimes fails in trunk

2017-10-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-5831.
--
Resolution: Cannot Reproduce

> TestAuditLogs#testAuditAllowedStat sometimes fails in trunk
> ---
>
> Key: HDFS-5831
> URL: https://issues.apache.org/jira/browse/HDFS-5831
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 
> 5831-org.apache.hadoop.hdfs.server.namenode.TestAuditLogs-output.txt
>
>
> Running TestAuditLogs on Linux, I got:
> {code}
> testAuditAllowedStat[1](org.apache.hadoop.hdfs.server.namenode.TestAuditLogs) 
>  Time elapsed: 6.677 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertNotNull(Assert.java:526)
> at org.junit.Assert.assertNotNull(Assert.java:537)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.verifyAuditLogsRepeat(TestAuditLogs.java:312)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.verifyAuditLogs(TestAuditLogs.java:295)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.testAuditAllowedStat(TestAuditLogs.java:163)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5718) TestHttpsFileSystem intermittently fails with Port in use error

2017-10-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-5718.
--
Resolution: Cannot Reproduce

> TestHttpsFileSystem intermittently fails with Port in use error
> ---
>
> Key: HDFS-5718
> URL: https://issues.apache.org/jira/browse/HDFS-5718
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1634/testReport/junit/org.apache.hadoop.hdfs.web/TestHttpsFileSystem/org_apache_hadoop_hdfs_web_TestHttpsFileSystem/
>  :
> {code}
> java.net.BindException: Port in use: localhost:50475
>   at java.net.PlainSocketImpl.socketBind(Native Method)
>   at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
>   at java.net.ServerSocket.bind(ServerSocket.java:328)
>   at java.net.ServerSocket.<init>(ServerSocket.java:194)
>   at javax.net.ssl.SSLServerSocket.<init>(SSLServerSocket.java:106)
>   at 
> com.sun.net.ssl.internal.ssl.SSLServerSocketImpl.<init>(SSLServerSocketImpl.java:108)
>   at 
> com.sun.net.ssl.internal.ssl.SSLServerSocketFactoryImpl.createServerSocket(SSLServerSocketFactoryImpl.java:72)
>   at 
> org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:478)
>   at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
>   at org.apache.hadoop.http.HttpServer.openListeners(HttpServer.java:973)
>   at org.apache.hadoop.http.HttpServer.start(HttpServer.java:914)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:412)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:769)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:315)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1846)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1746)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1203)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:673)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:342)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:323)
>   at 
> org.apache.hadoop.hdfs.web.TestHttpsFileSystem.setUp(TestHttpsFileSystem.java:64)
> {code}
> This could have been caused by concurrent test(s).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-12599:
--
Attachment: HDFS-12599.v1.patch

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch, 
> HDFS-12599.v1.patch
>
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-12599:
--
Attachment: HDFS-12599.v1.patch

Rerun QA.

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch
>
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193960#comment-16193960
 ] 

Ted Yu commented on HDFS-12599:
---

I don't think the failed tests were related to the patch.

Ran TestDFSStripedOutputStream\* locally, which passed.
{code}
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
Tests run: 16, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 497.37 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
{code}
The above test needs some speedup.

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HDFS-12599.v1.patch
>
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-12599:
--
Assignee: Ted Yu
  Status: Patch Available  (was: Open)

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HDFS-12599.v1.patch
>
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-12599:
--
Attachment: HDFS-12599.v1.patch

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HDFS-12599.v1.patch
>
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-05 Thread Ted Yu (JIRA)
Ted Yu created HDFS-12599:
-

 Summary: Move DataNodeTestUtils.mockDatanodeBlkPinning into mock 
test util class
 Key: HDFS-12599
 URL: https://issues.apache.org/jira/browse/HDFS-12599
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu


HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
brought dependency on mockito back into DataNodeTestUtils

Downstream, this resulted in:
{code}
java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
  at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
  at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
  at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
  at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
  at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
  at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12599) Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class

2017-10-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193492#comment-16193492
 ] 

Ted Yu commented on HDFS-12599:
---

How about moving the method to 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/InternalDataNodeTestUtils.java
 (which depends on mockito)?

[~ste...@apache.org] [~elserj]
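
For illustration, the move could look roughly like the sketch below, keeping the mockito import confined to the mockito-aware util class; the simplified DataNode stand-in and the method body are assumptions, not the actual patch:
{code}
// Sketch only: keep the mockito-based helper in a test utility that is
// allowed to depend on mockito, so DataNodeTestUtils (reached from
// MiniDFSCluster in downstream tests) no longer needs org.mockito on
// the classpath.
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

public class InternalDataNodeTestUtilsSketch {
  /** Simplified stand-in for the real DataNode type. */
  public static class DataNode {
    public boolean getPinning(Object block) {
      return false;
    }
  }

  /** Returns a spy whose getPinning() always reports the given value. */
  public static DataNode mockDatanodeBlkPinning(DataNode dn, boolean pinned) {
    DataNode spyDn = spy(dn);
    doReturn(pinned).when(spyDn).getPinning(any());
    return spyDn;
  }
}
{code}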

> Move DataNodeTestUtils.mockDatanodeBlkPinning into mock test util class
> ---
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which 
> brought dependency on mockito back into DataNodeTestUtils
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11214) Upgrade netty-all to 4.1.1.Final

2017-01-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-11214:
--
Attachment: HDFS-11214.v7.patch

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HDFS-11214
> URL: https://issues.apache.org/jira/browse/HDFS-11214
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Ted Yu
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HDFS-11214.v7.patch
>
>
> Upgrade Netty
> this is a clone of HADOOP-13866, created to kick off yetus on HDFS, that 
> being where netty is used



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2016-10-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617127#comment-15617127
 ] 

Ted Yu commented on HDFS-6092:
--

Patch v4 has gone stale.

> DistributedFileSystem#getCanonicalServiceName() and 
> DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
> --
>
> Key: HDFS-6092
> URL: https://issues.apache.org/jira/browse/HDFS-6092
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6092-v4.patch, haosdent-HDFS-6092-v2.patch, 
> haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, hdfs-6092-v2.txt, hdfs-6092-v3.txt
>
>
> I discovered this when working on HBASE-10717
> Here is sample code to reproduce the problem:
> {code}
> Path desPath = new Path("hdfs://127.0.0.1/");
> FileSystem desFs = desPath.getFileSystem(conf);
> 
> String s = desFs.getCanonicalServiceName();
> URI uri = desFs.getUri();
> {code}
> Canonical name string contains the default port - 8020
> But uri doesn't contain port.
> This would result in the following exception:
> {code}
> testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
> 0.001 sec  <<< ERROR!
> java.lang.IllegalArgumentException: port out of range:-1
> at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
> at java.net.InetSocketAddress.(InetSocketAddress.java:224)
> at 
> org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
> {code}
> Thanks to Brando Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7567) Potential null dereference in FSEditLogLoader#applyEditLogOp()

2016-08-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15424817#comment-15424817
 ] 

Ted Yu commented on HDFS-7567:
--

Thanks for the link, [~jojochuang]

> Potential null dereference in FSEditLogLoader#applyEditLogOp()
> --
>
> Key: HDFS-7567
> URL: https://issues.apache.org/jira/browse/HDFS-7567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7567.patch
>
>
> {code}
>   INodeFile oldFile = INodeFile.valueOf(iip.getLastINode(), path, true);
>   if (oldFile != null && addCloseOp.overwrite) {
> ...
>   INodeFile newFile = oldFile;
> ...
>   // Update the salient file attributes.
>   newFile.setAccessTime(addCloseOp.atime, Snapshot.CURRENT_STATE_ID);
>   newFile.setModificationTime(addCloseOp.mtime, 
> Snapshot.CURRENT_STATE_ID);
> {code}
> The last two lines are not protected by null check.
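
As a self-contained illustration of the guard the description calls for (stand-in types and a simplified method shape, not the real FSEditLogLoader code):
{code}
// Stand-in types; only demonstrates adding the missing null check so the
// attribute updates are skipped when the inode is absent.
class NullGuardSketch {
  static class INodeFileStub {
    void setAccessTime(long atime) { }
    void setModificationTime(long mtime) { }
  }

  static void applyCloseOp(INodeFileStub oldFile, long atime, long mtime) {
    if (oldFile == null) {
      return; // avoids the unprotected dereference flagged above
    }
    // Update the salient file attributes only when the inode exists.
    oldFile.setAccessTime(atime);
    oldFile.setModificationTime(mtime);
  }
}
{code}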



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8548) Minicluster throws NPE on shutdown

2016-06-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314619#comment-15314619
 ] 

Ted Yu commented on HDFS-8548:
--

Can this be backported to the 2.7 branch?

Thanks

> Minicluster throws NPE on shutdown
> --
>
> Key: HDFS-8548
> URL: https://issues.apache.org/jira/browse/HDFS-8548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mike Drob
>Assignee: Surendra Singh Lilhore
>  Labels: reviewed
> Fix For: 2.8.0
>
> Attachments: HDFS-8548.patch
>
>
> After running Solr tests, when we attempt to shut down the mini cluster 
> that we use for our unit tests, we get an NPE in the clean up thread. The 
> test still completes normally, but this generates a lot of extra noise.
> {noformat}
>[junit4]   2> java.lang.reflect.InvocationTargetException
>[junit4]   2>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>[junit4]   2>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]   2>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]   2>  at java.lang.reflect.Method.invoke(Method.java:497)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
>[junit4]   2>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
>[junit4]   2>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
>[junit4]   2>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
>[junit4]   2>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
>[junit4]   2>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>[junit4]   2>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
>[junit4]   2>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
>[junit4]   2>  at 
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
>[junit4]   2>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
>[junit4]   2>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
>[junit4]   2>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
>[junit4]   2>  at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.teardownClass(HdfsTestUtil.java:197)
>[junit4]   2>  at 
> org.apache.solr.core.HdfsDirectoryFactoryTest.teardownClass(HdfsDirectoryFactoryTest.java:67)
>[junit4]   2>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>[junit4]   2>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]   2>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  

[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2016-01-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124972#comment-15124972
 ] 

Ted Yu commented on HDFS-7101:
--

Agree.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.
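
A minimal, runnable sketch of the kind of guard being discussed; the HEALTHY_STATUS marker string and the method shape are assumptions for the example, not the DFSck code:
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

class FsckOutputCheckSketch {
  static final String HEALTHY_STATUS = "is HEALTHY"; // assumed marker text

  // Reads all lines and only inspects lastLine when it is non-null,
  // so an empty or interrupted stream cannot trigger an NPE.
  static int errCodeFrom(BufferedReader input) throws IOException {
    String line;
    String lastLine = null;
    int errCode = -1;
    while ((line = input.readLine()) != null) {
      lastLine = line;
    }
    if (lastLine != null && lastLine.endsWith(HEALTHY_STATUS)) {
      errCode = 0;
    }
    return errCode;
  }

  public static void main(String[] args) throws IOException {
    BufferedReader r = new BufferedReader(
        new StringReader("The filesystem under path '/' is HEALTHY"));
    System.out.println(errCodeFrom(r)); // prints 0
  }
}
{code}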



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support "unbuffer"

2016-01-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15114065#comment-15114065
 ] 

Ted Yu commented on HDFS-7694:
--

[~cmccabe] [~djp]:
Patch applies on branch-2.6 cleanly.
Can you commit to branch-2.6?

This would benefit HBASE-9393

Thanks

> FSDataInputStream should support "unbuffer"
> ---
>
> Key: HDFS-7694
> URL: https://issues.apache.org/jira/browse/HDFS-7694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
> HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch
>
>
> For applications that have many open HDFS (or other Hadoop filesystem) files, 
> it would be useful to have an API to clear readahead buffers and sockets.  
> This could be added to the existing APIs as an optional interface, in much 
> the same way as we added setReadahead / setDropBehind / etc.
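
For anyone following along, a usage sketch against a branch that already carries this change, where FSDataInputStream exposes unbuffer(); the path and configuration below are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/example"))) {
      byte[] buf = new byte[4096];
      in.read(buf, 0, buf.length);
      // Drop readahead buffers and sockets but keep the stream open;
      // useful when many files are held open (e.g. HBase store files).
      in.unbuffer();
    }
  }
}
{code}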



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support "unbuffer"

2016-01-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15114069#comment-15114069
 ] 

Ted Yu commented on HDFS-7694:
--

[~cmccabe] [~djp]:
Patch applies on branch-2.6 cleanly.
Can you commit to branch-2.6?

This would benefit HBASE-9393

Thanks

> FSDataInputStream should support "unbuffer"
> ---
>
> Key: HDFS-7694
> URL: https://issues.apache.org/jira/browse/HDFS-7694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
> HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch
>
>
> For applications that have many open HDFS (or other Hadoop filesystem) files, 
> it would be useful to have an API to clear readahead buffers and sockets.  
> This could be added to the existing APIs as an optional interface, in much 
> the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7101) Potential null dereference in DFSck#doWork()

2016-01-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7101:
-
Attachment: HDFS-7101.v1.patch

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support "unbuffer"

2016-01-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113588#comment-15113588
 ] 

Ted Yu commented on HDFS-7694:
--

Is there a compatibility concern with backporting this to the 2.6 branch?

Thanks

> FSDataInputStream should support "unbuffer"
> ---
>
> Key: HDFS-7694
> URL: https://issues.apache.org/jira/browse/HDFS-7694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
> HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch
>
>
> For applications that have many open HDFS (or other Hadoop filesystem) files, 
> it would be useful to have an API to clear readahead buffers and sockets.  
> This could be added to the existing APIs as an optional interface, in much 
> the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2016-01-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090348#comment-15090348
 ] 

Ted Yu commented on HDFS-7101:
--

Looks like the patch needs to be updated.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-10-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:
{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 

[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933894#comment-14933894
 ] 

Ted Yu commented on HDFS-6264:
--

I don't think removing the deprecation would result in the following test failure:
{code}
testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
  Time elapsed: 0.02 sec  <<< ERROR!
java.lang.NullPointerException: null
at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1688)
at 
org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter(FSMainOperationsBaseTest.java:624)
{code}

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt, hdfs-6264-v3.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.
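
For context, the behavior HBase fencing depends on today comes from the deprecated call. A small sketch of what clients currently have to write; the path, replication, and block-size parameters are placeholders:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NonRecursiveCreateExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/no/such/parent/file");  // parent does not exist
    try (FSDataOutputStream out = fs.createNonRecursive(
        file, true, 4096, (short) 3, 64L * 1024 * 1024, null)) {
      out.writeUTF("should not get here");
    } catch (IOException e) {
      // On HDFS the call fails because the parent directory is missing;
      // some local FileSystems may instead report the call as unsupported.
      System.out.println("create failed as expected: " + e.getMessage());
    }
  }
}
{code}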



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933904#comment-14933904
 ] 

Ted Yu commented on HDFS-6264:
--

I ran TestIPC, TestHDFSContractCreate and TestStorageRestore locally with the 
patch; all of them passed.

TestNativeAzureFileSystemOperationsMocked fails without patch.

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt, hdfs-6264-v3.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9169) TestNativeAzureFileSystemOperationsMocked fails in trunk

2015-09-28 Thread Ted Yu (JIRA)
Ted Yu created HDFS-9169:


 Summary: TestNativeAzureFileSystemOperationsMocked fails in trunk
 Key: HDFS-9169
 URL: https://issues.apache.org/jira/browse/HDFS-9169
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


When working on HDFS-6264, QA bot reported the following:
{code}
testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked)
  Time elapsed: 0.02 sec  <<< ERROR!
java.lang.NullPointerException: null
at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1688)
at 
org.apache.hadoop.fs.FSMainOperationsBaseTest.testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter(FSMainOp
{code}
On hadoop trunk branch, the above can be reproduced without any patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Attachment: hdfs-6264-v3.txt

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt, hdfs-6264-v3.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909349#comment-14909349
 ] 

Ted Yu commented on HDFS-6264:
--

w.r.t. the two checkstyle warnings, the count didn't change with vs. without 
the patch.

[~kihwal]: Mind taking one more look ?

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Attachment: hdfs-6264-v2.txt

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907397#comment-14907397
 ] 

Ted Yu commented on HDFS-6264:
--

[~kihwal]:
Is there anything I can do?

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5897) TestNNWithQJM#testNewNamenodeTakesOverWriter occasionally fails in trunk

2015-09-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-5897.
--
Resolution: Cannot Reproduce

> TestNNWithQJM#testNewNamenodeTakesOverWriter occasionally fails in trunk
> 
>
> Key: HDFS-5897
> URL: https://issues.apache.org/jira/browse/HDFS-5897
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 5897-output.html
>
>
> From 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1665/testReport/junit/org.apache.hadoop.hdfs.qjournal/TestNNWithQJM/testNewNamenodeTakesOverWriter/
>  :
> {code}
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.read(SocketInputStream.java:129)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:401)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
> {code}
> I saw:
> {code}
> 2014-02-06 11:38:37,970 ERROR namenode.EditLogInputStream 
> (RedundantEditLogInputStream.java:nextOp(221)) - Got error reading edit log 
> input stream 
> http://localhost:40509/getJournal?jid=myjournal=3=-51%3A1571339494%3A0%3AtestClusterID;
>  failing over to edit log 
> http://localhost:56244/getJournal?jid=myjournal=3=-51%3A1571339494%3A0%3AtestClusterID
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException:
>  got premature end-of-file at txid 0; expected file to go up to 4
>   at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:194)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
>   at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:167)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:120)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:708)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:606)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:874)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:634)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:446)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:502)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:658)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:643)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1291)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:939)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:824)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:678)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
>   at 
> org.apache.hadoop.hdfs.qjournal.TestNNWithQJM.testNewNamenodeTakesOverWriter(TestNNWithQJM.java:145)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> 

[jira] [Assigned] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HDFS-6264:


Assignee: Ted Yu

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Status: Patch Available  (was: Open)

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791096#comment-14791096
 ] 

Ted Yu commented on HDFS-6264:
--

bq. The applied patch generated 1 new checkstyle issues (total was 229, now 
229).

I don't think an additional issue was introduced.

Please let me know what else needs to be done [~kihwal]

Thanks

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Description: 
FileSystem#createNonRecursive() is deprecated.
However, there is no DistributedFileSystem#create() implementation which throws 
exception if parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().
Variant of create() method should be added which throws exception if parent 
directory doesn't exist.

  was:
FileSystem#createNonRecursive() is deprecated.
However, there is no DistributedFileSystem#create() implementation which throws 
exception if parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().

Variant of create() method should be added which throws exception if parent 
directory doesn't exist.


> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-08-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:

{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 

[jira] [Updated] (HDFS-6290) File is not closed in OfflineImageViewerPB#run()

2015-08-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6290:
-
Description: 
{code}
  } else if (processor.equals("XML")) {
new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
"r"));
{code}
The RandomAccessFile instance should be closed before the method returns.

  was:
{code}
  } else if (processor.equals("XML")) {
new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
"r"));
{code}

The RandomAccessFile instance should be closed before the method returns.


 File is not closed in OfflineImageViewerPB#run()
 

 Key: HDFS-6290
 URL: https://issues.apache.org/jira/browse/HDFS-6290
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Reporter: Ted Yu
Priority: Minor

 {code}
   } else if (processor.equals("XML")) {
 new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
 "r"));
 {code}
 The RandomAccessFile instance should be closed before the method returns.
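
A minimal sketch of the kind of change the description asks for, assuming try-with-resources (Java 7+) is acceptable in that code path; names follow the snippet above:

{code}
} else if (processor.equals("XML")) {
  // Sketch: try-with-resources closes the RandomAccessFile on every path,
  // including when visit() throws.
  try (RandomAccessFile raf = new RandomAccessFile(inputFile, "r")) {
    new PBImageXmlWriter(conf, out).visit(raf);
  }
}
{code}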



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-08-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Description: 
FileSystem#createNonRecursive() is deprecated.
However, there is no DistributedFileSystem#create() implementation which throws 
exception if parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().

Variant of create() method should be added which throws exception if parent 
directory doesn't exist.

  was:
FileSystem#createNonRecursive() is deprecated.

However, there is no DistributedFileSystem#create() implementation which throws 
exception if parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().

Variant of create() method should be added which throws exception if parent 
directory doesn't exist.


 Provide FileSystem#create() variant which throws exception if parent 
 directory doesn't exist
 

 Key: HDFS-6264
 URL: https://issues.apache.org/jira/browse/HDFS-6264
 Project: Hadoop HDFS
  Issue Type: Task
  Components: namenode
Affects Versions: 2.4.0
Reporter: Ted Yu
  Labels: hbase
 Attachments: hdfs-6264-v1.txt


 FileSystem#createNonRecursive() is deprecated.
 However, there is no DistributedFileSystem#create() implementation which 
 throws exception if parent directory doesn't exist.
 This limits clients' migration away from the deprecated method.
 For HBase, IO fencing relies on the behavior of 
 FileSystem#createNonRecursive().
 Variant of create() method should be added which throws exception if parent 
 directory doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6290) File is not closed in OfflineImageViewerPB#run()

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6290:
-
Description: 
{code}
  } else if (processor.equals("XML")) {
new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
"r"));
{code}

The RandomAccessFile instance should be closed before the method returns.

  was:
{code}
  } else if (processor.equals("XML")) {
new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
"r"));
{code}
The RandomAccessFile instance should be closed before the method returns.


 File is not closed in OfflineImageViewerPB#run()
 

 Key: HDFS-6290
 URL: https://issues.apache.org/jira/browse/HDFS-6290
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Reporter: Ted Yu
Priority: Minor

 {code}
   } else if (processor.equals("XML")) {
 new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
 "r"));
 {code}
 The RandomAccessFile instance should be closed before the method returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6092:
-
Description: 
I discovered this when working on HBASE-10717
Here is sample code to reproduce the problem:
{code}
Path desPath = new Path("hdfs://127.0.0.1/");
FileSystem desFs = desPath.getFileSystem(conf);

String s = desFs.getCanonicalServiceName();
URI uri = desFs.getUri();
{code}

Canonical name string contains the default port - 8020
But uri doesn't contain port.
This would result in the following exception:
{code}
testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
0.001 sec  <<< ERROR!
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
at 
org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
{code}
Thanks to Brando Li who helped debug this.

  was:
I discovered this when working on HBASE-10717
Here is sample code to reproduce the problem:
{code}
Path desPath = new Path("hdfs://127.0.0.1/");
FileSystem desFs = desPath.getFileSystem(conf);

String s = desFs.getCanonicalServiceName();
URI uri = desFs.getUri();
{code}
Canonical name string contains the default port - 8020
But uri doesn't contain port.
This would result in the following exception:
{code}
testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
0.001 sec  <<< ERROR!
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
at 
org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
{code}
Thanks to Brando Li who helped debug this.


 DistributedFileSystem#getCanonicalServiceName() and 
 DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
 --

 Key: HDFS-6092
 URL: https://issues.apache.org/jira/browse/HDFS-6092
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
  Labels: BB2015-05-TBR
 Attachments: HDFS-6092-v4.patch, haosdent-HDFS-6092-v2.patch, 
 haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, hdfs-6092-v2.txt, hdfs-6092-v3.txt


 I discovered this when working on HBASE-10717
 Here is sample code to reproduce the problem:
 {code}
 Path desPath = new Path("hdfs://127.0.0.1/");
 FileSystem desFs = desPath.getFileSystem(conf);
 
 String s = desFs.getCanonicalServiceName();
 URI uri = desFs.getUri();
 {code}
 Canonical name string contains the default port - 8020
 But uri doesn't contain port.
 This would result in the following exception:
 {code}
 testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
 0.001 sec  <<< ERROR!
 java.lang.IllegalArgumentException: port out of range:-1
 at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
 at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
 at 
 org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
 {code}
 Thanks to Brando Li who helped debug this.
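
For illustration, a client-side guard that avoids the port of -1 when building a socket address from the URI; the 8020 fallback is an assumption about the default NameNode RPC port and not part of any attached patch:

{code}
// Sketch only: desFs.getUri() may omit the port even though the canonical
// service name carries the default one, so normalize before use.
URI uri = desFs.getUri();
int port = uri.getPort();
if (port == -1) {
  port = 8020; // assumed HDFS default NameNode RPC port
}
InetSocketAddress nnAddr = new InetSocketAddress(uri.getHost(), port);
{code}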



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7101) Potential null dereference in DFSck#doWork()

2015-07-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7101:
-
Description: 
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}
If readLine() throws exception, lastLine may be null, leading to NPE.

  was:
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}

If readLine() throws exception, lastLine may be null, leading to NPE.


 Potential null dereference in DFSck#doWork()
 

 Key: HDFS-7101
 URL: https://issues.apache.org/jira/browse/HDFS-7101
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7101_001.patch


 {code}
 String lastLine = null;
 int errCode = -1;
 try {
   while ((line = input.readLine()) != null) {
 ...
 if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
   errCode = 0;
 {code}
 If readLine() throws exception, lastLine may be null, leading to NPE.
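
A minimal sketch of the null guard the description suggests (names follow the snippet above; this is not necessarily what the attached patch does):

{code}
// Sketch: only inspect lastLine once at least one line has been read.
if (lastLine != null && lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
}
{code}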



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-06-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:
{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 

[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:

{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 

[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:
{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 

[jira] [Updated] (HDFS-7101) Potential null dereference in DFSck#doWork()

2015-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7101:
-
Description: 
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}

If readLine() throws exception, lastLine may be null, leading to NPE.

  was:
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}
If readLine() throws exception, lastLine may be null, leading to NPE.


 Potential null dereference in DFSck#doWork()
 

 Key: HDFS-7101
 URL: https://issues.apache.org/jira/browse/HDFS-7101
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7101_001.patch


 {code}
 String lastLine = null;
 int errCode = -1;
 try {
   while ((line = input.readLine()) != null) {
 ...
 if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
   errCode = 0;
 {code}
 If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7471:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

 TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
 -

 Key: HDFS-7471
 URL: https://issues.apache.org/jira/browse/HDFS-7471
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Binglin Chang
  Labels: BB2015-05-TBR
 Attachments: HDFS-7471.001.patch, PreCommit-HDFS-Build #9898 test - 
 testNumVersionsReportedCorrect [Jenkins].html


 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
 {code}
 FAILED:  
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Error Message:
 The map of version counts returned by DatanodeManager was not what it was 
 expected to be on iteration 237 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: The map of version counts returned by 
 DatanodeManager was not what it was expected to be on iteration 237 
 expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-05-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7471:
-
Description: 
From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :

{code}
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was 
expected to be on iteration 237 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager 
was not what it was expected to be on iteration 237 expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
{code}

  was:
From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
{code}
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was 
expected to be on iteration 237 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager 
was not what it was expected to be on iteration 237 expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
{code}


 TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
 -

 Key: HDFS-7471
 URL: https://issues.apache.org/jira/browse/HDFS-7471
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Binglin Chang
 Attachments: HDFS-7471.001.patch, PreCommit-HDFS-Build #9898 test - 
 testNumVersionsReportedCorrect [Jenkins].html


 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
 {code}
 FAILED:  
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Error Message:
 The map of version counts returned by DatanodeManager was not what it was 
 expected to be on iteration 237 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: The map of version counts returned by 
 DatanodeManager was not what it was expected to be on iteration 237 
 expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-04-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14501572#comment-14501572
 ] 

Ted Yu commented on HDFS-7471:
--

Looks like the test failure is gone.

Planning to resolve this JIRA.

 TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
 -

 Key: HDFS-7471
 URL: https://issues.apache.org/jira/browse/HDFS-7471
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Binglin Chang
 Attachments: HDFS-7471.001.patch, PreCommit-HDFS-Build #9898 test - 
 testNumVersionsReportedCorrect [Jenkins].html


 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
 {code}
 FAILED:  
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Error Message:
 The map of version counts returned by DatanodeManager was not what it was 
 expected to be on iteration 237 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: The map of version counts returned by 
 DatanodeManager was not what it was expected to be on iteration 237 
 expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6946) TestBalancerWithSaslDataTransfer fails in trunk

2015-03-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6946:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

 TestBalancerWithSaslDataTransfer fails in trunk
 ---

 Key: HDFS-6946
 URL: https://issues.apache.org/jira/browse/HDFS-6946
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Assignee: Stephen Chu
Priority: Minor
 Attachments: HDFS-6946.1.patch, testBalancer0Integrity-failure.log


 From build #1849 :
 {code}
 REGRESSION:  
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity
 Error Message:
 Cluster failed to reached expected values of totalSpace (current: 750, 
 expected: 750), or usedSpace (current: 140, expected: 150), in more than 
 4 msec.
 Stack Trace:
 java.util.concurrent.TimeoutException: Cluster failed to reached expected 
 values of totalSpace (current: 750, expected: 750), or usedSpace (current: 
 140, expected: 150), in more than 4 msec.
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForHeartBeat(TestBalancer.java:253)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:578)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:551)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:759)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity(TestBalancerWithSaslDataTransfer.java:34)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6037) TestIncrementalBlockReports#testReplaceReceivedBlock fails occasionally in trunk

2015-03-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-6037.
--
Resolution: Cannot Reproduce

 TestIncrementalBlockReports#testReplaceReceivedBlock fails occasionally in 
 trunk
 

 Key: HDFS-6037
 URL: https://issues.apache.org/jira/browse/HDFS-6037
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu

 From 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1688/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestIncrementalBlockReports/testReplaceReceivedBlock/
  :
 {code}
 datanodeProtocolClientSideTranslatorPB.blockReceivedAndDeleted(
 <any>,
 <any>,
 <any>
 );
 Wanted 1 time:
 - at 
 org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports.testReplaceReceivedBlock(TestIncrementalBlockReports.java:198)
 But was 2 times. Undesired invocation:
 - at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reportReceivedDeletedBlocks(BPServiceActor.java:303)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7261) storageMap is accessed without synchronization in DatanodeDescriptor#updateHeartbeatState()

2015-02-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7261:
-
Description: 
Here is the code:
{code}
  failedStorageInfos = new HashSet<DatanodeStorageInfo>(
  storageMap.values());
{code}

In other places, the lock on DatanodeDescriptor.storageMap is held:
{code}
synchronized (storageMap) {
  final Collection<DatanodeStorageInfo> storages = storageMap.values();
  return storages.toArray(new DatanodeStorageInfo[storages.size()]);
}
{code}

  was:
Here is the code:
{code}
  failedStorageInfos = new HashSet<DatanodeStorageInfo>(
  storageMap.values());
{code}
In other places, the lock on DatanodeDescriptor.storageMap is held:
{code}
synchronized (storageMap) {
  final Collection<DatanodeStorageInfo> storages = storageMap.values();
  return storages.toArray(new DatanodeStorageInfo[storages.size()]);
}
{code}


 storageMap is accessed without synchronization in 
 DatanodeDescriptor#updateHeartbeatState()
 ---

 Key: HDFS-7261
 URL: https://issues.apache.org/jira/browse/HDFS-7261
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 Here is the code:
 {code}
   failedStorageInfos = new HashSet<DatanodeStorageInfo>(
   storageMap.values());
 {code}
 In other places, the lock on DatanodeDescriptor.storageMap is held:
 {code}
 synchronized (storageMap) {
   final Collection<DatanodeStorageInfo> storages = storageMap.values();
   return storages.toArray(new DatanodeStorageInfo[storages.size()]);
 }
 {code}
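
A minimal sketch of the straightforward fix, copying the values under the same lock the other call sites hold; whether the extra synchronization is acceptable inside DatanodeDescriptor#updateHeartbeatState() is left open here:

{code}
// Sketch: snapshot the storages under the lock that guards storageMap elsewhere.
synchronized (storageMap) {
  failedStorageInfos =
      new HashSet<DatanodeStorageInfo>(storageMap.values());
}
{code}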



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14338741#comment-14338741
 ] 

Ted Yu commented on HDFS-7471:
--

[~kihwal]:
What's your opinion?

 TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
 -

 Key: HDFS-7471
 URL: https://issues.apache.org/jira/browse/HDFS-7471
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Binglin Chang
 Attachments: HDFS-7471.001.patch


 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
 {code}
 FAILED:  
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Error Message:
 The map of version counts returned by DatanodeManager was not what it was 
 expected to be on iteration 237 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: The map of version counts returned by 
 DatanodeManager was not what it was expected to be on iteration 237 
 expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7538) removedDst should be checked against null in the finally block of FSDirRenameOp#unprotectedRenameTo()

2015-02-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7538:
-
Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

 removedDst should be checked against null in the finally block of 
 FSDirRenameOp#unprotectedRenameTo()
 -

 Key: HDFS-7538
 URL: https://issues.apache.org/jira/browse/HDFS-7538
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: hdfs-7538-001.patch


 {code}
 if (removedDst != null) {
   undoRemoveDst = false;
 ...
   if (undoRemoveDst) {
 // Rename failed - restore dst
 if (dstParent.isDirectory() &&
 dstParent.asDirectory().isWithSnapshot()) {
   dstParent.asDirectory().undoRename4DstParent(removedDst,
 {code}
 If the first if check doesn't pass, removedDst would be null and 
 undoRemoveDst may be true.
 This combination would lead to NullPointerException in the finally block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7495) Remove updatePosition argument from DFSInputStream#getBlockAt()

2015-02-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334302#comment-14334302
 ] 

Ted Yu commented on HDFS-7495:
--

Thanks Colin for picking this up.

bq. this code isn't incorrect, it is just very, very tricky.

Agreed.

 Remove updatePosition argument from DFSInputStream#getBlockAt()
 ---

 Key: HDFS-7495
 URL: https://issues.apache.org/jira/browse/HDFS-7495
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ted Yu
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-7495.002.patch, hdfs-7495-001.patch


 There're two locks: one on DFSInputStream.this, one on 
 DFSInputStream.infoLock
 Normally the lock on DFSInputStream.this is obtained first, then the lock on 
 DFSInputStream.infoLock
 However, such order is not observed in DFSInputStream#getBlockAt() :
 {code}
 synchronized(infoLock) {
 ...
   if (updatePosition) {
 // synchronized not strictly needed, since we only get here
 // from synchronized caller methods
 synchronized(this) {
 {code}
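
For readers following the thread, a small generic sketch of the lock-ordering rule being discussed (not DFSInputStream code): as long as every path acquires the outer lock before the inner one, the inversion described above cannot arise.

{code}
// Sketch only: lockA stands in for DFSInputStream.this, lockB for infoLock.
// Both paths take lockA before lockB, so no lock-order cycle can form.
private final Object lockA = new Object();
private final Object lockB = new Object();

void readPath() {
  synchronized (lockA) {
    synchronized (lockB) {
      // read shared state
    }
  }
}

void updatePath() {
  synchronized (lockA) {   // never take lockB first
    synchronized (lockB) {
      // update shared state
    }
  }
}
{code}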



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2015-02-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14333026#comment-14333026
 ] 

Ted Yu commented on HDFS-7008:
--

+1

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch, HDFS-7008.2.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.
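
A sketch of the shape of fix under review, releasing the translator in a finally block; the response handling is a placeholder, and it is assumed close() can be called unconditionally once the translator has been constructed:

{code}
GenericRefreshProtocolClientSideTranslatorPB xlator =
    new GenericRefreshProtocolClientSideTranslatorPB(proxy);
try {
  // Refresh
  Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
  for (RefreshResponse resp : responses) {
    System.out.println(resp); // placeholder for the real response handling
  }
} finally {
  xlator.close(); // release the underlying RPC proxy
}
{code}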



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2015-02-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14332496#comment-14332496
 ] 

Ted Yu commented on HDFS-7008:
--

+1

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2015-02-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7495:
-
Resolution: Later
Status: Resolved  (was: Patch Available)

 Lock inversion in DFSInputStream#getBlockAt()
 -

 Key: HDFS-7495
 URL: https://issues.apache.org/jira/browse/HDFS-7495
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: hdfs-7495-001.patch


 There're two locks: one on DFSInputStream.this, one on 
 DFSInputStream.infoLock
 Normally the lock on DFSInputStream.this is obtained first, then the lock on 
 DFSInputStream.infoLock
 However, such order is not observed in DFSInputStream#getBlockAt() :
 {code}
 synchronized(infoLock) {
 ...
   if (updatePosition) {
 // synchronized not strictly needed, since we only get here
 // from synchronized caller methods
 synchronized(this) {
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6081) TestRetryCacheWithHA#testCreateSymlink occasionally fails in trunk

2015-02-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-6081.
--
Resolution: Cannot Reproduce

 TestRetryCacheWithHA#testCreateSymlink occasionally fails in trunk
 --

 Key: HDFS-6081
 URL: https://issues.apache.org/jira/browse/HDFS-6081
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu

 From 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1696/testReport/junit/org.apache.hadoop.hdfs.server.namenode.ha/TestRetryCacheWithHA/testCreateSymlink/
  :
 {code}
 2014-03-09 13:18:47,515 WARN  security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1600)) - PriviledgedActionException 
 as:jenkins (auth:SIMPLE) cause:java.io.IOException: failed to create link 
 /testlink either because the filename is invalid or the file exists
 2014-03-09 13:18:47,515 INFO  ipc.Server (Server.java:run(2093)) - IPC Server 
 handler 0 on 39303, call 
 org.apache.hadoop.hdfs.protocol.ClientProtocol.createSymlink from 
 127.0.0.1:32909 Call#682 Retry#1: error: java.io.IOException: failed to 
 create link /testlink either because the filename is invalid or the file 
 exists
 java.io.IOException: failed to create link /testlink either because the 
 filename is invalid or the file exists
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlinkInt(FSNamesystem.java:2053)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlink(FSNamesystem.java:2023)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createSymlink(NameNodeRpcServer.java:965)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createSymlink(ClientNamenodeProtocolServerSideTranslatorPB.java:844)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2071)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2067)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1597)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2065)
 2014-03-09 13:18:47,522 INFO  blockmanagement.BlockManager 
 (BlockManager.java:processMisReplicatesAsync(2475)) - Total number of blocks  
   = 1
 2014-03-09 13:18:47,523 INFO  blockmanagement.BlockManager 
 (BlockManager.java:processMisReplicatesAsync(2476)) - Number of invalid 
 blocks  = 0
 2014-03-09 13:18:47,523 INFO  blockmanagement.BlockManager 
 (BlockManager.java:processMisReplicatesAsync(2477)) - Number of 
 under-replicated blocks = 0
 2014-03-09 13:18:47,523 INFO  ha.TestRetryCacheWithHA 
 (TestRetryCacheWithHA.java:run(1162)) - Got Exception while calling 
 createSymlink
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): failed to create 
 link /testlink either because the filename is invalid or the file exists
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlinkInt(FSNamesystem.java:2053)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlink(FSNamesystem.java:2023)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createSymlink(NameNodeRpcServer.java:965)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createSymlink(ClientNamenodeProtocolServerSideTranslatorPB.java:844)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2071)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2067)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1597)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2065)
   at org.apache.hadoop.ipc.Client.call(Client.java:1409)
   at org.apache.hadoop.ipc.Client.call(Client.java:1362)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
   at $Proxy17.createSymlink(Unknown Source)
   at 
 

[jira] [Resolved] (HDFS-6177) TestHttpFSServer fails occasionally in trunk

2015-02-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-6177.
--
Resolution: Cannot Reproduce

 TestHttpFSServer fails occasionally in trunk
 

 Key: HDFS-6177
 URL: https://issues.apache.org/jira/browse/HDFS-6177
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor

 From https://builds.apache.org/job/Hadoop-hdfs-trunk/1716/consoleFull :
 {code}
 Running org.apache.hadoop.fs.http.server.TestHttpFSServer
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.424 sec  
 FAILURE! - in org.apache.hadoop.fs.http.server.TestHttpFSServer
 testDelegationTokenOperations(org.apache.hadoop.fs.http.server.TestHttpFSServer)
   Time elapsed: 0.559 sec   FAILURE!
  java.lang.AssertionError: expected:<401> but was:<403>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.fs.http.server.TestHttpFSServer.testDelegationTokenOperations(TestHttpFSServer.java:352)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6501) TestCrcCorruption#testCorruptionDuringWrt sometimes fails in trunk

2015-02-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-6501.
--
Resolution: Cannot Reproduce

 TestCrcCorruption#testCorruptionDuringWrt sometimes fails in trunk
 --

 Key: HDFS-6501
 URL: https://issues.apache.org/jira/browse/HDFS-6501
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor

 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1767/ :
 {code}
 REGRESSION:  org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
 Error Message:
 test timed out after 5 milliseconds
 Stack Trace:
 java.lang.Exception: test timed out after 5 milliseconds
 at java.lang.Object.wait(Native Method)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107)
 at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
 at 
 org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98)
 at 
 org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6726) TestNamenodeCapacityReport fails intermittently

2015-02-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-6726.
--
Resolution: Cannot Reproduce

 TestNamenodeCapacityReport fails intermittently
 ---

 Key: HDFS-6726
 URL: https://issues.apache.org/jira/browse/HDFS-6726
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor

 From 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1812/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestNamenodeCapacityReport/testXceiverCount/
  :
 {code}
 java.io.IOException: Unable to close file because the last block does not 
 have enough number of replicas.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2141)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2109)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:281)
 {code}
 There were multiple occurrences of 'Broken pipe', 'Connection reset by peer' 
 and 'Premature EOF from inputStream' exceptions in test output



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7102) Null dereference in PacketReceiver#receiveNextPacket()

2015-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-7102.
--
Resolution: Later

 Null dereference in PacketReceiver#receiveNextPacket()
 --

 Key: HDFS-7102
 URL: https://issues.apache.org/jira/browse/HDFS-7102
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 {code}
  public void receiveNextPacket(ReadableByteChannel in) throws IOException {
    doRead(in, null);
 {code}
 doRead() would then pass that null as the second parameter to the call at line 134:
 {code}
 doReadFully(ch, in, curPacketBuf);
 {code}
 which dereferences it.
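
 For context, a defensive sketch (hypothetical code, not the actual PacketReceiver implementation) of how a dual-source read helper can make the null handling explicit, so neither calling overload ends up dereferencing a null argument:
{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

final class PacketReadSketch {
  // Hypothetical helper: exactly one of 'ch' or 'in' is expected to be non-null.
  // Assumes a heap (non-direct) buffer backed by an accessible array.
  static void readFullyFromEither(ReadableByteChannel ch, InputStream in,
      ByteBuffer buf) throws IOException {
    if (ch != null) {
      // Channel path: keep reading until the buffer is full.
      while (buf.remaining() > 0) {
        if (ch.read(buf) < 0) {
          throw new EOFException("Premature EOF while reading packet");
        }
      }
    } else if (in != null) {
      // Stream path: read into the buffer's backing array, then advance position.
      int off = buf.arrayOffset() + buf.position();
      int len = buf.remaining();
      int done = 0;
      while (done < len) {
        int n = in.read(buf.array(), off + done, len - done);
        if (n < 0) {
          throw new EOFException("Premature EOF while reading packet");
        }
        done += n;
      }
      buf.position(buf.limit());
    } else {
      throw new IllegalArgumentException("either a channel or a stream must be provided");
    }
  }
}
{code}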



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6946) TestBalancerWithSaslDataTransfer fails in trunk

2015-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317070#comment-14317070
 ] 

Ted Yu commented on HDFS-6946:
--

This test hasn't failed in recent builds.

 TestBalancerWithSaslDataTransfer fails in trunk
 ---

 Key: HDFS-6946
 URL: https://issues.apache.org/jira/browse/HDFS-6946
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Assignee: Stephen Chu
Priority: Minor
 Attachments: HDFS-6946.1.patch, testBalancer0Integrity-failure.log


 From build #1849 :
 {code}
 REGRESSION:  
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity
 Error Message:
 Cluster failed to reached expected values of totalSpace (current: 750, 
 expected: 750), or usedSpace (current: 140, expected: 150), in more than 
 4 msec.
 Stack Trace:
 java.util.concurrent.TimeoutException: Cluster failed to reached expected 
 values of totalSpace (current: 750, expected: 750), or usedSpace (current: 
 140, expected: 150), in more than 4 msec.
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForHeartBeat(TestBalancer.java:253)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:578)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:551)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:759)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity(TestBalancerWithSaslDataTransfer.java:34)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7083) TestDecommission#testIncludeByRegistrationName sometimes fails

2015-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-7083.
--
Resolution: Cannot Reproduce

 TestDecommission#testIncludeByRegistrationName sometimes fails
 --

 Key: HDFS-7083
 URL: https://issues.apache.org/jira/browse/HDFS-7083
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor

 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1874/ :
 {code}
 REGRESSION:  
 org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
 Error Message:
 test timed out after 36 milliseconds
 Stack Trace:
 java.lang.Exception: test timed out after 36 milliseconds
 at java.lang.Thread.sleep(Native Method)
 at 
 org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2015-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317251#comment-14317251
 ] 

Ted Yu commented on HDFS-6133:
--

In DFSOutputStream.java:
{code}
// DFSOutputStream.java, line 1452:
    (targetPinnings == null ? false : targetPinnings[0]), targetPinnings);
{code}
Looks like the boolean parameter, pinning, is unnecessary: it is always derived from 
targetPinnings (targetPinnings[0] when targetPinnings is non-null, false otherwise), 
so the callee could compute it from targetPinnings directly.
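
As a rough illustration (hypothetical helper, not the actual DFSOutputStream/DataStreamer code), the flag could be derived wherever it is needed:
{code}
// Hypothetical helper: derive the pinning flag from targetPinnings instead of
// passing both the flag and the array.
static boolean isPinned(boolean[] targetPinnings) {
  // Mirrors: targetPinnings == null ? false : targetPinnings[0]
  return targetPinnings != null && targetPinnings.length > 0 && targetPinnings[0];
}
{code}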

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, datanode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Fix For: 2.7.0

 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, 
 HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, 
 HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, 
 HDFS-6133-9.patch, HDFS-6133.patch


 Currently, running Balancer will destroy Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files which have a specific path 
 prefix, like /hbase, then we can run Balancer without destroying 
 Regionserver's data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7756) DatanodeInfoWithStorage should be tagged Private

2015-02-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7756:
-
Attachment: hdfs-7756-002.patch

Alternative patch which restores the method signature of 
LocatedBlock#getLocations().

 DatanodeInfoWithStorage should be tagged Private
 

 Key: HDFS-7756
 URL: https://issues.apache.org/jira/browse/HDFS-7756
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: hdfs-7756-001.patch, hdfs-7756-002.patch


 This is related to HDFS-7647
 DatanodeInfoWithStorage was introduced in 
 org.apache.hadoop.hdfs.server.protocol package whereas its base class, 
 DatanodeInfo, is in org.apache.hadoop.hdfs.protocol
 DatanodeInfo is tagged @InterfaceAudience.Private
 DatanodeInfoWithStorage should have the same tag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7756) DatanodeInfoWithStorage should be tagged Private

2015-02-09 Thread Ted Yu (JIRA)
Ted Yu created HDFS-7756:


 Summary: DatanodeInfoWithStorage should be tagged Private
 Key: HDFS-7756
 URL: https://issues.apache.org/jira/browse/HDFS-7756
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu


This is related to HDFS-7647

DatanodeInfoWithStorage was introduced in 
org.apache.hadoop.hdfs.server.protocol package whereas its base class, 
DatanodeInfo, is in org.apache.hadoop.hdfs.protocol

DatanodeInfo is tagged @InterfaceAudience.Private
DatanodeInfoWithStorage should have the same tag.
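
A minimal sketch of the proposed change (constructors and accessors elided; only the annotation is new):
{code}
package org.apache.hadoop.hdfs.server.protocol;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

@InterfaceAudience.Private  // same audience tag as the DatanodeInfo base class
public class DatanodeInfoWithStorage extends DatanodeInfo {
  // existing fields, constructors and storage ID/type accessors unchanged
}
{code}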



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7756) DatanodeInfoWithStorage should be tagged Private

2015-02-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7756:
-
Status: Patch Available  (was: Open)

 DatanodeInfoWithStorage should be tagged Private
 

 Key: HDFS-7756
 URL: https://issues.apache.org/jira/browse/HDFS-7756
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: hdfs-7756-001.patch


 This is related to HDFS-7647
 DatanodeInfoWithStorage was introduced in 
 org.apache.hadoop.hdfs.server.protocol package whereas its base class, 
 DatanodeInfo, is in org.apache.hadoop.hdfs.protocol
 DatanodeInfo is tagged @InterfaceAudience.Private
 DatanodeInfoWithStorage should have the same tag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7647) DatanodeManager.sortLocatedBlocks sorts DatanodeInfos but not StorageIDs

2015-02-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312969#comment-14312969
 ] 

Ted Yu commented on HDFS-7647:
--

DatanodeInfoWithStorage doesn't have an audience annotation. Is it Private?

Can DatanodeInfoWithStorage reside in org.apache.hadoop.hdfs.protocol as 
DatanodeInfo does?

 DatanodeManager.sortLocatedBlocks sorts DatanodeInfos but not StorageIDs
 

 Key: HDFS-7647
 URL: https://issues.apache.org/jira/browse/HDFS-7647
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Milan Desai
Assignee: Milan Desai
 Fix For: 2.7.0

 Attachments: HDFS-7647-2.patch, HDFS-7647-3.patch, HDFS-7647-4.patch, 
 HDFS-7647-5.patch, HDFS-7647-6.patch, HDFS-7647-7.patch, HDFS-7647.patch


 DatanodeManager.sortLocatedBlocks() sorts the array of DatanodeInfos inside 
 each LocatedBlock, but does not touch the array of StorageIDs and 
 StorageTypes. As a result, the DatanodeInfos and StorageIDs/StorageTypes are 
 mismatched. The method is called by FSNamesystem.getBlockLocations(), so the 
 client will not know which StorageID/Type corresponds to which DatanodeInfo.  
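
 For illustration, a hedged sketch (hypothetical types, not the actual DatanodeManager code) of bundling each location with its storage ID/type so that sorting cannot break the association:
{code}
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical composite standing in for DatanodeInfoWithStorage: pairs a
// location with its storage ID and storage type.
final class LocationWithStorage {
  final String datanode;     // stands in for DatanodeInfo
  final String storageID;
  final String storageType;  // stands in for StorageType

  LocationWithStorage(String datanode, String storageID, String storageType) {
    this.datanode = datanode;
    this.storageID = storageID;
    this.storageType = storageType;
  }
}

final class SortLocationsSketch {
  // Sorting composite objects keeps each DatanodeInfo matched with its storage
  // ID/type, unlike sorting three parallel arrays independently.
  static void sortByDistance(LocationWithStorage[] locs,
                             Comparator<String> distanceOrder) {
    Arrays.sort(locs,
        Comparator.comparing((LocationWithStorage l) -> l.datanode, distanceOrder));
  }
}
{code}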



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7647) DatanodeManager.sortLocatedBlocks sorts DatanodeInfos but not StorageIDs

2015-02-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312984#comment-14312984
 ] 

Ted Yu commented on HDFS-7647:
--

Sure.
I am testing a patch - will log a JIRA momentarily.

 DatanodeManager.sortLocatedBlocks sorts DatanodeInfos but not StorageIDs
 

 Key: HDFS-7647
 URL: https://issues.apache.org/jira/browse/HDFS-7647
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Milan Desai
Assignee: Milan Desai
 Fix For: 2.7.0

 Attachments: HDFS-7647-2.patch, HDFS-7647-3.patch, HDFS-7647-4.patch, 
 HDFS-7647-5.patch, HDFS-7647-6.patch, HDFS-7647-7.patch, HDFS-7647.patch


 DatanodeManager.sortLocatedBlocks() sorts the array of DatanodeInfos inside 
 each LocatedBlock, but does not touch the array of StorageIDs and 
 StorageTypes. As a result, the DatanodeInfos and StorageIDs/StorageTypes are 
 mismatched. The method is called by FSNamesystem.getBlockLocations(), so the 
 client will not know which StorageID/Type corresponds to which DatanodeInfo.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7756) DatanodeInfoWithStorage should be tagged Private

2015-02-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7756:
-
Attachment: hdfs-7756-001.patch

Proposed patch.

In order not to break downstream projects which reorder DatanodeInfos (such as 
HBase), I was thinking of keeping the method signature for 
LocatedBlock#getLocations().

The caller of LocatedBlock#getLocations() can do an instanceof check on the first 
element to see if DatanodeInfoWithStorage is returned.
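
A rough caller-side sketch of that check (hypothetical variable names; assumes the post-HDFS-7647 behavior where the returned elements may be DatanodeInfoWithStorage instances):
{code}
// locatedBlock is a hypothetical LocatedBlock obtained from getBlockLocations().
DatanodeInfo[] locations = locatedBlock.getLocations();
if (locations.length > 0 && locations[0] instanceof DatanodeInfoWithStorage) {
  // Storage-aware callers can downcast; existing callers keep treating the
  // elements as plain DatanodeInfo since the method signature is unchanged.
  DatanodeInfoWithStorage first = (DatanodeInfoWithStorage) locations[0];
}
{code}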

Comment / suggestion is welcome.

 DatanodeInfoWithStorage should be tagged Private
 

 Key: HDFS-7756
 URL: https://issues.apache.org/jira/browse/HDFS-7756
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: hdfs-7756-001.patch


 This is related to HDFS-7647
 DatanodeInfoWithStorage was introduced in 
 org.apache.hadoop.hdfs.server.protocol package whereas its base class, 
 DatanodeInfo, is in org.apache.hadoop.hdfs.protocol
 DatanodeInfo is tagged @InterfaceAudience.Private
 DatanodeInfoWithStorage should have the same tag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

