[jira] [Commented] (HDFS-9412) getBlocks occupies FSLock and takes too long to complete

2015-12-07 Thread He Tianyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046417#comment-15046417
 ] 

He Tianyi commented on HDFS-9412:
-

[~andrew.wang] Perhaps switching to an unfair RWLock may cause other issues, 
since the machine running the NameNode does not necessarily have an SMP 
architecture. 

I think this is due to having many small blocks in the cluster: {{getBlocks}} 
is called by the Balancer and does not return until the blocks are exhausted or 
the total size is satisfied, and there are actually many threads doing the same 
thing ({{dfs.balancer.dispatcherThreads}}). 
Besides decreasing the number of threads, maybe we can also make this faster.

> getBlocks occupies FSLock and takes too long to complete
> 
>
> Key: HDFS-9412
> URL: https://issues.apache.org/jira/browse/HDFS-9412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Tianyi
>Assignee: He Tianyi
>
> {{getBlocks}} in {{NameNodeRpcServer}} acquires a read lock and then may take 
> a long time to complete (probably several seconds, if the number of blocks is 
> large). 
> During this period, other threads attempting to acquire the write lock will 
> wait. In an extreme case, RPC handlers are occupied by one reader thread 
> calling {{getBlocks}} and all other threads waiting for the write lock, so 
> the RPC server appears hung. Unfortunately, this tends to happen on heavily 
> loaded clusters, since read operations come and go fast (they do not need to 
> wait), leaving write operations waiting.
> It looks like we can optimize this the way the DN block report was optimized 
> in the past: split the operation into smaller sub-operations and let other 
> threads do their work between each sub-operation. The whole result is still 
> returned at once, though (one difference from the DN block report). 
> I am not sure whether this will work. Any better idea?
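
The split-into-sub-operations idea can be sketched as chunked iteration that 
drops and re-acquires the read lock between chunks, so writers can make 
progress. The class and method names below are illustrative stand-ins, not the 
actual NameNode code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch: iterate over blocks in chunks, releasing and re-acquiring the read
 * lock between chunks so write-lock waiters are not starved. Illustrative
 * stand-in only; the real FSNamesystem lock and block iteration differ.
 */
public class ChunkedGetBlocks {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);

    public List<Long> getBlocksChunked(List<Long> blocks, int chunkSize) {
        List<Long> result = new ArrayList<>();  // whole result still returned at once
        int i = 0;
        while (i < blocks.size()) {
            fsLock.readLock().lock();           // re-acquire for each sub-operation
            try {
                int end = Math.min(i + chunkSize, blocks.size());
                result.addAll(blocks.subList(i, end));
                i = end;
            } finally {
                fsLock.readLock().unlock();     // writers may run between chunks
            }
        }
        return result;
    }
}
```

The caveat, as noted above, is that state can change between chunks, which a 
real implementation would have to tolerate.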



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044978#comment-15044978
 ] 

Brahma Reddy Battula commented on HDFS-9513:


This issue was already reported in HDFS-6481, and it throws 
HadoopIllegalArgumentException instead of ArrayIndexOutOfBoundsException.

I think it would be good to provide a way to work around this problem for older 
clients rather than rejecting their requests outright, considering that the 
cluster size mentioned here (8000+) makes upgrading hard.
The patch provided might work as a workaround for older clients. Any thoughts, 
[~szetszwo]/[~arpitagarwal]?

[~Deng FEI], I have a few comments about the patch.

1. DatanodeDescriptor#getPerferedStorageInfo() might not be required; instead, 
one dummy storage (maybe the first storage of the DN) can be chosen.

Currently {{DatanodeManager#getDatanodeStorageInfos(..)}} is used by 3 calls: 
getAdditionalDatanode(), updatePipeline(), and commitBlockSynchronization(). 
All of these operations might hit the same problem with old clients. Since such 
clients do not understand storages, we need not worry about selecting a storage.
   a. getAdditionalDatanode() and updatePipeline() are called from clients 
directly, but only the updatePipeline() results are stored as-is. Even then, 
once the incremental block reports are received, these storages will be updated 
with proper details in the NameNode.
   b. commitBlockSynchronization() comes from datanodes, and even in this case 
it should not need this workaround, since it will have new targets.

Updated code could look like this:
{code}
if (old) {
  // Choose the first storage as a dummy to support writes from old clients,
  // as a workaround for HDFS-9513.
  // Later, when the block report comes, the actual storage ID will be
  // restored in the blocks map.
  storages[i] = dd.getStorageInfos()[0];
} else {
  storages[i] = dd.getStorageInfo(storageIDs[i]);
}
{code}
2. Also, maybe we can have a config flag for this workaround and use it only 
when required?

3. Can you post a patch for trunk and the latest branch-2.7, with HDFS-6481 
included? It can throw an exception when the config is disabled or the request 
is not from an old client.
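
Points 2 and 3 together could look roughly like the sketch below: a 
config-gated fallback that picks the DN's first storage for old clients and 
throws when the workaround is disabled. All names here are hypothetical, not 
the real patch:

```java
/**
 * Sketch of a config-gated old-client fallback (hypothetical names, not the
 * actual HDFS-9513 patch): an "old" client is detected by the missing storage
 * ID; the first DN storage is used as a dummy only when the flag allows it.
 */
public class LegacyStorageFallback {
    public static String chooseStorageId(String[] clientStorageIDs, int i,
                                         String[] dnStorageIds,
                                         boolean workaroundEnabled) {
        boolean oldClient = clientStorageIDs == null || i >= clientStorageIDs.length;
        if (!oldClient) {
            return clientStorageIDs[i];      // normal path: client-supplied ID
        }
        if (!workaroundEnabled) {
            // config disabled: reject the old client's request outright
            throw new IllegalArgumentException(
                "No storage ID from client and legacy workaround is disabled");
        }
        return dnStorageIds[0];              // dummy: first storage of the DN,
                                             // fixed up by the next block report
    }
}
```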

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
> Attachments: patch.HDFS-9513.20151207
>
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline recovery: 
> the NN needs the DNs' storage info to update the pipeline, and the storage 
> IDs are paired with the pipeline's DNs. However, HDFS has supported the 
> storage type feature only since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older versions 
> do not have storage IDs. Although protobuf serialization keeps the protocol 
> compatible, the client throws a remote exception wrapping 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at 

[jira] [Updated] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9515:
--
Attachment: HDFS-9515.001.patch

Rev01. This fixes the NPE in TestDFSZKFailoverController.

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9515.001.patch
>
>
> If the MiniDFSCluster constructor throws an exception, the cluster object is 
> not assigned, so shutdown() cannot be called on the object.
> I saw a recent Jenkins job where a binding error threw an exception, and 
> later the NPE from cluster.shutdown() hid the real cause of the test failure.
> HDFS-9333 has a patch that fixes the bind error.
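
The fix amounts to a null guard around shutdown() in the cleanup path; a 
minimal stand-alone sketch, where MiniCluster is a hypothetical stand-in for 
MiniDFSCluster:

```java
/**
 * Sketch: guard shutdown() with a null check so a constructor failure is
 * reported as-is instead of being masked by an NPE in the finally block.
 * MiniCluster is a stand-in, not the real MiniDFSCluster.
 */
public class SafeShutdown {
    static class MiniCluster {
        MiniCluster(boolean failBind) {
            if (failBind) throw new IllegalStateException("Problem binding to port");
        }
        void shutdown() { /* release resources */ }
    }

    public static String runTest(boolean failBind) {
        MiniCluster cluster = null;
        try {
            cluster = new MiniCluster(failBind);
            return "ok";
        } catch (IllegalStateException e) {
            return e.getMessage();       // the real cause survives
        } finally {
            if (cluster != null) {       // null guard: no NPE hides the cause
                cluster.shutdown();
            }
        }
    }
}
```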





[jira] [Commented] (HDFS-9486) libhdfs++ Fix valgrind failures when using more than 1 io_service worker thread.

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045022#comment-15045022
 ] 

James Clampffer commented on HDFS-9486:
---

That's odd considering how tiny this patch is.  The local tests build/run 
(without valgrind) in Docker.  I'll give the valgrind runs a shot in Docker 
today; I didn't last week because valgrind and some other tools I use aren't in 
the Docker image and it takes a while to download.

> libhdfs++ Fix valgrind failures when using more than 1 io_service worker 
> thread.
> 
>
> Key: HDFS-9486
> URL: https://issues.apache.org/jira/browse/HDFS-9486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9486-stacks-sanitized.txt, 
> HDFS-9486.HDFS-8707.000.patch, HDFS-9486.HDFS-8707.001.patch
>
>
> Valgrind catches an invalid read of size 8.  Setup: 4 io_service worker 
> threads, 64 threads doing open-read-close on a small file.
> Stack:
> ==8351== Invalid read of size 8
> ==8351==at 0x51F45C: 
> asio::detail::reactive_socket_recv_op asio::detail::read_op asio::stream_socket_service >, asio::mutable_buffers_1, 
> asio::detail::transfer_all_t, std::_Bind asio::stream_socket_service > >::*)(std::error_code const&, 
> unsigned long)> 
> (hdfs::RpcConnectionImpl asio::stream_socket_service > >*, std::_Placeholder<1>, 
> std::_Placeholder<2>)> > >::do_complete(asio::detail::task_io_service*, 
> asio::detail::task_io_service_operation*, std::error_code const&, unsigned 
> long) (functional:601)
> ==8351==by 0x508B10: hdfs::IoServiceImpl::Run() 
> (task_io_service_operation.hpp:37)
> ==8351==by 0x55BCBEF: ??? (in 
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
> ==8351==by 0x5A2D181: start_thread (pthread_create.c:312)
> ==8351==by 0x5D3D47C: clone (clone.S:111)
> ==8351==  Address 0x67e3eb0 is 0 bytes inside a block of size 216 free'd
> ==8351==at 0x4C2C2BC: operator delete(void*) (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==8351==by 0x51F7B2: 
> hdfs::RpcConnectionImpl asio::stream_socket_service > >::~RpcConnectionImpl() 
> (rpc_connection.h:32)
> ==8351==by 0x50C104: hdfs::FileSystemImpl::~FileSystemImpl() 
> (unique_ptr.h:67)
> ==8351==by 0x503A10: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (unique_ptr.h:67)
> ==8351==by 0x503B28: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (hdfs_cpp.cc:140)
> ==8351==by 0x503580: hdfs_internal::~hdfs_internal() (unique_ptr.h:67)
> ==8351==by 0x502FEE: hdfsDisconnect (hdfs.cc:127)
> ==8351==by 0x5010B7: main (threaded_stress_test.cc:74)
> ==8351== 
> pure virtual method called
> terminate called without an active exception





[jira] [Commented] (HDFS-9038) Reserved space is erroneously counted towards non-DFS used.

2015-12-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045002#comment-15045002
 ] 

Brahma Reddy Battula commented on HDFS-9038:


Ping [~cnauroth]/[~arpitagarwal]/[~vinayrpet] once again.

> Reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9038-002.patch, HDFS-9038-003.patch, 
> HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038.patch
>
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.
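
The accounting in question can be illustrated with a small sketch (the numbers 
and the exact formula are illustrative of the proposed behavior, i.e. the 
configured reservation is not reported as non-DFS used):

```java
/**
 * Illustration of DataNode volume accounting with reserved space excluded
 * from non-DFS used (per the behavior this issue proposes to restore).
 * Formula and method name are illustrative, not the actual DataNode code.
 */
public class VolumeAccounting {
    public static long nonDfsUsed(long capacity, long reserved,
                                  long dfsUsed, long remaining) {
        // Subtract the du.reserved amount so it does not count as non-DFS used
        long used = capacity - reserved - dfsUsed - remaining;
        return Math.max(used, 0);    // clamp: accounting should never go negative
    }
}
```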





[jira] [Commented] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045033#comment-15045033
 ] 

Wei-Chiu Chuang commented on HDFS-9515:
---

https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/660/testReport/org.apache.hadoop.hdfs.tools/TestDFSZKFailoverController/testManualFailoverWithDFSHAAdmin_2/

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> If the MiniDFSCluster constructor throws an exception, the cluster object is 
> not assigned, so shutdown() cannot be called on the object.
> I saw a recent Jenkins job where a binding error threw an exception, and 
> later the NPE from cluster.shutdown() hid the real cause of the test failure.
> HDFS-9333 has a patch that fixes the bind error.





[jira] [Created] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9515:
-

 Summary: NPE in TestDFSZKFailoverController due to binding 
exception in MiniDFSCluster.initMiniDFSCluster()
 Key: HDFS-9515
 URL: https://issues.apache.org/jira/browse/HDFS-9515
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


If the MiniDFSCluster constructor throws an exception, the cluster object is 
not assigned, so shutdown() cannot be called on the object.

I saw a recent Jenkins job where a binding error threw an exception, and later 
the NPE from cluster.shutdown() hid the real cause of the test failure.

HDFS-9333 has a patch that fixes the bind error.





[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045163#comment-15045163
 ] 

Wei-Chiu Chuang commented on HDFS-9514:
---

The patch only modifies TestDistributedFileSystem, and therefore the test 
failures appear unrelated.

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9514.001.patch
>
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details
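
The pattern that helpers like {{GenericTestUtils}} encourage can be sketched as 
follows: when an assertion about an exception fails, chain the original 
throwable as the cause so its details are not swallowed. This is a simplified 
stand-in, not the actual Hadoop helper:

```java
/**
 * Sketch: assert an exception's message contains expected text, and on
 * mismatch attach the original throwable as the cause so its stack trace
 * (the useful detail) survives in the test report. Simplified stand-in.
 */
public class ExceptionAsserts {
    public static void assertExceptionContains(String expected, Throwable t) {
        String msg = String.valueOf(t.getMessage());
        if (!msg.contains(expected)) {
            // Chain t as the cause: the report shows what actually happened
            throw new AssertionError(
                "Expected text '" + expected + "' but got: " + msg, t);
        }
    }
}
```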





[jira] [Created] (HDFS-9516) truncate file fails with data dirs on multiple disks

2015-12-07 Thread Bogdan Raducanu (JIRA)
Bogdan Raducanu created HDFS-9516:
-

 Summary: truncate file fails with data dirs on multiple disks
 Key: HDFS-9516
 URL: https://issues.apache.org/jira/browse/HDFS-9516
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Bogdan Raducanu


FileSystem.truncate returns false (no exception), but the file is never closed 
and is not writable afterwards.

It seems to be caused by copy-on-truncate, which is used because the system is 
in the upgrade state. In this case a rename between devices is attempted.
See the attached log and repro code.
This probably also affects truncating a snapshotted file, where 
copy-on-truncate is also used.
Possibly it affects not only truncate but any block recovery.


I think the problem is in updateReplicaUnderRecovery:
{code}
ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
    newBlockId, recoveryId, rur.getVolume(), blockFile.getParentFile(),
    newlength);
{code}
blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed to 
choose any volume, so rur.getVolume() is not where the block is located.
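
One possible direction, sketched with plain path strings instead of the real 
FsVolume API (names hypothetical): resolve the volume from where the block file 
actually lives, rather than trusting the replica's cached volume:

```java
import java.io.File;
import java.util.List;

/**
 * Sketch: find the volume root that actually contains the block file,
 * instead of using rur.getVolume(). Volume roots are plain paths here;
 * the real DataNode FsVolume API differs.
 */
public class VolumeResolver {
    public static String volumeContaining(File blockFile, List<String> volumeRoots) {
        String path = blockFile.getAbsolutePath();
        for (String root : volumeRoots) {
            if (path.startsWith(new File(root).getAbsolutePath() + File.separator)) {
                return root;             // the volume that holds the file
            }
        }
        return null;                     // block file not under any known volume
    }
}
```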
 






[jira] [Updated] (HDFS-9516) truncate file fails with data dirs on multiple disks

2015-12-07 Thread Bogdan Raducanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bogdan Raducanu updated HDFS-9516:
--
Attachment: truncate.dn.log

> truncate file fails with data dirs on multiple disks
> 
>
> Key: HDFS-9516
> URL: https://issues.apache.org/jira/browse/HDFS-9516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
> Attachments: Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception), but the file is never 
> closed and is not writable afterwards.
> It seems to be caused by copy-on-truncate, which is used because the system 
> is in the upgrade state. In this case a rename between devices is attempted.
> See the attached log and repro code.
> This probably also affects truncating a snapshotted file, where 
> copy-on-truncate is also used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery:
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>     newBlockId, recoveryId, rur.getVolume(), blockFile.getParentFile(),
>     newlength);
> {code}
> blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed 
> to choose any volume, so rur.getVolume() is not where the block is located.
>  





[jira] [Updated] (HDFS-9516) truncate file fails with data dirs on multiple disks

2015-12-07 Thread Bogdan Raducanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bogdan Raducanu updated HDFS-9516:
--
Attachment: Main.java

> truncate file fails with data dirs on multiple disks
> 
>
> Key: HDFS-9516
> URL: https://issues.apache.org/jira/browse/HDFS-9516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
> Attachments: Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception), but the file is never 
> closed and is not writable afterwards.
> It seems to be caused by copy-on-truncate, which is used because the system 
> is in the upgrade state. In this case a rename between devices is attempted.
> See the attached log and repro code.
> This probably also affects truncating a snapshotted file, where 
> copy-on-truncate is also used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery:
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>     newBlockId, recoveryId, rur.getVolume(), blockFile.getParentFile(),
>     newlength);
> {code}
> blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed 
> to choose any volume, so rur.getVolume() is not where the block is located.
>  





[jira] [Commented] (HDFS-9472) concat() API does not resolve the .reserved path

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045291#comment-15045291
 ] 

Hadoop QA commented on HDFS-9472:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 160m 55s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 20s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 316m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | 

[jira] [Updated] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9515:
--
Status: Patch Available  (was: Open)

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9515.001.patch, HDFS-9515.002.patch
>
>
> If the MiniDFSCluster constructor throws an exception, the cluster object is 
> not assigned, so shutdown() cannot be called on the object.
> I saw a recent Jenkins job where a binding error threw an exception, and 
> later the NPE from cluster.shutdown() hid the real cause of the test failure.
> HDFS-9333 has a patch that fixes the bind error.





[jira] [Updated] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9514:
--
Attachment: HDFS-9514.002.patch

Rev02: fixed test logic.

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9514.001.patch, HDFS-9514.002.patch
>
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details





[jira] [Commented] (HDFS-9417) Clean up the RAT warnings in the HDFS-8707 branch.

2015-12-07 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045319#comment-15045319
 ] 

Bob Hansen commented on HDFS-9417:
--

Passes ASF license checks.  If anyone feels like reviewing, let's get it 
checked in before more rogue code sneaks past the ASF check.

> Clean up the RAT warnings in the HDFS-8707 branch.
> --
>
> Key: HDFS-9417
> URL: https://issues.apache.org/jira/browse/HDFS-9417
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Bob Hansen
> Attachments: HDFS-9417.HDFS-8707.000.patch, 
> HDFS-9417.HDFS-8707.001.patch, HDFS-9417.HDFS-8707.002.patch, 
> HDFS-9417.HDFS-8707.002.patch, HDFS-9417.HDFS-8707.003.patch, 
> HDFS-9417.HDFS-8707.004.patch
>
>
> Recent Jenkins builds reveal that the pom.xml in the HDFS-8707 branch does 
> not currently exclude third-party files. The RAT plugin generates warnings 
> because these files do not have Apache headers.
> The warnings need to be suppressed.





[jira] [Commented] (HDFS-8901) Use ByteBuffer in striping positional read

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045171#comment-15045171
 ] 

Hadoop QA commented on HDFS-8901:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project (total was 158, now 150). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 21s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 266m 19s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 4m 41s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 429m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestParallelShortCircuitLegacyRead |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | 

[jira] [Commented] (HDFS-9347) Invariant assumption in TestQuorumJournalManager.shutdown() is wrong

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045138#comment-15045138
 ] 

Wei-Chiu Chuang commented on HDFS-9347:
---

Hi [~walter.k.su], thank you very much for your last review.
How do you feel about this version of the patch?

Thank you again.

> Invariant assumption in TestQuorumJournalManager.shutdown() is wrong
> 
>
> Key: HDFS-9347
> URL: https://issues.apache.org/jira/browse/HDFS-9347
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9347.001.patch, HDFS-9347.002.patch
>
>
> The code
> {code:title=TestQuorumJournalManager.java|borderStyle=solid}
> @After
>   public void shutdown() throws IOException {
> IOUtils.cleanup(LOG, toClose.toArray(new Closeable[0]));
> 
> // Should not leak clients between tests -- this can cause flaky tests.
> // (See HDFS-4643)
> GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
> 
> if (cluster != null) {
>   cluster.shutdown();
> }
>   }
> {code}
> implicitly assumes that when the call returns from IOUtils.cleanup() (which 
> calls close() on the QuorumJournalManager object), all IPC client connection 
> threads are terminated. However, no internal implementation enforces this 
> assumption. Even if the bug reported in HADOOP-12532 is fixed, the internal 
> code still only ensures that IPC connections are terminated, not the threads.
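Since close() does not guarantee that the threads themselves have exited, the teardown could poll briefly before failing. A minimal, self-contained sketch of that polling idea; the helper name and timeouts here are hypothetical, and the real {{GenericTestUtils.assertNoThreadsMatching}} asserts immediately rather than waiting:

```java
import java.util.regex.Pattern;

/** Hypothetical helper sketch: poll the live thread set until no thread
 *  name matches the given pattern, or give up after timeoutMs. */
class ThreadWaiter {
    static boolean awaitNoThreadsMatching(String regex, long timeoutMs) {
        Pattern p = Pattern.compile(regex);
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            boolean found = false;
            // Enumerate all live threads and look for a matching name.
            for (Thread t : Thread.getAllStackTraces().keySet()) {
                if (t.isAlive() && p.matcher(t.getName()).matches()) {
                    found = true;
                    break;
                }
            }
            if (!found) {
                return true;          // no matching thread remains
            }
            try {
                Thread.sleep(50);     // back off before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                 // timed out; a matching thread lives on
    }
}
```

A wait like this would tolerate connection threads that are already shutting down but have not yet exited, which is exactly the race the assertion currently trips over.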



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9489) Enable CI infrastructure to use libhdfs++ hdfsRead

2015-12-07 Thread Stephen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen reassigned HDFS-9489:
-

Assignee: Stephen

> Enable CI infrastructure to use libhdfs++ hdfsRead
> -
>
> Key: HDFS-9489
> URL: https://issues.apache.org/jira/browse/HDFS-9489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Stephen
>
> CI tests are built against a shim layer that delegates work to libhdfs or 
> libhdfs++.  Now that stateful reads are available to libhdfs++ the CI system 
> should delegate hdfsRead to libhdfs++.





[jira] [Commented] (HDFS-9486) libhdfs++ Fix valgrind failures when using more than 1 io_service worker thread.

2015-12-07 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045317#comment-15045317
 ] 

Bob Hansen commented on HDFS-9486:
--

I don't think it is caused by this patch, but I was wondering if we were 
expecting this patch to clean it up.  

Your writeup on this patch implies that it's a fix for a different failure 
case, so this is probably an improvement. If you expected it to clean up the CI 
failure, then we have more work to do.

If the former, +1.  If the latter, let's fix that.

> libhdfs++ Fix valgrind failures when using more than 1 io_service worker 
> thread.
> 
>
> Key: HDFS-9486
> URL: https://issues.apache.org/jira/browse/HDFS-9486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9486-stacks-sanitized.txt, 
> HDFS-9486.HDFS-8707.000.patch, HDFS-9486.HDFS-8707.001.patch
>
>
> Valgrind catches an invalid read of size 8.  Setup: 4 io_service worker 
> threads, 64 threads doing open-read-close on a small file.
> Stack:
> ==8351== Invalid read of size 8
> ==8351==at 0x51F45C: 
> asio::detail::reactive_socket_recv_op asio::detail::read_op asio::stream_socket_service >, asio::mutable_buffers_1, 
> asio::detail::transfer_all_t, std::_Bind asio::stream_socket_service > >::*)(std::error_code const&, 
> unsigned long)> 
> (hdfs::RpcConnectionImpl asio::stream_socket_service > >*, std::_Placeholder<1>, 
> std::_Placeholder<2>)> > >::do_complete(asio::detail::task_io_service*, 
> asio::detail::task_io_service_operation*, std::error_code const&, unsigned 
> long) (functional:601)
> ==8351==by 0x508B10: hdfs::IoServiceImpl::Run() 
> (task_io_service_operation.hpp:37)
> ==8351==by 0x55BCBEF: ??? (in 
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
> ==8351==by 0x5A2D181: start_thread (pthread_create.c:312)
> ==8351==by 0x5D3D47C: clone (clone.S:111)
> ==8351==  Address 0x67e3eb0 is 0 bytes inside a block of size 216 free'd
> ==8351==at 0x4C2C2BC: operator delete(void*) (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==8351==by 0x51F7B2: 
> hdfs::RpcConnectionImpl asio::stream_socket_service > >::~RpcConnectionImpl() 
> (rpc_connection.h:32)
> ==8351==by 0x50C104: hdfs::FileSystemImpl::~FileSystemImpl() 
> (unique_ptr.h:67)
> ==8351==by 0x503A10: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (unique_ptr.h:67)
> ==8351==by 0x503B28: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (hdfs_cpp.cc:140)
> ==8351==by 0x503580: hdfs_internal::~hdfs_internal() (unique_ptr.h:67)
> ==8351==by 0x502FEE: hdfsDisconnect (hdfs.cc:127)
> ==8351==by 0x5010B7: main (threaded_stress_test.cc:74)
> ==8351== 
> pure virtual method called
> terminate called without an active exception





[jira] [Updated] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9515:
--
Attachment: HDFS-9515.002.patch

Rev02: Fixed all potential NPEs when tearing down test clusters.

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9515.001.patch, HDFS-9515.002.patch
>
>
> If the MiniDFSCluster constructor throws an exception, the cluster object is 
> not assigned, so shutdown() cannot be called on the object.
> In a recent Jenkins job, a binding error threw an exception, and later the 
> NPE from cluster.shutdown() hid the real cause of the test failure.
> HDFS-9333 has a patch that fixes the bind error.
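The teardown pattern the patch presumably applies can be sketched without any Hadoop types: keep the cluster field null until construction succeeds, and null-check it in the teardown so a constructor failure surfaces instead of an NPE. MiniCluster below is a stand-in for MiniDFSCluster, and the bind-failure message is illustrative:

```java
/** Sketch of an NPE-safe test setup/teardown pair. */
class TeardownSketch {
    interface MiniCluster { void shutdown(); }

    private MiniCluster cluster;  // stays null if construction throws

    void setUp(boolean failConstruction) {
        if (failConstruction) {
            // Stand-in for MiniDFSCluster construction failing, e.g. on a
            // port-binding error (hypothetical message).
            throw new IllegalStateException("Problem binding to /0.0.0.0:8020");
        }
        cluster = () -> { };      // trivial cluster that shuts down cleanly
    }

    void tearDown() {
        if (cluster != null) {    // guard: an NPE here would hide the real failure
            cluster.shutdown();
        }
    }
}
```

With the guard in place, the original construction exception is what the test report shows, rather than a secondary NullPointerException from teardown.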





[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045164#comment-15045164
 ] 

Wei-Chiu Chuang commented on HDFS-9514:
---

Sorry, there are a few TestDistributedFileSystem failures. I'll work out an 
update.

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9514.001.patch
>
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised; the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details
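The core of not losing the details is to attach the unexpected exception as the cause of the assertion failure, the way {{GenericTestUtils.assertExceptionContains}} reports it. The helper below is an illustrative sketch, not the Hadoop implementation:

```java
/** Sketch: fail with the original throwable attached as the cause, so the
 *  report shows its full stack trace instead of swallowing it. */
class ExceptionCheck {
    static void assertExceptionContains(String expected, Throwable t) {
        String msg = t.getMessage();
        if (msg == null || !msg.contains(expected)) {
            // AssertionError(String, Throwable) preserves the real failure.
            throw new AssertionError(
                "Expected text '" + expected + "' but got: " + t, t);
        }
    }
}
```

A test that catches a timeout this way would surface whatever actually went wrong when the wrong exception type arrives.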





[jira] [Assigned] (HDFS-9285) testTruncateWithDataNodesRestartImmediately occasionally fails

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-9285:
-

Assignee: Wei-Chiu Chuang

> testTruncateWithDataNodesRestartImmediately occasionally fails
> --
>
> Key: HDFS-9285
> URL: https://issues.apache.org/jira/browse/HDFS-9285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2462/testReport/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestartImmediately/
> Note that this is similar, but appears to be a different failure than 
> HDFS-8729.
> Error Message
> inode should complete in ~3 ms.
> Expected: is 
>  but: was 
> Stacktrace
> java.lang.AssertionError: inode should complete in ~3 ms.
> Expected: is 
>  but: was 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1176)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1171)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:798)
> Log excerpt:
> 2015-10-22 06:34:47,281 [IPC Server handler 8 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,382 [IPC Server handler 9 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,484 [IPC Server handler 0 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,585 [IPC Server handler 1 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,689 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1889)) - Shutting down the Mini HDFS Cluster
> 2015-10-22 06:34:47,690 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1935)) - Shutting down DataNode 2
> 2015-10-22 06:34:47,690 [main] WARN  datanode.DirectoryScanner 
> (DirectoryScanner.java:shutdown(529)) - DirectoryScanner: shutdown has been 
> called
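The "inode should complete in ~3 ms" assertion comes from a checkBlockRecovery-style wait: poll a condition a bounded number of times instead of asserting after one fixed sleep. A self-contained sketch of that loop; the attempt count and sleep interval are illustrative, not the test's real values:

```java
import java.util.function.BooleanSupplier;

/** Sketch of a bounded poll-and-retry wait for an asynchronous condition. */
class RecoveryWait {
    static boolean waitFor(BooleanSupplier condition, int attempts, long sleepMs) {
        for (int i = 0; i < attempts; i++) {
            if (condition.getAsBoolean()) {
                return true;            // e.g. the inode completed
            }
            try {
                Thread.sleep(sleepMs);  // give block recovery more time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;  // corresponds to the "should complete in ~N ms" failure
    }
}
```

Flakiness of this kind usually means the attempt budget is too small for a loaded Jenkins host, so a fix along [~walter.k.su]'s analysis would either raise the budget or remove the timing dependence.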





[jira] [Commented] (HDFS-9285) testTruncateWithDataNodesRestartImmediately occasionally fails

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045194#comment-15045194
 ] 

Wei-Chiu Chuang commented on HDFS-9285:
---

I am assigning this JIRA to myself and will follow [~walter.k.su]'s analysis to 
work out a fix. Thanks!

> testTruncateWithDataNodesRestartImmediately occasionally fails
> --
>
> Key: HDFS-9285
> URL: https://issues.apache.org/jira/browse/HDFS-9285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2462/testReport/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestartImmediately/
> Note that this is similar, but appears to be a different failure than 
> HDFS-8729.
> Error Message
> inode should complete in ~3 ms.
> Expected: is 
>  but: was 
> Stacktrace
> java.lang.AssertionError: inode should complete in ~3 ms.
> Expected: is 
>  but: was 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1176)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1171)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:798)
> Log excerpt:
> 2015-10-22 06:34:47,281 [IPC Server handler 8 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,382 [IPC Server handler 9 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,484 [IPC Server handler 0 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,585 [IPC Server handler 1 on 8020] INFO  
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7358)) - allowed=true   
>ugi=jenkins (auth:SIMPLE)   ip=/127.0.0.1   cmd=open
> src=/test/testTruncateWithDataNodesRestartImmediately   dst=null
> perm=null   proto=rpc
> 2015-10-22 06:34:47,689 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1889)) - Shutting down the Mini HDFS Cluster
> 2015-10-22 06:34:47,690 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNodes(1935)) - Shutting down DataNode 2
> 2015-10-22 06:34:47,690 [main] WARN  datanode.DirectoryScanner 
> (DirectoryScanner.java:shutdown(529)) - DirectoryScanner: shutdown has been 
> called





[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045148#comment-15045148
 ] 

Hadoop QA commented on HDFS-9514:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 57s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 179m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestDistributedFileSystem |
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.hdfs.server.mover.TestStorageMover |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | 

[jira] [Updated] (HDFS-9373) Show friendly information to user when client succeeds the writing with some failed streamers

2015-12-07 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-9373:

Attachment: HDFS-9373-002.patch

Thanks Zhe and Daniel for the review. I've just updated the patch against the 
newest trunk code.
The failed block ID can be obtained from other log information, so we just 
need to tell the user which block groups have corrupt blocks.


> Show friendly information to user when client succeeds the writing with some 
> failed streamers
> -
>
> Key: HDFS-9373
> URL: https://issues.apache.org/jira/browse/HDFS-9373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client 
> may still succeed in writing the data. But several exceptions are thrown to 
> the user, who has to check the reasons. The friendlier way is to simply 
> inform the user that some streamers failed while writing a block group. It's 
> not necessary to show the details of the exceptions, because a small number 
> of streamer failures is not vital to the client's write.
> When only DATA_NUM streamers succeed, the block group is at high risk, 
> because the corruption of any block will cause all six blocks' data to be 
> lost. We should give the user an obvious warning when this occurs. 
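The reporting policy described above can be sketched as a simple tiering on the failed-streamer count. DATA_NUM/PARITY_NUM here follow the RS(6,3) layout the description implies; the class name and message texts are illustrative, not the patch's actual strings:

```java
/** Sketch of tiered reporting for failed striped-write streamers. */
class StreamerReport {
    static final int DATA_NUM = 6;    // data streamers per block group
    static final int PARITY_NUM = 3;  // parity streamers per block group

    static String summarize(int failedStreamers) {
        if (failedStreamers == 0) {
            return "OK";
        }
        if (failedStreamers > PARITY_NUM) {
            return "FAILED: block group unwritable";  // write cannot succeed
        }
        if (failedStreamers == PARITY_NUM) {
            // Only DATA_NUM streamers left: any further loss corrupts data.
            return "WARNING: block group written with no redundancy";
        }
        return "INFO: " + failedStreamers + " streamer(s) failed for block group";
    }
}
```

The key design point is the middle tier: a success with zero remaining parity slack deserves a louder message than an ordinary partial failure.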





[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2015-12-07 Thread wqijun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044610#comment-15044610
 ] 

wqijun commented on HDFS-7337:
--

Hi Kai,

We would like to add a new RS coder to the EC framework, such as an RS(8,4) 
coder. More importantly, we want to leverage a C library to accelerate this 
coder, not only for Intel chips but also for IBM POWER chips. We are not sure 
which branch our work should be based on, 7337 or 7285. Should we add a new 
JIRA branch? In addition, I have downloaded the latest Hadoop trunk and found 
that there are native ISA-L acceleration files but no coder class for ISA-L. 
Are you planning to add these coder classes to Hadoop trunk? Thanks a lot!

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec-v2.pdf, PluggableErasureCodec-v3.pdf, 
> PluggableErasureCodec.pdf
>
>
> According to HDFS-7285 and the design, this considers to support multiple 
> Erasure Codecs via pluggable approach. It allows to define and configure 
> multiple codec schemas with different coding algorithms and parameters. The 
> resultant codec schemas can be utilized and specified via command tool for 
> different file folders. While design and implement such pluggable framework, 
> it’s also to implement a concrete codec by default (Reed Solomon) to prove 
> the framework is useful and workable. Separate JIRA could be opened for the 
> RS codec implementation.
> Note HDFS-7353 will focus on the very low level codec API and implementation 
> to make concrete vendor libraries transparent to the upper layer. This JIRA 
> focuses on high level stuffs that interact with configuration, schema and etc.





[jira] [Commented] (HDFS-9373) Show friendly information to user when client succeeds the writing with some failed streamers

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044583#comment-15044583
 ] 

Hadoop QA commented on HDFS-9373:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs-client (total was 8, now 9). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12776049/HDFS-9373-002.patch |
| JIRA Issue | HDFS-9373 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 96a48873af09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven 

[jira] [Updated] (HDFS-8901) Use ByteBuffer in striping positional read

2015-12-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8901:

Attachment: HDFS-8901-v4.patch

Thanks [~zhz] for the check. 
Updated the patch:
1. Fixed the test failure by bringing back necessary code from HDFS-8905;
2. Cleaned up some old checkstyle issues;
3. Refined the *TestDFSStripedInputStream* test a bit.

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-8901-v2.patch, HDFS-8901-v3.patch, 
> HDFS-8901-v4.patch, initial-poc.patch
>
>
> The native erasure coder prefers direct ByteBuffers for performance 
> reasons. To prepare for that, this change uses ByteBuffer throughout the 
> code implementing striped positional read. It also avoids unnecessary data 
> copying between striped-read chunk buffers and decode input 
> buffers.
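The reason native coders prefer direct buffers can be shown with plain JDK calls: a direct ByteBuffer lives outside the Java heap, so native code (e.g. via JNI) can read it in place, while a heap buffer is backed by a byte[] that would need copying. The buffer size below is arbitrary:

```java
import java.nio.ByteBuffer;

/** Sketch contrasting direct and heap ByteBuffers. */
class BufferKinds {
    static boolean isDirectPreferred() {
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);
        ByteBuffer heap = ByteBuffer.allocate(64 * 1024);
        // Heap buffers expose their backing array; direct buffers are
        // off-heap and addressable from native code without a copy.
        return direct.isDirect() && !heap.isDirect() && heap.hasArray();
    }
}
```

Passing direct buffers all the way down the striped-read path is what lets the decode step hand memory to the native coder without the extra copy the description mentions.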





[jira] [Commented] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044717#comment-15044717
 ] 

邓飞 commented on HDFS-9513:
--

  Usually the storage info can be obtained from the block, but in this case 
that does not seem to work.
  DatanodeManager#getDatanodeStorageInfos is called by:
1. FSNamesystem#commitBlockSynchronization
2. FSNamesystem#getAdditionalDatanode
3. FSNamesystem#updatePipelineInternal
  The first method is called from datanodes, so it is compatible.
  When a new block is added, the block with its storage info is stored in the 
INodeFile / blocksMap / edit log. But if a new datanode is added to the 
pipeline, the additional datanode and its storage info are not stored, because 
the client needs to try transferring the RBW block.
   So when FSNamesystem#updatePipelineInternal is called, the old block's 
storage info is not enough, and the storage info cannot be recovered anyway.
   Also, if the client is older than 2.3.0, the storage info is not useful: 
although the NN chooses a located block with storage info, the client's writes 
to the DN do not pass the storage ID.

  
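The actual fix is in the attached patch, not in this thread. As a self-contained illustration of the compatibility problem (all names here are hypothetical stand-ins, not Hadoop APIs): a pre-2.3.0 client sends no storage IDs, so pairing datanodes with `storageIds[i]` blindly throws ArrayIndexOutOfBoundsException, while guarding on the array length keeps the call tolerant of old clients.

```java
import java.util.Arrays;

public class StorageIdCompatSketch {
    // Hypothetical stand-in for pairing each datanode in the pipeline with
    // its reported storage ID. A 2.2.0 client sends an empty storage ID
    // list; indexing storageIds[i] unconditionally would throw
    // ArrayIndexOutOfBoundsException, as in the stack trace above. Checking
    // the length instead yields null ("storage unknown") for old clients.
    static String[] pairStorages(String[] datanodeIds, String[] storageIds) {
        String[] result = new String[datanodeIds.length];
        for (int i = 0; i < datanodeIds.length; i++) {
            result[i] = (i < storageIds.length) ? storageIds[i] : null;
        }
        return result;
    }

    public static void main(String[] args) {
        // New client: one storage ID per datanode.
        System.out.println(Arrays.toString(
            pairStorages(new String[]{"dn1", "dn2"},
                         new String[]{"s1", "s2"})));
        // Old (2.2.0) client: empty storage ID list; no exception is thrown.
        System.out.println(Arrays.toString(
            pairStorages(new String[]{"dn1", "dn2"}, new String[]{})));
    }
}
```

Whether returning nulls is acceptable downstream is exactly the design question for the patch; this sketch only shows where the out-of-bounds access comes from.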

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline 
> recovery: the NN needs the DNs' storageInfo to update the pipeline, and the 
> storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older 
> versions do not have storageIDs. Although protobuf serialization keeps the 
> protocol wire-compatible, the client receives a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> 

[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044807#comment-15044807
 ] 

Steve Loughran commented on HDFS-9514:
--

{code}
Error Message

wrong exception:java.lang.AssertionError: write should timeout
Stacktrace

java.lang.AssertionError: wrong exception:java.lang.AssertionError: write 
should timeout
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1148)
{code}

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-9514:
-

Assignee: Wei-Chiu Chuang

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044813#comment-15044813
 ] 

Wei-Chiu Chuang commented on HDFS-9514:
---

Thanks for reporting the issue. I would like to work on it. There are also a 
few other similar cases where a Throwable is thrown but swallowed.

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

邓飞 updated HDFS-9513:
-
Attachment: patch.HDFS-9513.20151207

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
> Attachments: patch.HDFS-9513.20151207
>
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline 
> recovery: the NN needs the DNs' storageInfo to update the pipeline, and the 
> storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older 
> versions do not have storageIDs. Although protobuf serialization keeps the 
> protocol wire-compatible, the client receives a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9513 started by 邓飞.

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline 
> recovery: the NN needs the DNs' storageInfo to update the pipeline, and the 
> storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older 
> versions do not have storageIDs. Although protobuf serialization keeps the 
> protocol wire-compatible, the client receives a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9472) concat() API does not resolve the .reserved path

2015-12-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-9472:
---
Attachment: HDFS-9472-01.patch

> concat() API does not resolve the .reserved path
> 
>
> Key: HDFS-9472
> URL: https://issues.apache.org/jira/browse/HDFS-9472
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-9472-00.patch, HDFS-9472-01.patch
>
>
> The dfs#concat() API doesn't resolve the {{/.reserved/raw}} path. For 
> example, if the input paths are of the form {{/.reserved/raw/ezone/a}}, the 
> API doesn't work properly. We can discuss here whether to support this 
> behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9472) concat() API does not resolve the .reserved path

2015-12-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15044826#comment-15044826
 ] 

Rakesh R commented on HDFS-9472:


Attached a new patch that throws {{InvalidPathException}}. Please review, 
thanks! I will change the JIRA subject line/description once we agree on this.
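The patch itself is attached to the JIRA; as a self-contained illustration only (hypothetical names, and plain IllegalArgumentException standing in for HDFS's InvalidPathException), rejecting {{/.reserved}} inputs up front might look like:

```java
public class ReservedPathCheckSketch {
    // Assumption: the reserved prefix is "/.reserved", as in the paths
    // discussed above.
    static final String RESERVED_PREFIX = "/.reserved";

    // Hypothetical precondition for concat(): reject reserved paths early
    // with a clear error instead of failing later while resolving them.
    static void checkNotReserved(String path) {
        if (path.equals(RESERVED_PREFIX)
                || path.startsWith(RESERVED_PREFIX + "/")) {
            // Stands in for throwing InvalidPathException.
            throw new IllegalArgumentException(
                "concat: unsupported reserved path " + path);
        }
    }

    public static void main(String[] args) {
        checkNotReserved("/user/a/part-0"); // ordinary path: accepted
        try {
            checkNotReserved("/.reserved/raw/ezone/a");
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

This prints `rejected` for the reserved input; whether concat() should reject or resolve such paths is the question the patch puts up for review.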

> concat() API does not resolve the .reserved path
> 
>
> Key: HDFS-9472
> URL: https://issues.apache.org/jira/browse/HDFS-9472
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-9472-00.patch, HDFS-9472-01.patch
>
>
> The dfs#concat() API doesn't resolve the {{/.reserved/raw}} path. For 
> example, if the input paths are of the form {{/.reserved/raw/ezone/a}}, the 
> API doesn't work properly. We can discuss here whether to support this 
> behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9514:


 Summary: TestDistributedFileSystem.testDFSClientPeerWriteTimeout 
failing; exception being swallowed
 Key: HDFS-9514
 URL: https://issues.apache.org/jira/browse/HDFS-9514
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran


{{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
the wrong exception being raised... the reporter isn't using the 
{{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9513 started by 邓飞.

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline 
> recovery: the NN needs the DNs' storageInfo to update the pipeline, and the 
> storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older 
> versions do not have storageIDs. Although protobuf serialization keeps the 
> protocol wire-compatible, the client receives a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9513 stopped by 邓飞.

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline 
> recovery: the NN needs the DNs' storageInfo to update the pipeline, and the 
> storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]); older 
> versions do not have storageIDs. Although protobuf serialization keeps the 
> protocol wire-compatible, the client receives a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9514:
--
Status: Patch Available  (was: Open)

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9514.001.patch
>
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9514:
--
Attachment: HDFS-9514.001.patch

Throw a new IOException that carries the original exception as its cause.

Both testDFSClientPeerReadTimeout() and testDFSClientPeerWriteTimeout() have 
the same exception-handling logic, so the patch covers both methods.
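The general pattern being described, sketched in self-contained form (the method and message names here are illustrative, not the patch's actual code): rather than catching an unexpected Throwable and failing with a bare message, wrap it in an IOException so the original stack appears as the cause chain in the test report.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

public class WrapCauseSketch {
    // Illustrative check: the test expects a SocketTimeoutException; any
    // other Throwable is rethrown wrapped, so its details are not swallowed.
    static void checkTimeout(Throwable caught) throws IOException {
        if (!(caught instanceof SocketTimeoutException)) {
            throw new IOException("wrong exception: " + caught, caught);
        }
    }

    public static void main(String[] args) {
        try {
            // Simulate the unexpected outcome seen in the Jenkins failure.
            checkTimeout(new AssertionError("write should timeout"));
        } catch (IOException e) {
            // The original AssertionError survives as the cause, so the
            // report shows what actually went wrong.
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

Printing `write should timeout` here confirms the cause chain is preserved end to end.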

> TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception 
> being swallowed
> --
>
> Key: HDFS-9514
> URL: https://issues.apache.org/jira/browse/HDFS-9514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9514.001.patch
>
>
> {{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing with 
> the wrong exception being raised... the reporter isn't using the 
> {{GenericTestUtils}} code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9319) Make DatanodeInfo thread safe

2015-12-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046242#comment-15046242
 ] 

Mingliang Liu commented on HDFS-9319:
-

+1 (non-binding)

> Make DatanodeInfo thread safe
> -
>
> Key: HDFS-9319
> URL: https://issues.apache.org/jira/browse/HDFS-9319
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9319.000.patch
>
>
> This jira plans to make DatanodeInfo's internal state independent of 
> external locks. Note that because DatanodeInfo extends DatanodeID, we still 
> need to change DatanodeID as follow-on work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045563#comment-15045563
 ] 

Hadoop QA commented on HDFS-9514:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 189m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | 

[jira] [Updated] (HDFS-9034) "StorageTypeStats" Metric should not count failed storage.

2015-12-07 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9034:
-
Attachment: HDFS-9034.04.patch

Attached an updated patch: fixed the test case and checkstyle warnings.
Please review.

> "StorageTypeStats" Metric should not count failed storage.
> --
>
> Key: HDFS-9034
> URL: https://issues.apache.org/jira/browse/HDFS-9034
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9034.01.patch, HDFS-9034.02.patch, 
> HDFS-9034.03.patch, HDFS-9034.04.patch, dfsStorage_NN_UI2.png
>
>
> When we remove one storage type from all the DNs, the NN UI still shows an 
> entry for that storage type --
> Ex: for ARCHIVE
> Steps --
> 1. ARCHIVE Storage type was added for all DNs
> 2. Stop DNs
> 3. Removed ARCHIVE Storages from all DNs
> 4. Restarted DNs
> NN UI shows below --
> DFS Storage Types
> Storage Type Configured Capacity Capacity Used Capacity Remaining 
> ARCHIVE   57.18 GB64 KB (0%)  39.82 GB (69.64%)   64 KB   
> 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9517) Make TestDistCpUtils.testUnpackAttributes testable

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045569#comment-15045569
 ] 

Hadoop QA commented on HDFS-9517:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 23s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 4s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_85. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12776155/HDFS-9517.001.patch |
| JIRA Issue | HDFS-9517 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 808987efafd2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01a641b |
| findbugs | v3.0.0 |
| JDK v1.7.0_85  Test Results | 

[jira] [Updated] (HDFS-9228) libhdfs++ should respect NN retry configuration settings

2015-12-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9228:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Looks good to me, +1.

I've committed this to HDFS-8707.  Thanks Bob!

> libhdfs++ should respect NN retry configuration settings
> 
>
> Key: HDFS-9228
> URL: https://issues.apache.org/jira/browse/HDFS-9228
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9228.HDFS-8707.001.patch, 
> HDFS-9228.HDFS-8707.002.patch, HDFS-9228.HDFS-8707.003.patch, 
> HDFS-9228.HDFS-8707.004.patch, HDFS-9228.HDFS-8707.005.patch, 
> HDFS-9228.HDFS-8707.006.patch, HDFS-9228.HDFS-8707.007.patch, 
> HDFS-9228.HDFS-8707.008.patch, HDFS-9228.HDFS-8707.009.patch, 
> HDFS-9228.HDFS-8707.010.patch, HDFS-9228.HDFS-8707.011.patch
>
>
> Handle the use case of temporary network or NN hiccups and have a 
> configurable number of retries for NN operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9486) libhdfs++ Fix valgrind failures when using more than 1 io_service worker thread.

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045582#comment-15045582
 ] 

James Clampffer commented on HDFS-9486:
---

I didn't expect this patch to fix the issue with the CI test; it was intended 
to fix a specific issue that showed up in real-world use.  Unfortunately, the 
bug showing up in CI seems to be influenced quite a bit by environment: it 
almost never fails on my workstation but always fails in the docker environment.

> libhdfs++ Fix valgrind failures when using more than 1 io_service worker 
> thread.
> 
>
> Key: HDFS-9486
> URL: https://issues.apache.org/jira/browse/HDFS-9486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9486-stacks-sanitized.txt, 
> HDFS-9486.HDFS-8707.000.patch, HDFS-9486.HDFS-8707.001.patch
>
>
> Valgrind catches an invalid read of size 8.  Setup: 4 io_service worker 
> threads, 64 threads doing open-read-close on a small file.
> Stack:
> ==8351== Invalid read of size 8
> ==8351==at 0x51F45C: 
> asio::detail::reactive_socket_recv_op asio::detail::read_op asio::stream_socket_service >, asio::mutable_buffers_1, 
> asio::detail::transfer_all_t, std::_Bind asio::stream_socket_service > >::*)(std::error_code const&, 
> unsigned long)> 
> (hdfs::RpcConnectionImpl asio::stream_socket_service > >*, std::_Placeholder<1>, 
> std::_Placeholder<2>)> > >::do_complete(asio::detail::task_io_service*, 
> asio::detail::task_io_service_operation*, std::error_code const&, unsigned 
> long) (functional:601)
> ==8351==by 0x508B10: hdfs::IoServiceImpl::Run() 
> (task_io_service_operation.hpp:37)
> ==8351==by 0x55BCBEF: ??? (in 
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
> ==8351==by 0x5A2D181: start_thread (pthread_create.c:312)
> ==8351==by 0x5D3D47C: clone (clone.S:111)
> ==8351==  Address 0x67e3eb0 is 0 bytes inside a block of size 216 free'd
> ==8351==at 0x4C2C2BC: operator delete(void*) (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==8351==by 0x51F7B2: 
> hdfs::RpcConnectionImpl asio::stream_socket_service > >::~RpcConnectionImpl() 
> (rpc_connection.h:32)
> ==8351==by 0x50C104: hdfs::FileSystemImpl::~FileSystemImpl() 
> (unique_ptr.h:67)
> ==8351==by 0x503A10: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (unique_ptr.h:67)
> ==8351==by 0x503B28: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (hdfs_cpp.cc:140)
> ==8351==by 0x503580: hdfs_internal::~hdfs_internal() (unique_ptr.h:67)
> ==8351==by 0x502FEE: hdfsDisconnect (hdfs.cc:127)
> ==8351==by 0x5010B7: main (threaded_stress_test.cc:74)
> ==8351== 
> pure virtual method called
> terminate called without an active exception



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9486) libhdfs++ Fix valgrind failures when using more than 1 io_service worker thread.

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045596#comment-15045596
 ] 

James Clampffer commented on HDFS-9486:
---

Given that this falls into [~bobhansen]'s former case, I went ahead and 
committed this.  Thanks for the reviews, Bob.

> libhdfs++ Fix valgrind failures when using more than 1 io_service worker 
> thread.
> 
>
> Key: HDFS-9486
> URL: https://issues.apache.org/jira/browse/HDFS-9486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9486-stacks-sanitized.txt, 
> HDFS-9486.HDFS-8707.000.patch, HDFS-9486.HDFS-8707.001.patch
>
>
> Valgrind catches an invalid read of size 8.  Setup: 4 io_service worker 
> threads, 64 threads doing open-read-close on a small file.
> Stack:
> ==8351== Invalid read of size 8
> ==8351==at 0x51F45C: 
> asio::detail::reactive_socket_recv_op asio::detail::read_op asio::stream_socket_service >, asio::mutable_buffers_1, 
> asio::detail::transfer_all_t, std::_Bind asio::stream_socket_service > >::*)(std::error_code const&, 
> unsigned long)> 
> (hdfs::RpcConnectionImpl asio::stream_socket_service > >*, std::_Placeholder<1>, 
> std::_Placeholder<2>)> > >::do_complete(asio::detail::task_io_service*, 
> asio::detail::task_io_service_operation*, std::error_code const&, unsigned 
> long) (functional:601)
> ==8351==by 0x508B10: hdfs::IoServiceImpl::Run() 
> (task_io_service_operation.hpp:37)
> ==8351==by 0x55BCBEF: ??? (in 
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
> ==8351==by 0x5A2D181: start_thread (pthread_create.c:312)
> ==8351==by 0x5D3D47C: clone (clone.S:111)
> ==8351==  Address 0x67e3eb0 is 0 bytes inside a block of size 216 free'd
> ==8351==at 0x4C2C2BC: operator delete(void*) (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==8351==by 0x51F7B2: 
> hdfs::RpcConnectionImpl asio::stream_socket_service > >::~RpcConnectionImpl() 
> (rpc_connection.h:32)
> ==8351==by 0x50C104: hdfs::FileSystemImpl::~FileSystemImpl() 
> (unique_ptr.h:67)
> ==8351==by 0x503A10: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (unique_ptr.h:67)
> ==8351==by 0x503B28: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (hdfs_cpp.cc:140)
> ==8351==by 0x503580: hdfs_internal::~hdfs_internal() (unique_ptr.h:67)
> ==8351==by 0x502FEE: hdfsDisconnect (hdfs.cc:127)
> ==8351==by 0x5010B7: main (threaded_stress_test.cc:74)
> ==8351== 
> pure virtual method called
> terminate called without an active exception



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9486) libhdfs++ Fix valgrind failures when using more than 1 io_service worker thread.

2015-12-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9486:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++ Fix valgrind failures when using more than 1 io_service worker 
> thread.
> 
>
> Key: HDFS-9486
> URL: https://issues.apache.org/jira/browse/HDFS-9486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9486-stacks-sanitized.txt, 
> HDFS-9486.HDFS-8707.000.patch, HDFS-9486.HDFS-8707.001.patch
>
>
> Valgrind catches an invalid read of size 8.  Setup: 4 io_service worker 
> threads, 64 threads doing open-read-close on a small file.
> Stack:
> ==8351== Invalid read of size 8
> ==8351==at 0x51F45C: 
> asio::detail::reactive_socket_recv_op asio::detail::read_op asio::stream_socket_service >, asio::mutable_buffers_1, 
> asio::detail::transfer_all_t, std::_Bind asio::stream_socket_service > >::*)(std::error_code const&, 
> unsigned long)> 
> (hdfs::RpcConnectionImpl asio::stream_socket_service > >*, std::_Placeholder<1>, 
> std::_Placeholder<2>)> > >::do_complete(asio::detail::task_io_service*, 
> asio::detail::task_io_service_operation*, std::error_code const&, unsigned 
> long) (functional:601)
> ==8351==by 0x508B10: hdfs::IoServiceImpl::Run() 
> (task_io_service_operation.hpp:37)
> ==8351==by 0x55BCBEF: ??? (in 
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
> ==8351==by 0x5A2D181: start_thread (pthread_create.c:312)
> ==8351==by 0x5D3D47C: clone (clone.S:111)
> ==8351==  Address 0x67e3eb0 is 0 bytes inside a block of size 216 free'd
> ==8351==at 0x4C2C2BC: operator delete(void*) (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==8351==by 0x51F7B2: 
> hdfs::RpcConnectionImpl asio::stream_socket_service > >::~RpcConnectionImpl() 
> (rpc_connection.h:32)
> ==8351==by 0x50C104: hdfs::FileSystemImpl::~FileSystemImpl() 
> (unique_ptr.h:67)
> ==8351==by 0x503A10: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (unique_ptr.h:67)
> ==8351==by 0x503B28: hdfs::HadoopFileSystem::~HadoopFileSystem() 
> (hdfs_cpp.cc:140)
> ==8351==by 0x503580: hdfs_internal::~hdfs_internal() (unique_ptr.h:67)
> ==8351==by 0x502FEE: hdfsDisconnect (hdfs.cc:127)
> ==8351==by 0x5010B7: main (threaded_stress_test.cc:74)
> ==8351== 
> pure virtual method called
> terminate called without an active exception



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9464) Documentation needs to be exposed

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045607#comment-15045607
 ] 

James Clampffer commented on HDFS-9464:
---

Thanks for the info [~aw]!  I'm hoping to get to this in the next week or two.  
It will certainly be fixed up before requesting to merge into trunk.

> Documentation needs to be exposed
> -
>
> Key: HDFS-9464
> URL: https://issues.apache.org/jira/browse/HDFS-9464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From the few builds I've done, there doesn't appear to be any user-facing 
> documentation that is actually exposed when mvn site is built.  HDFS-8745 
> allegedly added doxygen support, but even those docs aren't tied into the 
> docs and/or site build. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045624#comment-15045624
 ] 

Hadoop QA commented on HDFS-9515:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
36s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
49s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 55s 
{color} | {color:red} root-jdk1.8.0_66 with JDK v1.8.0_66 generated 3 new 
issues (was 752, now 752). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
29s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 24s 
{color} | {color:red} root-jdk1.7.0_91 with JDK v1.7.0_91 generated 2 new 
issues (was 746, now 746). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 44s {color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 39s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 55s {color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 29s 
{color} | 

[jira] [Commented] (HDFS-9465) No header files in mvn package

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045689#comment-15045689
 ] 

James Clampffer commented on HDFS-9465:
---

Thanks for the help [~aw]!


"IIRC, I built on OS X after hacking the hell out of the CMakeFiles to point to 
a working OpenSSL install (HDFS-9416)."

Sorry about the OS X issues.  I don't think anyone spending a significant 
amount of time on this at the moment is using OS X, so I can't say I'm shocked 
that it's breaking.  I'll need to borrow someone's Mac at some point and 
attempt to sort that out.  If you happen to have a snapshot of what you 
changed, even if it's a complete hack, please post it; I, or (hopefully) 
someone who knows how to use a Mac, can use that as a starting point.

"I think they need to end up in target/usr/local/include, but I'd need to 
double check."

That makes sense; I'm still figuring out the maven packaging and distribution 
system.  I'm guessing the easiest thing for me to do would be to look at where 
the libhdfs headers end up in a distribution snapshot.

"Yes. That's exactly why this is a blocker.  The distro already has an include/ 
dir so just need for the appropriate content to show up there."

Makes a lot of sense to me; will certainly get that working.  I unassigned 
myself from this jira in case someone wants to jump on it while I'm working 
on memory stomps.  If that doesn't happen I'll pick this up before attempting 
to merge to trunk.

"It needs to be whatever header files are necessary for a user to actually use 
the library."

And today I found out I've been doing this wrong the whole time :).  Last time 
I used libhdfs I manually pulled out the headers and stuck them in my project.  
I figured there was a sane way to do it.



> No header files in mvn package
> --
>
> Key: HDFS-9465
> URL: https://issues.apache.org/jira/browse/HDFS-9465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> The current build appears to only include the shared library and no header 
> files to actually use the library in the final maven binary build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9441) Do not construct path string when choosing block placement targets

2015-12-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045701#comment-15045701
 ] 

Daryn Sharp commented on HDFS-9441:
---

We should either pass a BlockCollection that might be null for webhdfs, or just 
remove the argument entirely.  Using an Object that is either a String or a 
BlockCollection, and relying on BlockCollection.toString() to return a path, 
would cement toString() as a formal API for placement.  That makes me uneasy.

The plugin interface is marked as private and none of the provided 
implementations use it, so I'd vote to remove it.
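To illustrate the concern about toString() becoming a de-facto API, here is a 
minimal, self-contained sketch.  All names in it (BlockCollectionLike, 
DemoFile, srcPathOf) are hypothetical stand-ins, not actual HDFS classes: the 
point is that a typed parameter with an explicit accessor (which may be null 
for callers like webhdfs) leaves toString() free for debugging output.

```java
// Hypothetical shapes illustrating the typed-parameter alternative.
interface BlockCollectionLike {
    String getName(); // explicit, typed accessor for the source path
}

final class DemoFile implements BlockCollectionLike {
    private final String path;

    DemoFile(String path) { this.path = path; }

    @Override
    public String getName() { return path; }

    // toString() remains a debugging aid, not a contract callers rely on.
    @Override
    public String toString() { return "DemoFile{" + path + "}"; }
}

public class PlacementArgDemo {
    // Typed variant: null is allowed for callers with no backing file.
    static String srcPathOf(BlockCollectionLike bc) {
        return bc == null ? null : bc.getName();
    }

    public static void main(String[] args) {
        System.out.println(srcPathOf(new DemoFile("/user/a/f"))); // /user/a/f
        System.out.println(srcPathOf(null));                      // null
    }
}
```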

> Do not construct path string when choosing block placement targets
> --
>
> Key: HDFS-9441
> URL: https://issues.apache.org/jira/browse/HDFS-9441
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9441_20151118.patch, h9441_20151119.patch
>
>
> - INodeFile.getName() is expensive since it involves quite a few string 
> operations.  The method is called in both ReplicationWork and 
> ErasureCodingWork but the default BlockPlacementPolicy does not use the 
> returned string.  We should simply pass BlockCollection to reduce unnecessary 
> computation when using the default BlockPlacementPolicy.
> - Another improvement: the return type of FSNamesystem.getBlockCollection 
> should be changed to INodeFile since it always returns an INodeFile object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9458) TestBackupNode always binds to port 50070, which can cause bind failures.

2015-12-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9458:

Assignee: Xiao Chen  (was: Xiaobing Zhou)

> TestBackupNode always binds to port 50070, which can cause bind failures.
> -
>
> Key: HDFS-9458
> URL: https://issues.apache.org/jira/browse/HDFS-9458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>
> {{TestBackupNode}} does not override port settings to use a dynamically 
> selected port for the NameNode HTTP server.  It uses the default of 50070 
> defined in hdfs-default.xml.  This should be changed to select a dynamic port 
> to avoid bind errors.
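The dynamic-port approach described above amounts to binding to port 0 and 
letting the OS choose a free port.  A self-contained sketch with plain 
java.net (not MiniDFSCluster, whose configuration keys are omitted here):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Binding to port 0 asks the OS for any free port, avoiding
    // collisions with a fixed default such as 50070.
    static int bindEphemeral() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort(); // the port the OS actually chose
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bound to port " + bindEphemeral());
    }
}
```

In a test, the equivalent is configuring the NameNode HTTP address with port 
0 so each run picks a fresh port instead of contending for 50070.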



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9489) Enable CI infrasructure to use libhdfs++ hdfsRead

2015-12-07 Thread Stephen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen updated HDFS-9489:
--
Status: Patch Available  (was: Open)

> Enable CI infrasructure to use libhdfs++ hdfsRead
> -
>
> Key: HDFS-9489
> URL: https://issues.apache.org/jira/browse/HDFS-9489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Stephen
> Attachments: HDFS-9489.HDFS-8707.001.patch
>
>
> CI tests are built against a shim layer that delegates work to libhdfs or 
> libhdfs++.  Now that stateful reads are available to libhdfs++ the CI system 
> should delegate hdfsRead to libhdfs++.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9489) Enable CI infrasructure to use libhdfs++ hdfsRead

2015-12-07 Thread Stephen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen updated HDFS-9489:
--
Attachment: HDFS-9489.HDFS-8707.001.patch

Test libhdfspp's implementation of hdfsRead.

> Enable CI infrastructure to use libhdfs++ hdfsRead
> -
>
> Key: HDFS-9489
> URL: https://issues.apache.org/jira/browse/HDFS-9489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Stephen
> Attachments: HDFS-9489.HDFS-8707.001.patch
>
>
> CI tests are built against a shim layer that delegates work to libhdfs or 
> libhdfs++.  Now that stateful reads are available to libhdfs++ the CI system 
> should delegate hdfsRead to libhdfs++.





[jira] [Created] (HDFS-9518) LeaseRenewer - Improve Renew Method

2015-12-07 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-9518:
-

 Summary: LeaseRenewer - Improve Renew Method
 Key: HDFS-9518
 URL: https://issues.apache.org/jira/browse/HDFS-9518
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.1
Reporter: BELUGA BEHR
Priority: Minor


Replaced the current implementation of 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer#renew()





[jira] [Updated] (HDFS-9518) LeaseRenewer - Improve Renew Method

2015-12-07 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-9518:
--
Attachment: LeaseRenewer.patch

> LeaseRenewer - Improve Renew Method
> ---
>
> Key: HDFS-9518
> URL: https://issues.apache.org/jira/browse/HDFS-9518
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: LeaseRenewer.patch
>
>
> Replaced the current implementation of 
> org.apache.hadoop.hdfs.client.impl.LeaseRenewer#renew()





[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045809#comment-15045809
 ] 

Jing Zhao commented on HDFS-9129:
-

+1 for the branch-2 patch. I've committed it into branch-2 as well.

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9129:

Fix Version/s: (was: 3.0.0)

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9129:

Fix Version/s: 2.9.0

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9129:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045815#comment-15045815
 ] 

Mingliang Liu commented on HDFS-9129:
-

Thanks for reviewing and committing the {{branch-2}} patch, [~jingzhao].

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045842#comment-15045842
 ] 

Hudson commented on HDFS-9129:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8936 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8936/])
Move HDFS-9129 from trunk to branch-2.9.0 (jing9: rev 
7fa9ea85d47dec1702f113151eb437d5e3155e75)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.





[jira] [Commented] (HDFS-9460) libhdfs++: suppress warnings from third-party libraries

2015-12-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045873#comment-15045873
 ] 

James Clampffer commented on HDFS-9460:
---

Looks good to me.  Would you mind rebasing it?  It doesn't apply to head 
anymore, and since I don't have the same compiler warnings I wouldn't be sure 
whether I got the rebase right.

> libhdfs++: suppress warnings from third-party libraries
> ---
>
> Key: HDFS-9460
> URL: https://issues.apache.org/jira/browse/HDFS-9460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9460.HDFS-8707.001.patch, 
> HDFS-9460.HDFS-8707.002.patch
>
>






[jira] [Commented] (HDFS-9489) Enable CI infrastructure to use libhdfs++ hdfsRead

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045876#comment-15045876
 ] 

Hadoop QA commented on HDFS-9489:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
58s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 34s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 31s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 4s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 2s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_85. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 427 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12776179/HDFS-9489.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9489 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 6d868d6aaec2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 894e962 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13796/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13796/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_85.txt
 |
| JDK v1.7.0_85  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13796/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13796/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 75MB |
| Powered by | Apache Yetus http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13796/console |



[jira] [Updated] (HDFS-9414) Refactor reconfiguration of ClientDatanodeProtocol for reusability

2015-12-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9414:

Attachment: HDFS-9414.004.patch

> Refactor reconfiguration of ClientDatanodeProtocol for reusability
> --
>
> Key: HDFS-9414
> URL: https://issues.apache.org/jira/browse/HDFS-9414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-9414.001.patch, HDFS-9414.002.patch, 
> HDFS-9414.003.patch, HDFS-9414.004.patch
>
>
> Since reconfiguration is reused by both DataNode and NameNode, this work 
> proposes to refactor that part in ClientDatanodeProtocol to be reused by 
> dedicated ReconfigurationProtocol. 





[jira] [Commented] (HDFS-9414) Refactor reconfiguration of ClientDatanodeProtocol for reusability

2015-12-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045880#comment-15045880
 ] 

Xiaobing Zhou commented on HDFS-9414:
-

Thanks [~Naganarasimha] and [~arpitagarwal]. V004 only fixed the checkstyle 
issues.

> Refactor reconfiguration of ClientDatanodeProtocol for reusability
> --
>
> Key: HDFS-9414
> URL: https://issues.apache.org/jira/browse/HDFS-9414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-9414.001.patch, HDFS-9414.002.patch, 
> HDFS-9414.003.patch, HDFS-9414.004.patch
>
>
> Since reconfiguration is reused by both DataNode and NameNode, this work 
> proposes to refactor that part in ClientDatanodeProtocol to be reused by 
> dedicated ReconfigurationProtocol. 





[jira] [Updated] (HDFS-9414) Refactor reconfiguration of ClientDatanodeProtocol for reusability

2015-12-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9414:

Status: Patch Available  (was: Reopened)

> Refactor reconfiguration of ClientDatanodeProtocol for reusability
> --
>
> Key: HDFS-9414
> URL: https://issues.apache.org/jira/browse/HDFS-9414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-9414.001.patch, HDFS-9414.002.patch, 
> HDFS-9414.003.patch, HDFS-9414.004.patch
>
>
> Since reconfiguration is reused by both DataNode and NameNode, this work 
> proposes to refactor that part in ClientDatanodeProtocol to be reused by 
> dedicated ReconfigurationProtocol. 





[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-12-07 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045900#comment-15045900
 ] 

Chris Trezzo commented on HDFS-8578:


One possible solution to limit memory growth is to rewrite 
{{DataStorage#linkBlocks}} and {{DataStorage#linkBlocksHelper}} to use a 
producer/consumer model with a bounded queue. For example, you could use a 
[LinkedBlockingQueue|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/LinkedBlockingQueue.html]
 with a fixed capacity. Roughly, the logic in the 
{{DataStorage#linkBlocksHelper}} method would become the producer that adds 
LinkArgs objects to the queue. The logic in the linkWorkers ExecutorService 
would do what it does now, except it would pull LinkArgs objects out of 
the queue.

[~vinayrpet] Do you want to take a crack at this? If you don't have time in the 
next day or so let me know and I will take a look.
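The producer/consumer shape described above can be sketched as a standalone toy (written against Java 8 for brevity; {{LinkArgs}} here is a hypothetical stand-in for the real DataStorage class, and the queue capacity and thread count are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedLinkQueueDemo {
    // Hypothetical stand-in for DataStorage's LinkArgs.
    static final class LinkArgs {
        final String src, dst;
        LinkArgs(String src, String dst) { this.src = src; this.dst = dst; }
    }
    static final LinkArgs POISON = new LinkArgs("", "");

    public static void main(String[] args) throws Exception {
        final int numWorkers = 2;
        // Bounded queue: the producer's put() blocks while it is full,
        // which is what caps memory growth.
        final BlockingQueue<LinkArgs> queue = new LinkedBlockingQueue<>(4);
        ExecutorService linkWorkers = Executors.newFixedThreadPool(numWorkers);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < numWorkers; i++) {
            results.add(linkWorkers.submit(() -> {
                int linked = 0;
                while (true) {
                    LinkArgs a = queue.take();   // consumer blocks while empty
                    if (a == POISON) break;      // poison pill ends the worker
                    linked++;                    // real code would hard-link a.src to a.dst
                }
                return linked;
            }));
        }
        // Producer: the role linkBlocksHelper would play, feeding work in.
        for (int b = 0; b < 100; b++) {
            queue.put(new LinkArgs("blk_" + b, "current/blk_" + b));
        }
        for (int i = 0; i < numWorkers; i++) {
            queue.put(POISON);                   // one pill per worker
        }
        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();
        }
        linkWorkers.shutdown();
        System.out.println(total);
    }
}
```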

> On upgrade, Datanode should process all storage/data dirs in parallel
> -
>
> Key: HDFS-8578
> URL: https://issues.apache.org/jira/browse/HDFS-8578
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Raju Bairishetti
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, 
> HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, 
> HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, 
> HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, 
> HDFS-8578-12.patch, HDFS-8578-branch-2.6.0.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs 
> sequentially. Assuming it takes ~20 minutes to process a single storage dir, a 
> datanode with ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>   assert getCTime() == nsInfo.getCTime() 
>   : "Data-node and name-node CTimes must be the same.";
> }
> {code}
> It would save lots of time during major upgrades if the datanode processed all 
> storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?





[jira] [Commented] (HDFS-9441) Do not construct path string when choosing block placement targets

2015-12-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045904#comment-15045904
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9441:
---

> ... relying on BlockCollection.toString() to return a path, will cement 
> toString() as a formal API for placement. ...

We are not going to change toString(), which currently returns the local name 
of an INode, to return the full path.  The block placement implementations need 
to call getName() if they want the full path of a BlockCollection.

A better API would be to change the type of src to a new interface (let's call 
it GetName for this discussion).
{code}
  interface GetName {
String getName();
  }
{code}
Then we could pass either a String or a BlockCollection by creating anonymous 
class objects, i.e.
- String
{code}
  final String src = ...;  // must be final to be captured by the anonymous class
  GetName name = new GetName() {
@Override
public String getName() {
  return src;
}
  };
{code}
- BlockCollection
{code}
  final BlockCollection bc = ...;  // must be final to be captured by the anonymous class
  GetName name = new GetName() {
@Override
public String getName() {
  return bc.getName();
}
  };
{code}

However, it seems like overdesign.
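As a side note (assuming a Java 8 target, which may not match the branches involved here, so this is illustrative only): since the proposed GetName has a single abstract method, each anonymous class collapses to a lambda, and the JDK's own {{java.util.function.Supplier<String>}} already has the same shape:

```java
import java.util.function.Supplier;

public class GetNameLambdaDemo {
    // The single-method interface proposed in the comment.
    interface GetName {
        String getName();
    }

    public static void main(String[] args) {
        final String src = "/user/foo/bar";  // hypothetical path
        // The anonymous-class variants become one-liners under Java 8,
        // and Supplier<String> could serve in place of a new interface.
        GetName name = () -> src;
        Supplier<String> supplier = () -> src;
        System.out.println(name.getName().equals(supplier.get()));
    }
}
```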

> Do not construct path string when choosing block placement targets
> --
>
> Key: HDFS-9441
> URL: https://issues.apache.org/jira/browse/HDFS-9441
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9441_20151118.patch, h9441_20151119.patch
>
>
> - INodeFile.getName() is expensive since it involves quite a few string 
> operations.  The method is called in both ReplicationWork and 
> ErasureCodingWork but the default BlockPlacementPolicy does not use the 
> returned string.  We should simply pass BlockCollection to reduce unnecessary 
> computation when using the default BlockPlacementPolicy.
> - Another improvement: the return type of FSNamesystem.getBlockCollection 
> should be changed to INodeFile since it always returns an INodeFile object.





[jira] [Commented] (HDFS-9034) "StorageTypeStats" Metric should not count failed storage.

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045994#comment-15045994
 ] 

Hadoop QA commented on HDFS-9034:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 81, now 81). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 110m 14s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 5s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 244m 4s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSClientFailover |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestBackupNode |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | 

[jira] [Updated] (HDFS-9517) Make TestDistCpUtils.testUnpackAttributes testable

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9517:
--
Status: Patch Available  (was: Open)

> Make TestDistCpUtils.testUnpackAttributes testable
> --
>
> Key: HDFS-9517
> URL: https://issues.apache.org/jira/browse/HDFS-9517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9517.001.patch
>
>
> The testUnpackAttributes() test method in TestDistCpUtils does not have an @Test 
> annotation, so it is never run as a test.
> I searched around and saw no discussion of why it was omitted, so I assume it was 
> just unintentional.





[jira] [Updated] (HDFS-9517) Make TestDistCpUtils.testUnpackAttributes testable

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9517:
--
Attachment: HDFS-9517.001.patch

Rev01: added the @Test annotation. The test passed locally.

> Make TestDistCpUtils.testUnpackAttributes testable
> --
>
> Key: HDFS-9517
> URL: https://issues.apache.org/jira/browse/HDFS-9517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9517.001.patch
>
>
> The testUnpackAttributes() test method in TestDistCpUtils does not have an @Test 
> annotation, so it is never run as a test.
> I searched around and saw no discussion of why it was omitted, so I assume it was 
> just unintentional.





[jira] [Assigned] (HDFS-9519) Some coding improvement in SecondaryNameNode#main

2015-12-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HDFS-9519:
---

Assignee: Xiao Chen

> Some coding improvement in SecondaryNameNode#main
> -
>
> Key: HDFS-9519
> URL: https://issues.apache.org/jira/browse/HDFS-9519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> Two nits:
> # The check of whether secondary is null is not necessary in the following 
> code in SecondaryNameNode.java. 
> # The comment in this code seems to imply that "when secondary is not null, 
> SNN was started as a daemon", which is not true. Suggest improving the 
> comment to make this clear.
> Assign to Xiao since he worked on HDFS-3059. Thanks Xiao.
> {code}
>   if (secondary != null) {
> // The web server is only needed when starting SNN as a daemon,
> // and not needed if called from shell command. Starting the web 
> server
> // from shell may fail when getting credentials, if the environment
> // is not set up for it, which is most of the case.
> secondary.startInfoServer();
> secondary.startCheckpointThread();
> secondary.join();
>   }
> {code}





[jira] [Created] (HDFS-9519) Some coding improvement in SecondaryNameNode#main

2015-12-07 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-9519:
---

 Summary: Some coding improvement in SecondaryNameNode#main
 Key: HDFS-9519
 URL: https://issues.apache.org/jira/browse/HDFS-9519
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Yongjun Zhang


Two nits:

# The check whether secondary is null in the following code in 
SecondaryNameNode.java is unnecessary.
# The comment seems to imply that "when secondary is not null, SNN was 
started as a daemon," which is not true. I suggest improving the comment to 
make this clear.

Assigning to Xiao since he worked on HDFS-3059. Thanks Xiao.

{code}
  if (secondary != null) {
// The web server is only needed when starting SNN as a daemon,
// and not needed if called from shell command. Starting the web server
// from shell may fail when getting credentials, if the environment
// is not set up for it, which is most of the case.
secondary.startInfoServer();

secondary.startCheckpointThread();
secondary.join();
  }
{code}
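As a self-contained sketch of the suggested cleanup (hypothetical class and method names, not the actual SecondaryNameNode code), the null guard is dead code when the factory method either terminates the JVM or returns a non-null instance:

```java
public class SnnMainSketch {
    static final class Secondary {
        boolean infoServerStarted;
        boolean checkpointThreadStarted;
        void startInfoServer() { infoServerStarted = true; }
        void startCheckpointThread() { checkpointThreadStarted = true; }
    }

    // Stand-in for SecondaryNameNode#createSecondaryNameNode, which either
    // terminates the JVM for command-style invocations or returns a usable
    // instance -- so it never returns null to this caller.
    static Secondary create(String[] args) {
        return new Secondary();
    }

    public static void main(String[] args) {
        Secondary secondary = create(args);
        // No "if (secondary != null)" guard; the comment should instead
        // explain that the web server is only needed in daemon mode.
        secondary.startInfoServer();
        secondary.startCheckpointThread();
    }
}
```

If `create` really cannot return null, the simplified flow above is behaviorally identical to the guarded original.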






[jira] [Commented] (HDFS-9519) Some coding improvement in SecondaryNameNode#main

2015-12-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046001#comment-15046001
 ] 

Xiao Chen commented on HDFS-9519:
-

Thanks for reporting this, Yongjun. I'll work on this soon.

> Some coding improvement in SecondaryNameNode#main
> -
>
> Key: HDFS-9519
> URL: https://issues.apache.org/jira/browse/HDFS-9519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> Two nits:
> # The check whether secondary is null in the following code in 
> SecondaryNameNode.java is unnecessary.
> # The comment seems to imply that "when secondary is not null, SNN was 
> started as a daemon," which is not true. I suggest improving the comment 
> to make this clear.
> Assigning to Xiao since he worked on HDFS-3059. Thanks Xiao.
> {code}
>   if (secondary != null) {
> // The web server is only needed when starting SNN as a daemon,
> // and not needed if called from shell command. Starting the web 
> server
> // from shell may fail when getting credentials, if the environment
> // is not set up for it, which is most of the case.
> secondary.startInfoServer();
> secondary.startCheckpointThread();
> secondary.join();
>   }
> {code}





[jira] [Commented] (HDFS-9519) Some coding improvement in SecondaryNameNode#main

2015-12-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046030#comment-15046030
 ] 

Mingliang Liu commented on HDFS-9519:
-

Thanks for reporting this, [~yzhangal]. The condition check is always true and 
the comment is confusing.

> Some coding improvement in SecondaryNameNode#main
> -
>
> Key: HDFS-9519
> URL: https://issues.apache.org/jira/browse/HDFS-9519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> Two nits:
> # The check whether secondary is null in the following code in 
> SecondaryNameNode.java is unnecessary.
> # The comment seems to imply that "when secondary is not null, SNN was 
> started as a daemon," which is not true. I suggest improving the comment 
> to make this clear.
> Assigning to Xiao since he worked on HDFS-3059. Thanks Xiao.
> {code}
>   if (secondary != null) {
> // The web server is only needed when starting SNN as a daemon,
> // and not needed if called from shell command. Starting the web 
> server
> // from shell may fail when getting credentials, if the environment
> // is not set up for it, which is most of the case.
> secondary.startInfoServer();
> secondary.startCheckpointThread();
> secondary.join();
>   }
> {code}





[jira] [Commented] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2015-12-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046042#comment-15046042
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9513:
---

Thanks for reporting the problem and working on it.

For commitBlockSynchronization and getAdditionalDatanode, NN has storageIDs 
stored in BlockUnderConstructionFeature.  So we should be able to get 
storageIDs from there.

For updatePipeline, NN has storageIDs stored in BlockUnderConstructionFeature 
except the newly added storageID.  We may choose a random storageID or the 
first storageID for this case.


> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: 邓飞
>Assignee: 邓飞
>Priority: Blocker
> Attachments: patch.HDFS-9513.20151207
>
>
> We upgraded our new HDFS cluster to 2.7.1, but our YARN cluster is still 
> 2.2.0 (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS 
> cluster).
> The compatibility issue occurs when the DataStreamer performs pipeline 
> recovery: the NN needs the DNs' storage info to update the pipeline, and 
> the storageIDs are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]), so older 
> clients do not send storageIDs. Although the protobuf serialization keeps 
> the protocol wire-compatible, the client gets a remote exception wrapping 
> an ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}
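One way to picture the defensive handling suggested above (hypothetical method and names, not the actual DatanodeManager code): treat a missing or short storageIDs array from an old client as "unknown" rather than indexing past its end.

```java
public class StorageLookupSketch {
    // Pairs each datanode with its reported storageID. A 2.2.0 client sends
    // no storageIDs at all, so the array can be null, empty, or shorter than
    // the datanode list. Guarding the index avoids the
    // ArrayIndexOutOfBoundsException from the stack trace; a null entry
    // signals "recover the ID from BlockUnderConstructionFeature instead".
    static String[] pairStorageIds(String[] datanodeIds, String[] storageIds) {
        String[] paired = new String[datanodeIds.length];
        for (int i = 0; i < datanodeIds.length; i++) {
            paired[i] = (storageIds != null && i < storageIds.length)
                ? storageIds[i]
                : null;
        }
        return paired;
    }
}
```

The real fix still has to decide what the NN does with the null slots (e.g. fall back to stored storageIDs, or pick the first/a random one for updatePipeline, as discussed above).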





[jira] [Updated] (HDFS-9517) Make TestDistCpUtils.testUnpackAttributes testable

2015-12-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9517:
--
Priority: Trivial  (was: Minor)

> Make TestDistCpUtils.testUnpackAttributes testable
> --
>
> Key: HDFS-9517
> URL: https://issues.apache.org/jira/browse/HDFS-9517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HDFS-9517.001.patch
>
>
> The testUnpackAttributes() method in TestDistCpUtils does not have the 
> @Test annotation, so it is never run as a test.
> I searched around and found no discussion of why the annotation was 
> omitted, so I assume it was unintentional.





[jira] [Updated] (HDFS-9512) TestBackupNode flakes with port in use error on 50070

2015-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9512:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Hi [~xiaochen].  This test failure is tracked in HDFS-9458, so I'm resolving 
this one as a duplicate.

> TestBackupNode flakes with port in use error on 50070
> -
>
> Key: HDFS-9512
> URL: https://issues.apache.org/jira/browse/HDFS-9512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9512.01.patch
>
>
> HDFS-5948 fixed one of these port-in-use failures, but I just hit another 
> one with the following stack trace:
> Error Message
> {noformat}
> Port in use: 0.0.0.0:50070
> {noformat}
> Stacktrace
> {noformat}
> java.net.BindException: Port in use: 0.0.0.0:50070
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:942)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:774)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:663)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:838)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:817)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1522)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNodeWithIncorrectAuthentication(TestBackupNode.java:165)
> {noformat}
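A common cure for this class of flake is to let the OS pick an ephemeral port (port 0) instead of hard-coding 50070. A tiny standalone illustration of why that avoids collisions:

```java
import java.net.ServerSocket;

public class EphemeralPortSketch {
    public static void main(String[] args) throws Exception {
        // Binding to port 0 asks the OS for any free port, so two
        // listeners can coexist; a fixed port like 50070 collides
        // whenever anything else already holds it.
        try (ServerSocket a = new ServerSocket(0);
             ServerSocket b = new ServerSocket(0)) {
            assert a.getLocalPort() > 0;
            assert a.getLocalPort() != b.getLocalPort();
        }
    }
}
```

In MiniDFSCluster-style tests this corresponds to configuring the HTTP address with port 0 rather than the default.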





[jira] [Created] (HDFS-9517) Make TestDistCpUtils.testUnpackAttributes testable

2015-12-07 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9517:
-

 Summary: Make TestDistCpUtils.testUnpackAttributes testable
 Key: HDFS-9517
 URL: https://issues.apache.org/jira/browse/HDFS-9517
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Affects Versions: 3.0.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


The testUnpackAttributes() method in TestDistCpUtils does not have the @Test 
annotation, so it is never run as a test.

I searched around and found no discussion of why the annotation was omitted, 
so I assume it was unintentional.
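The reason an unannotated method is silently skipped is that JUnit discovers tests by reflecting over annotations. A self-contained illustration with a stand-in annotation (JUnit itself is not assumed on the classpath):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationDiscoverySketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}  // stand-in for org.junit.Test

    static class SomeTests {
        @Test public void annotated() {}
        // Never discovered by the runner, like testUnpackAttributes().
        public void unannotated() {}
    }

    // Annotation-driven discovery, the same mechanism a test runner uses.
    static List<String> discover(Class<?> c) {
        List<String> found = new ArrayList<>();
        for (Method m : c.getMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                found.add(m.getName());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<String> found = discover(SomeTests.class);
        assert found.contains("annotated");
        assert !found.contains("unannotated");
    }
}
```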






[jira] [Commented] (HDFS-9510) FsVolume should add the operation of creating file's time metrics

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046312#comment-15046312
 ] 

Hadoop QA commented on HDFS-9510:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 42s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 18 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 520, now 537). {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12776244/HDFS-9510.004.patch |
| JIRA Issue | HDFS-9510 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a6b9465d3227 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Updated] (HDFS-9262) Support reconfiguring dfs.datanode.lazywriter.interval.sec without DN restart

2015-12-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9262:

Summary: Support reconfiguring dfs.datanode.lazywriter.interval.sec without 
DN restart  (was: Reconfigure DN lazy writer interval on the fly)

> Support reconfiguring dfs.datanode.lazywriter.interval.sec without DN restart
> -
>
> Key: HDFS-9262
> URL: https://issues.apache.org/jira/browse/HDFS-9262
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9262.001.patch
>
>
> This is to reconfigure
> {code}
> dfs.datanode.lazywriter.interval.sec
> {code}
> without restarting DN.





[jira] [Updated] (HDFS-9442) Move block replication logic from BlockManager to a new class ReplicationManager

2015-12-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9442:

Attachment: HDFS-9442.008.patch

The v8 patch moves the {{CorruptReplicasMap}} into {{ReplicationManager}}.

> Move block replication logic from BlockManager to a new class 
> ReplicationManager
> 
>
> Key: HDFS-9442
> URL: https://issues.apache.org/jira/browse/HDFS-9442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9442.000.patch, HDFS-9442.001.patch, 
> HDFS-9442.002.patch, HDFS-9442.003.patch, HDFS-9442.004.patch, 
> HDFS-9442.005.patch, HDFS-9442.006.patch, HDFS-9442.007.patch, 
> HDFS-9442.008.patch
>
>
> Currently the {{BlockManager}} manages all replication logic for over-, 
> under-, and mis-replicated blocks. This jira proposes moving that code to a 
> new class named {{ReplicationManager}} for cleaner logic, shorter source 
> files, and easier lock-separation work in the future.
> {{ReplicationManager}} is a package-local class that provides 
> {{BlockManager}} with methods accessing its internal replication-queue data 
> structures. It also maintains the lifecycle of the {{replicationThread}} 
> and {{replicationQueuesInitializer}} daemons.
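The shape of the proposed split can be sketched as plain delegation (hypothetical method names; the real patch moves far more state and the worker-thread lifecycle):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ReplicationSplitSketch {
    // Package-local in the real proposal; nested here to stay self-contained.
    // Owns the replication queue instead of BlockManager.
    static class ReplicationManager {
        private final Deque<String> neededReplications = new ArrayDeque<>();
        void enqueue(String block) { neededReplications.add(block); }
        int pending() { return neededReplications.size(); }
    }

    // BlockManager keeps its public surface but delegates queue work.
    static class BlockManager {
        private final ReplicationManager rm = new ReplicationManager();
        void markUnderReplicated(String block) { rm.enqueue(block); }
        int underReplicatedCount() { return rm.pending(); }
    }

    public static void main(String[] args) {
        BlockManager bm = new BlockManager();
        bm.markUnderReplicated("blk_1");
        assert bm.underReplicatedCount() == 1;
    }
}
```

Keeping the queue behind a narrow package-local interface is what later makes it possible to give replication its own lock.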





[jira] [Updated] (HDFS-9510) FsVolume should add the operation of creating file's time metrics

2015-12-07 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9510:

Attachment: HDFS-9510.004.patch

Resolved the compile error and the FsVolumeMetrics registration error.

> FsVolume should add the operation of creating file's time metrics
> -
>
> Key: HDFS-9510
> URL: https://issues.apache.org/jira/browse/HDFS-9510
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, fs
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9510.001.patch, HDFS-9510.002.patch, 
> HDFS-9510.003.patch, HDFS-9510.004.patch
>
>
> A datanode may have more than one data directory, and each data directory 
> corresponds to an FsVolume. Sometimes one of these directories creates 
> files or subdirectories slowly because of hardware problems, and that can 
> drag down the whole node. So we need metrics to identify slow-writing 
> disks.
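A standalone sketch of the metric idea (hypothetical names, not the actual FsVolume patch): wrap each file creation with a timer and keep a per-volume running total, so a slow disk stands out in metrics.

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

public class VolumeCreateTimingSketch {
    // One instance per volume (data directory) in the sketched design.
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong count = new AtomicLong();

    File createTimed(File dir, String name) throws IOException {
        long start = System.nanoTime();
        File f = new File(dir, name);
        f.createNewFile();  // the operation being measured
        totalNanos.addAndGet(System.nanoTime() - start);
        count.incrementAndGet();
        return f;
    }

    // Average creation latency in microseconds, the value a metrics
    // source would export per volume.
    long avgCreateMicros() {
        long n = count.get();
        return n == 0 ? 0 : totalNanos.get() / n / 1000;
    }

    public static void main(String[] args) throws IOException {
        VolumeCreateTimingSketch v = new VolumeCreateTimingSketch();
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        File f = v.createTimed(tmp, "vol-timing-sketch.tmp");
        assert f.exists();
        assert v.avgCreateMicros() >= 0;
        f.delete();
    }
}
```

Comparing this average across volumes on one datanode is what would flag the slow-writing disk.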





[jira] [Commented] (HDFS-9519) Some coding improvement in SecondaryNameNode#main

2015-12-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046309#comment-15046309
 ] 

Allen Wittenauer commented on HDFS-9519:


If the secondary namenode isn't actually configured but someone tries to start 
the 2nn, what happens?  Also, do Checkpoint and Backup have different entry 
points, or is this code path used for those too?

> Some coding improvement in SecondaryNameNode#main
> -
>
> Key: HDFS-9519
> URL: https://issues.apache.org/jira/browse/HDFS-9519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> Two nits:
> # The check whether secondary is null in the following code in 
> SecondaryNameNode.java is unnecessary.
> # The comment seems to imply that "when secondary is not null, SNN was 
> started as a daemon," which is not true. I suggest improving the comment 
> to make this clear.
> Assigning to Xiao since he worked on HDFS-3059. Thanks Xiao.
> {code}
>   if (secondary != null) {
> // The web server is only needed when starting SNN as a daemon,
> // and not needed if called from shell command. Starting the web 
> server
> // from shell may fail when getting credentials, if the environment
> // is not set up for it, which is most of the case.
> secondary.startInfoServer();
> secondary.startCheckpointThread();
> secondary.join();
>   }
> {code}





[jira] [Commented] (HDFS-9441) Do not construct path string when choosing block placement targets

2015-12-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046141#comment-15046141
 ] 

Mingliang Liu commented on HDFS-9441:
-

+1 (non-binding) to remove it.

> Do not construct path string when choosing block placement targets
> --
>
> Key: HDFS-9441
> URL: https://issues.apache.org/jira/browse/HDFS-9441
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9441_20151118.patch, h9441_20151119.patch
>
>
> - INodeFile.getName() is expensive since it involves quite a few string 
> operations.  The method is called in both ReplicationWork and 
> ErasureCodingWork but the default BlockPlacementPolicy does not use the 
> returned string.  We should simply pass BlockCollection to reduce unnecessary 
> computation when using the default BlockPlacementPolicy.
> - Another improvement: the return type of FSNamesystem.getBlockCollection 
> should be changed to INodeFile since it always returns an INodeFile object.
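The optimization can be pictured as deferring the expensive name construction behind a supplier that the default policy never invokes (hypothetical names; in the real code the object passed is the BlockCollection):

```java
import java.util.function.Supplier;

public class LazyPathSketch {
    static int nameComputations = 0;

    // Stands in for INodeFile.getName(): costly string assembly.
    static String expensivePath() {
        nameComputations++;
        return "/a/b/c/file";
    }

    // The default placement policy never asks for the path, so the
    // supplier is never invoked and no string is built on the hot path.
    static void chooseTargets(Supplier<String> path, boolean needsPath) {
        if (needsPath) {
            path.get();  // only custom policies pay the cost
        }
    }

    public static void main(String[] args) {
        chooseTargets(LazyPathSketch::expensivePath, false);
        assert nameComputations == 0;
        chooseTargets(LazyPathSketch::expensivePath, true);
        assert nameComputations == 1;
    }
}
```

Passing the object itself (rather than a pre-built string) achieves the same effect without any allocation when the path is unused.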





[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046155#comment-15046155
 ] 

Hudson commented on HDFS-9129:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #674 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/674/])
Move HDFS-9129 from trunk to branch-2.9.0 (jing9: rev 
7fa9ea85d47dec1702f113151eb437d5e3155e75)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9129-branch-2.025.patch, HDFS-9129.000.patch, 
> HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, 
> HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, 
> HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, 
> HDFS-9129.010.patch, HDFS-9129.011.patch, HDFS-9129.012.patch, 
> HDFS-9129.013.patch, HDFS-9129.014.patch, HDFS-9129.015.patch, 
> HDFS-9129.016.patch, HDFS-9129.017.patch, HDFS-9129.018.patch, 
> HDFS-9129.019.patch, HDFS-9129.020.patch, HDFS-9129.021.patch, 
> HDFS-9129.022.patch, HDFS-9129.023.patch, HDFS-9129.024.patch, 
> HDFS-9129.025.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can moved to the 
> {{BlockManager}} class.





[jira] [Commented] (HDFS-9414) Refactor reconfiguration of ClientDatanodeProtocol for reusability

2015-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046123#comment-15046123
 ] 

Hadoop QA commented on HDFS-9414:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 59s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 174m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.hdfs.server.namenode.TestBackupNode |
|   | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
|   | 

[jira] [Updated] (HDFS-9510) FsVolume should add the operation of creating file's time metrics

2015-12-07 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9510:

Attachment: (was: HDFS-9510.004.patch)

> FsVolume should add the operation of creating file's time metrics
> -
>
> Key: HDFS-9510
> URL: https://issues.apache.org/jira/browse/HDFS-9510
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, fs
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9510.001.patch, HDFS-9510.002.patch, 
> HDFS-9510.003.patch, HDFS-9510.004.patch
>
>
> A datanode may have more than one data directory, and each dataDir 
> corresponds to an FsVolume. Sometimes one of these dataDirs creates files or 
> dirs slowly because of hardware problems, and this can affect the whole 
> node. So we need to monitor these slow-writing disks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9510) FsVolume should add the operation of creating file's time metrics

2015-12-07 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9510:

Attachment: HDFS-9510.004.patch

> FsVolume should add the operation of creating file's time metrics
> -
>
> Key: HDFS-9510
> URL: https://issues.apache.org/jira/browse/HDFS-9510
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, fs
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9510.001.patch, HDFS-9510.002.patch, 
> HDFS-9510.003.patch, HDFS-9510.004.patch
>
>
> A datanode may have more than one data directory, and each dataDir 
> corresponds to an FsVolume. Sometimes one of these dataDirs creates files or 
> dirs slowly because of hardware problems, and this can affect the whole 
> node. So we need to monitor these slow-writing disks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)