[jira] [Commented] (HDFS-9121) Remove unnecessary "+" symbol from BlockManager log.

2015-09-24 Thread Ranga Swamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905875#comment-14905875
 ] 

Ranga Swamy commented on HDFS-9121:
---

Hi Rushabh S Shah,

Thanks for your comment; I attached a patch addressing it.

> Remove unnecessary "+" symbol from BlockManager log.
> -
>
> Key: HDFS-9121
> URL: https://issues.apache.org/jira/browse/HDFS-9121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Ranga Swamy
>Assignee: Ranga Swamy
>Priority: Minor
> Attachments: HDFS-9121.01.patch, HDFS-9121.patch
>
>
> Remove unnecessary "+" symbol from BlockManager log.
> {code}
> 2015-08-18 15:34:14,016 | INFO | IPC Server handler 12 on 25000 | BLOCK* 
> processOverReplicatedBlock: Postponing processing of over-replicated 
> blk_1075396202_1655682 since storage + 
> [DISK]DS-41c1b969-a3f9-48ff-8c76-6fea0152950c:NORMAL:160.149.0.113:25009datanode
>  160.149.0.113:25009 does not yet have up-to-date block information. | 
> BlockManager.java:2906
> {code}
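A stray "+" like the one in the log above typically comes from a string concatenation where the operator was left inside the message literal. A minimal sketch of the bug and its fix (hypothetical method and variable names, not the actual BlockManager code):

```java
public class LogConcat {
  // Buggy form: the "+" before the storage ID was left inside the message
  // text, so it is printed literally ("... since storage + [DISK]DS-...").
  static String buggy(String block, String storage) {
    return "Postponing processing of over-replicated " + block
        + " since storage + " + storage;  // bug: literal "+" in the text
  }

  // Fixed form: drop the literal "+" so only the concatenated value appears.
  static String fixed(String block, String storage) {
    return "Postponing processing of over-replicated " + block
        + " since storage " + storage;
  }

  public static void main(String[] args) {
    System.out.println(buggy("blk_1", "DS-1"));
    System.out.println(fixed("blk_1", "DS-1"));
  }
}
```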



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9128) TestWebHdfsFileContextMainOperations and TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows.

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905873#comment-14905873
 ] 

Hudson commented on HDFS-9128:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2376 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2376/])
HDFS-9128. TestWebHdfsFileContextMainOperations and 
TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows. 
Contributed by Chris Nauroth. (wheat9: rev 
06d1c9033effcd2b1ea54e87229d5478d85732ca)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestWebHdfsFileContextMainOperations.java


> TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on 
> Windows.
> 
>
> Key: HDFS-9128
> URL: https://issues.apache.org/jira/browse/HDFS-9128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9128.001.patch
>
>
> These tests do not override the default behavior of using the local file 
> system test working directory to construct test paths.  These paths will 
> contain the ':' character on Windows due to the drive spec.  HDFS rejects the 
> ':' character as invalid.
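The failure mode can be illustrated with a tiny check (a sketch with a hypothetical helper, not the actual HDFS validation code): a Windows working directory such as `C:\hdfs-test` yields a path component containing ':', which HDFS path validation rejects.

```java
public class DriveSpecPath {
  // HDFS forbids ':' (and '/') inside a single path component, so a Windows
  // drive spec like "C:" cannot appear verbatim in an HDFS test path.
  static boolean isValidHdfsComponent(String component) {
    return !component.contains(":") && !component.contains("/");
  }

  public static void main(String[] args) {
    System.out.println(isValidHdfsComponent("C:"));        // drive spec: rejected
    System.out.println(isValidHdfsComponent("hdfs-test")); // plain name: accepted
  }
}
```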





[jira] [Updated] (HDFS-9126) namenode crash in fsimage download/transfer

2015-09-24 Thread zengyongping (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zengyongping updated HDFS-9126:
---
Description: 
In our production Hadoop cluster, when the active NameNode downloads/transfers the 
fsimage from the standby NameNode, the ZKFC health monitor sometimes hits a socket 
timeout while checking the NameNode. ZKFC then judges the active NameNode to be in 
state SERVICE_NOT_RESPONDING, triggers an HA NameNode failover, and fences the old 
active NameNode.
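One possible mitigation (an assumption on my side, not part of this report) is to raise the ZKFC health-monitor RPC timeout, whose default of 45000 ms matches the timeout seen in the log below, e.g. in core-site.xml:

```xml
<property>
  <name>ha.health-monitor.rpc-timeout.ms</name>
  <value>180000</value>
</property>
```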

zkfc logs:
2015-09-24 11:44:44,739 WARN org.apache.hadoop.ha.HealthMonitor: 
Transport-level exception trying to monitor health of NameNode at 
hostname1/192.168.10.11:8020: Call From hostname1/192.168.10.11 to 
hostname1:8020 failed on socket timeout exception: 
java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/192.168.10.11:22614 remote=hostname1/192.168.10.11:8020]; For more 
details see:  http://wiki.apache.org/hadoop/SocketTimeout
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.HealthMonitor: Entering state 
SERVICE_NOT_RESPONDING
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Local 
service NameNode at hostname1/192.168.10.11:8020 entered state: 
SERVICE_NOT_RESPONDING
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: 
Quitting master election for NameNode at hostname1/192.168.10.11:8020 and 
marking that fencing is necessary
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Yielding from election
2015-09-24 11:44:44,761 INFO org.apache.zookeeper.ZooKeeper: Session: 
0x54d81348fe503e3 closed
2015-09-24 11:44:44,761 WARN org.apache.hadoop.ha.ActiveStandbyElector: 
Ignoring stale result from old client with sessionId 0x54d81348fe503e3
2015-09-24 11:44:44,764 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down

namenode logs:
2015-09-24 11:43:34,074 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
192.168.10.12
2015-09-24 11:43:34,074 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Rolling edit logs
2015-09-24 11:43:34,075 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Ending log segment 2317430129
2015-09-24 11:43:34,253 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 272988 Total time for transactions(ms): 5502 Number of 
transactions batched in Syncs: 146274 Number of syncs: 32375 SyncTimes(ms): 
274465 319599
2015-09-24 11:43:46,005 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
Rescanning after 3 milliseconds
2015-09-24 11:44:21,054 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
PendingReplicationMonitor timed out blk_1185804191_112164210
2015-09-24 11:44:36,076 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
file 
/software/data/hadoop-data/hdfs/namenode/current/edits_inprogress_02317430129
 -> 
/software/data/hadoop-data/hdfs/namenode/current/edits_02317430129-02317703116
2015-09-24 11:44:36,077 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 2317703117
2015-09-24 11:45:38,008 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 1 Total time for transactions(ms): 0 Number of 
transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 61585
2015-09-24 11:45:38,009 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 222.88s 
at 63510.29 KB/s
2015-09-24 11:45:38,009 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file 
fsimage.ckpt_02317430128 size 14495092105 bytes.
2015-09-24 11:45:38,416 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 
192.168.10.13:8485 failed to write txns 2317703117-2317703117. Will try to 
write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 44 is 
less than the last promised epoch 45
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:442)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:342)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:148)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at 

[jira] [Commented] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-09-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905907#comment-14905907
 ] 

Zhe Zhang commented on HDFS-9079:
-

Thanks Jing and Walter for the feedback.

Sorry I didn't make it clear. This change is indeed intended to be made on top 
of HDFS-9040. Since the HDFS-9040 patch is not finalized, I just took the relevant 
parts from it when creating this proof-of-concept patch. I plan to rebase it after 
we commit HDFS-9040.

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079.00.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above we can preallocate GS when NN creates a new striped 
> block group ({{FSN#createNewBlock}}). For each new striped block group we can 
> reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can 
> be saved. If more than {{NUM_PARITY_BLOCKS}} errors have happened we 
> shouldn't try to further recover anyway.
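The reservation idea above can be sketched as follows (hypothetical class and method names, not the attached patch): the NN reserves NUM_PARITY_BLOCKS generation stamps when the block group is created, and a failed streamer consumes the next reserved GS locally instead of performing steps 1-3 against the NN.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class GsReservation {
  static final int NUM_PARITY_BLOCKS = 3; // e.g. RS(6,3) layout

  private final Deque<Long> reserved = new ArrayDeque<>();

  // On block-group creation the NN hands out NUM_PARITY_BLOCKS consecutive
  // generation stamps in addition to the initial one.
  GsReservation(long initialGs) {
    for (int i = 1; i <= NUM_PARITY_BLOCKS; i++) {
      reserved.add(initialGs + i);
    }
  }

  // A failed streamer bumps to the next reserved GS without an NN round trip.
  // Returns null once more than NUM_PARITY_BLOCKS errors occurred, at which
  // point recovery should not be attempted anyway.
  Long nextGsOnError() {
    return reserved.poll();
  }

  public static void main(String[] args) {
    GsReservation r = new GsReservation(100L);
    System.out.println(r.nextGsOnError()); // first reserved GS
  }
}
```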





[jira] [Updated] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-09-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9079:

Attachment: (was: HDFS-9079.00.patch)

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above we can preallocate GS when NN creates a new striped 
> block group ({{FSN#createNewBlock}}). For each new striped block group we can 
> reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can 
> be saved. If more than {{NUM_PARITY_BLOCKS}} errors have happened we 
> shouldn't try to further recover anyway.





[jira] [Commented] (HDFS-9131) Move hadoop-hdfs-client related config keys from DFSConfigKeys to HdfsClientConfigKeys

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905941#comment-14905941
 ] 

Haohui Mai commented on HDFS-9131:
--

+1. Ran all the failed unit tests, all of them passed locally. Committing.

> Move hadoop-hdfs-client related config keys from DFSConfigKeys to 
> HdfsClientConfigKeys
> --
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.
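The deprecation pattern described above might look like this (an illustrative key name, not one of the actual moved keys): the old constant in hadoop-hdfs forwards to the new home in hadoop-hdfs-client, so dependent code keeps compiling but sees a deprecation warning.

```java
// New home of the key, in the hadoop-hdfs-client module.
class HdfsClientConfigKeys {
  static final String DFS_CLIENT_EXAMPLE_KEY = "dfs.client.example.key";
}

// Old location in hadoop-hdfs: kept @Deprecated for a while, forwarding to
// the new constant, so existing references are not silently broken.
public class DFSConfigKeys {
  @Deprecated
  public static final String DFS_CLIENT_EXAMPLE_KEY =
      HdfsClientConfigKeys.DFS_CLIENT_EXAMPLE_KEY;

  public static void main(String[] args) {
    System.out.println(DFS_CLIENT_EXAMPLE_KEY);
  }
}
```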





[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905958#comment-14905958
 ] 

Rakesh R commented on HDFS-7529:


Thanks [~wheat9] for the advice. Attached another patch including the changes:
bq. Remove the assertion on holding the FSNamesystem lock
Done.
bq. Introduce a new getter method in FSDirectory to get the 
KeyProviderCryptoExtension object, or just move it inside FSDirectory.
Done. Added the FSDirectory#getProvider() method.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.





[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906048#comment-14906048
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #433 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/433/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.





[jira] [Commented] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906070#comment-14906070
 ] 

Hadoop QA commented on HDFS-9079:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 28s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 16s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 45s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 10s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 54s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 29s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 40s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  74m 27s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 36s | Tests passed in 
hadoop-hdfs-client. |
| | | 127m 48s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | 
hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancer |
| Timed out tests | org.apache.hadoop.hdfs.TestDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762065/HDFS-9079-HDFS-7285.00.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c09dc25 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12657/console |


This message was automatically generated.

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above we can preallocate GS when NN creates a new striped 
> block group ({{FSN#createNewBlock}}). For each new striped block group we can 
> reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can 
> be saved. If more than {{NUM_PARITY_BLOCKS}} errors have happened we 
> shouldn't try to further recover anyway.





[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906173#comment-14906173
 ] 

Hudson commented on HDFS-9131:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1173 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1173/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.





[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906174#comment-14906174
 ] 

Hudson commented on HDFS-9130:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1173 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1173/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogRace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInFileContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInDFS.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/TestFuseDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 

[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906260#comment-14906260
 ] 

Surendra Singh Lilhore commented on HDFS-9076:
--

The failed test cases are unrelated.

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
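The requested change would log the file's full path instead of the numeric inode id, so the operator can tell which file failed to close. A hedged sketch of the message construction (the `src` parameter stands in for however DFSOutputStream exposes its path; it is not the actual patch code):

```java
import java.io.IOException;

public class CloseLogging {
  // Builds the error message with the full path (src) rather than the
  // inode id, which is opaque to an operator reading the client log.
  static String buildMessage(boolean abort, String src, IOException ie) {
    return "Failed to " + (abort ? "abort" : "close") + " file: " + src
        + " due to " + ie.getClass().getSimpleName();
  }

  public static void main(String[] args) {
    System.out.println(buildMessage(false, "/user/foo/part-0000",
        new IOException("lease expired")));
  }
}
```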





[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906195#comment-14906195
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2378 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2378/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInFileContext.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/TestFuseDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java

[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906194#comment-14906194
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2378 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2378/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.
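The move-then-deprecate pattern described above can be sketched as follows. The class and key names here are illustrative stand-ins, not the exact Hadoop sources:

```java
public class DeprecatedKeySketch {

  // New home of a client-side key (stands in for HdfsClientConfigKeys).
  interface ClientKeys {
    String RETRY_POLICY_ENABLED_KEY = "dfs.client.retry.policy.enabled";
  }

  // Old location keeps a deprecated alias (stands in for DFSConfigKeys),
  // so dependent code still compiles while seeing a deprecation warning.
  interface OldKeys {
    @Deprecated
    String RETRY_POLICY_ENABLED_KEY = ClientKeys.RETRY_POLICY_ENABLED_KEY;
  }

  public static void main(String[] args) {
    // Old and new references resolve to the same value.
    System.out.println(
        OldKeys.RETRY_POLICY_ENABLED_KEY.equals(ClientKeys.RETRY_POLICY_ENABLED_KEY));
  }
}
```

Dependent code keeps working through the old name for a release or two, and the compiler's deprecation warning points users at the new location before the alias is deleted.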



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906120#comment-14906120
 ] 

Hadoop QA commented on HDFS-9131:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 47s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 32s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 29s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 59s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 194m 25s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 246m 21s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.namenode.TestFSNamesystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762059/HDFS-9131.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 06d1c90 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12655/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12655/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12655/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12655/console |


This message was automatically generated.

> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906204#comment-14906204
 ] 

Hadoop QA commented on HDFS-7529:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 21s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 17s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 26s | The applied patch generated  1 
new checkstyle issues (total was 354, now 354). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 10s | Tests failed in hadoop-hdfs. |
| | | 209m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762073/HDFS-7529-005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ead1b9e |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12658/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12658/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12658/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12658/console |


This message was automatically generated.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate the encryption zone related implementation 
> into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8972) EINVAL Invalid argument when RAM_DISK usage 90%+

2015-09-24 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N reassigned HDFS-8972:
--

Assignee: Jagadesh Kiran N

> EINVAL Invalid argument when RAM_DISK usage 90%+
> 
>
> Key: HDFS-8972
> URL: https://issues.apache.org/jira/browse/HDFS-8972
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xu Chen
>Assignee: Jagadesh Kiran N
>Priority: Critical
>
> The directory uses the LAZY_PERSIST policy, and the "df" command shows the 
> backing tmpfs at >= 90% usage. When a Spark, Hive, or MapReduce application 
> runs, the Datanode logs the following exception:
> {code}
> 2015-08-26 17:37:34,123 WARN org.apache.hadoop.io.ReadaheadPool: Failed 
> readahead on null
> EINVAL: Invalid argument
> at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native 
> Method)
> at 
> org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
> at 
> org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
> at 
> org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The application also runs about 25% slower than when the exception does not 
> occur.
> Regards
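The WARN above is non-fatal: ReadaheadPool logs the failed posix_fadvise and keeps serving later requests. A minimal plain-Java sketch of that warn-and-continue pattern (this is an analogy, not the actual NativeIO/ReadaheadPool code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadaheadSketch {
  // Counts requests that completed, whether they succeeded or failed.
  static final AtomicInteger completed = new AtomicInteger();

  // Submits a readahead-style request; a failure is logged, not propagated,
  // mirroring how a native EINVAL becomes a WARN instead of killing the pool.
  static void submit(ExecutorService pool, boolean fail) {
    pool.execute(() -> {
      try {
        if (fail) {
          throw new RuntimeException("EINVAL: Invalid argument");
        }
      } catch (RuntimeException e) {
        System.err.println("WARN Failed readahead: " + e.getMessage());
      } finally {
        completed.incrementAndGet();
      }
    });
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    submit(pool, true);   // simulated fadvise failure
    submit(pool, false);  // normal request
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println("completed=" + completed.get());
  }
}
```

The pool stays healthy after the failing request, which matches the report: the job keeps running, only more slowly while the warnings repeat.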



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9134) Move LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants to HdfsConstants

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905935#comment-14905935
 ] 

Haohui Mai commented on HDFS-9134:
--

{code}
+  @Deprecated
+  long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
+  @Deprecated
+  long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD;
{code}

These two constants are mainly used on the server side. It's better not to 
deprecate them but just leave them as copies of the values from 
{{HdfsConstants}}.

+1 once addressed.
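Under that suggestion, the server-side interface would simply re-expose the values without {{@Deprecated}}. A rough sketch, using the 1-minute/1-hour values traditionally used for the lease limits (class layout and values are assumptions for illustration, not copied from the patch):

```java
public class LeaseConstantsSketch {

  // Client-visible constants (stands in for HdfsConstants).
  interface HdfsConstants {
    long LEASE_SOFTLIMIT_PERIOD = 60 * 1000L;       // 1 minute
    long LEASE_HARDLIMIT_PERIOD = 60 * 60 * 1000L;  // 1 hour
  }

  // Server-side interface keeps the same names as plain, non-deprecated
  // copies of the client-side values (stands in for HdfsServerConstants).
  interface HdfsServerConstants {
    long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
    long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD;
  }

  public static void main(String[] args) {
    // Server code can keep using its own name; both resolve to one value.
    System.out.println(HdfsServerConstants.LEASE_SOFTLIMIT_PERIOD
        == HdfsConstants.LEASE_SOFTLIMIT_PERIOD);
  }
}
```

Server code is unaffected by the move and sees no deprecation warnings, while the single source of truth lives on the client side.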

> Move LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants to 
> HdfsConstants
> ---
>
> Key: HDFS-9134
> URL: https://issues.apache.org/jira/browse/HDFS-9134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9134.000.patch
>
>
> As these two constants are used by {{DFSClient}}, which is to be moved to the 
> {{hadoop-hdfs-client}} module (see [HDFS-8053]), they should be moved to that 
> module as well. A good place is {{HdfsConstants}}, which contains both server 
> and client side constants.
> This jira tracks the effort of moving the {{LEASE\_SOFTLIMIT\_PERIOD}} and 
> {{LEASE\_HARDLIMIT\_PERIOD}} constants from the server side class 
> {{HdfsServerConstants}} to the client side class {{HdfsConstants}}. We'd better 
> mark these keys as _@Deprecated_ for a while before totally deleting them, in 
> case we break dependent code without being aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905955#comment-14905955
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8511 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8511/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7529:
---
Attachment: HDFS-7529-005.patch

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate the encryption zone related implementation 
> into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8739) Move DFSClient to client implementation package

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905870#comment-14905870
 ] 

Hadoop QA commented on HDFS-8739:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 69 new or modified test files. |
| {color:green}+1{color} | javac |   8m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 10s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 55s | The applied patch generated  
123 new checkstyle issues (total was 637, now 648). |
| {color:red}-1{color} | whitespace |   0m 18s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 38s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 31s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 171m  1s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 14s | Tests failed in 
hadoop-hdfs-nfs. |
| | | 223m 43s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
| Failed build | hadoop-hdfs-nfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762038/HDFS-8739-003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1f707ec |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-nfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12650/console |


This message was automatically generated.

> Move DFSClient to client implementation package
> ---
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9126) namenode crash in fsimage download/transfer

2015-09-24 Thread zengyongping (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zengyongping updated HDFS-9126:
---
Environment: 
OS:Centos 6.5(final)
Apache Hadoop:2.6.0
namenode ha base 5 journalnodes

  was:
OS:Centos 6.5(final)
Hadoop:2.6.0
namenode ha base 5 journalnodes


> namenode crash in fsimage download/transfer
> ---
>
> Key: HDFS-9126
> URL: https://issues.apache.org/jira/browse/HDFS-9126
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: OS:Centos 6.5(final)
> Apache Hadoop:2.6.0
> namenode ha base 5 journalnodes
>Reporter: zengyongping
>Priority: Critical
>
> In our production Hadoop cluster, when the active namenode downloads/transfers 
> the fsimage from the standby namenode, the zkfc health monitor of the NameNode 
> sometimes hits a socket timeout. The zkfc then judges the active namenode to 
> be in the SERVICE_NOT_RESPONDING state, an HA failover happens, and the old 
> active namenode is fenced.
> zkfc logs:
> 2015-09-24 11:44:44,739 WARN org.apache.hadoop.ha.HealthMonitor: 
> Transport-level exception trying to monitor health of NameNode at 
> hostname1/192.168.10.11:8020: Call From hostname1/192.168.10.11 to 
> hostname1:8020 failed on socket timeout exception: 
> java.net.SocketTimeoutException: 45000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/192.168.10.11:22614 remote=hostname1/192.168.10.11:8020]; For more 
> details see:  http://wiki.apache.org/hadoop/SocketTimeout
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.HealthMonitor: Entering 
> state SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Local 
> service NameNode at hostname1/192.168.10.11:8020 entered state: 
> SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: 
> Quitting master election for NameNode at hostname1/192.168.10.11:8020 and 
> marking that fencing is necessary
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
> Yielding from election
> 2015-09-24 11:44:44,761 INFO org.apache.zookeeper.ZooKeeper: Session: 
> 0x54d81348fe503e3 closed
> 2015-09-24 11:44:44,761 WARN org.apache.hadoop.ha.ActiveStandbyElector: 
> Ignoring stale result from old client with sessionId 0x54d81348fe503e3
> 2015-09-24 11:44:44,764 INFO org.apache.zookeeper.ClientCnxn: EventThread 
> shut down
> namenode logs:
> 2015-09-24 11:43:34,074 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
> 192.168.10.12
> 2015-09-24 11:43:34,074 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
> 2015-09-24 11:43:34,075 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 
> 2317430129
> 2015-09-24 11:43:34,253 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 
> 272988 Total time for transactions(ms): 5502 Number of transactions batched 
> in Syncs: 146274 Number of syncs: 32375 SyncTimes(ms): 274465 319599
> 2015-09-24 11:43:46,005 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
> Rescanning after 3 milliseconds
> 2015-09-24 11:44:21,054 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> PendingReplicationMonitor timed out blk_1185804191_112164210
> 2015-09-24 11:44:36,076 INFO 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
> file 
> /software/data/hadoop-data/hdfs/namenode/current/edits_inprogress_02317430129
>  -> 
> /software/data/hadoop-data/hdfs/namenode/current/edits_02317430129-02317703116
> 2015-09-24 11:44:36,077 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 
> 2317703117
> 2015-09-24 11:45:38,008 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 1 
> Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 
> Number of syncs: 0 SyncTimes(ms): 0 61585
> 2015-09-24 11:45:38,009 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 222.88s 
> at 63510.29 KB/s
> 2015-09-24 11:45:38,009 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file 
> fsimage.ckpt_02317430128 size 14495092105 bytes.
> 2015-09-24 11:45:38,416 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 
> 192.168.10.13:8485 failed to write txns 2317703117-2317703117. Will try to 
> write to this JN again after the next log roll.
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 44 is 
> less than the last promised epoch 45
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
> at 
> 
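The "IPC's epoch 44 is less than the last promised epoch 45" rejection at the end of the log is the quorum journal's fencing check: each journal node remembers the highest epoch it has promised to a writer and refuses writes from any older epoch, so the fenced old active namenode can no longer commit edits. A toy sketch of that check (an analogy, not the actual Journal.java code):

```java
public class EpochFenceSketch {

  // A journal node remembers the highest epoch it has promised.
  static class Journal {
    private long lastPromisedEpoch = 0;

    void promise(long epoch) {
      if (epoch > lastPromisedEpoch) {
        lastPromisedEpoch = epoch;
      }
    }

    // Mirrors the checkRequest behavior: reject writers with a stale epoch.
    boolean writeTxns(long writerEpoch) {
      if (writerEpoch < lastPromisedEpoch) {
        System.err.println("IPC's epoch " + writerEpoch
            + " is less than the last promised epoch " + lastPromisedEpoch);
        return false;
      }
      return true;
    }
  }

  public static void main(String[] args) {
    Journal jn = new Journal();
    jn.promise(44);                        // old active namenode
    jn.promise(45);                        // new active after the failover
    System.out.println(jn.writeTxns(44));  // old writer is fenced out
    System.out.println(jn.writeTxns(45));  // new writer is accepted
  }
}
```

This is why the log shows the old namenode failing to write txns after the failover: the journal quorum has already promised a newer epoch to the new active.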

[jira] [Updated] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9131:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905944#comment-14905944
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8510 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8510/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 

[jira] [Commented] (HDFS-9128) TestWebHdfsFileContextMainOperations and TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows.

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905965#comment-14905965
 ] 

Hudson commented on HDFS-9128:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #411 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/411/])
HDFS-9128. TestWebHdfsFileContextMainOperations and 
TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows. 
Contributed by Chris Nauroth. (wheat9: rev 
06d1c9033effcd2b1ea54e87229d5478d85732ca)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestWebHdfsFileContextMainOperations.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on 
> Windows.
> 
>
> Key: HDFS-9128
> URL: https://issues.apache.org/jira/browse/HDFS-9128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9128.001.patch
>
>
> These tests do not override the default behavior of using the local file 
> system test working directory to construct test paths.  These paths will 
> contain the ':' character on Windows due to the drive spec.  HDFS rejects the 
> ':' character as invalid.
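The failure mode above can be illustrated with a minimal sketch. The helper name and the validation rule shown here are illustrative assumptions, not the actual HDFS path-checking code; HDFS rejects ':' in path components, and a Windows working directory leaks a drive spec such as "C:" into the first component:

```java
public class WindowsPathCheck {
    // Illustrative check: an HDFS path component may not contain ':' or '/'.
    // This is a sketch of the rule, not the real DFS validation routine.
    static boolean isValidHdfsComponent(String component) {
        return !component.contains(":") && !component.contains("/");
    }

    public static void main(String[] args) {
        // Unix-style working directory: every component is acceptable.
        System.out.println(isValidHdfsComponent("test-dir"));
        // Windows drive spec leaks a ':' into the first component.
        System.out.println(isValidHdfsComponent("C:"));
    }
}
```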



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9132) Pass genstamp to ReplicaAccessorBuilder

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906011#comment-14906011
 ] 

Yi Liu commented on HDFS-9132:
--

Thanks Colin for the work.

One question: is this newly added {{ReplicaAccessor}} targeted at finalized 
replicas? If so, why does it need a genstamp? And if the genstamp mismatches, 
does the read fail?

> Pass genstamp to ReplicaAccessorBuilder
> ---
>
> Key: HDFS-9132
> URL: https://issues.apache.org/jira/browse/HDFS-9132
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9132.001.patch
>
>
> We should pass the desired genstamp of the block we want to read to 
> ExternalReplicaBuilder.





[jira] [Commented] (HDFS-8859) Improve DataNode ReplicaMap memory footprint to save about 45%

2015-09-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906077#comment-14906077
 ] 

Uma Maheswara Rao G commented on HDFS-8859:
---

Hi Yi, thanks for the nice work. I have spent some time reviewing the patch, 
and it looks almost good.
Please fix the following test nit:
{code}
for (int i = 0; i < length; i++) {
  while (keys.contains(k = random.nextLong()));
  elements[i] = new TestElement(k, random.nextLong());
}
{code}
You should add each new key to {{keys}} after finding it; otherwise there is no 
point in having the {{while}} check here.
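The fix the review asks for can be sketched as follows. The standalone class, the {{uniqueRandomKeys}} method, and returning a {{long[]}} instead of {{TestElement}}s are illustrative assumptions; the {{keys}}/{{random}} names follow the snippet above:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UniqueKeys {
    // Generate `length` distinct random keys. The keys.add(k) call is the
    // point of the review comment: without it, the contains() check in the
    // while loop never guards against anything.
    public static long[] uniqueRandomKeys(int length, long seed) {
        Random random = new Random(seed);
        Set<Long> keys = new HashSet<>();
        long[] result = new long[length];
        for (int i = 0; i < length; i++) {
            long k;
            // Retry until the key is new...
            while (keys.contains(k = random.nextLong()));
            // ...then record it so later iterations see it.
            keys.add(k);
            result[i] = k;
        }
        return result;
    }

    public static void main(String[] args) {
        long[] ks = uniqueRandomKeys(1000, 42L);
        Set<Long> seen = new HashSet<>();
        for (long k : ks) {
            seen.add(k);
        }
        // All generated keys are distinct by construction.
        System.out.println(seen.size() == ks.length);
    }
}
```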

> Improve DataNode ReplicaMap memory footprint to save about 45%
> --
>
> Key: HDFS-8859
> URL: https://issues.apache.org/jira/browse/HDFS-8859
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8859.001.patch, HDFS-8859.002.patch, 
> HDFS-8859.003.patch, HDFS-8859.004.patch
>
>
> By using the following approach we can save about *45%* of the memory footprint for 
> each block replica in DataNode memory (this JIRA only talks about the *ReplicaMap* 
> in the DataNode); the details are:
> In ReplicaMap, 
> {code}
> private final Map<String, Map<Long, ReplicaInfo>> map =
> new HashMap<String, Map<Long, ReplicaInfo>>();
> {code}
> Currently we use a HashMap {{Map<Long, ReplicaInfo>}} to store the replicas 
> in memory.  The key is the block id of the block replica, which is already 
> included in {{ReplicaInfo}}, so this memory can be saved.  Also, each HashMap 
> Entry has an object overhead.  We can implement a lightweight Set similar 
> to {{LightWeightGSet}}, but not of a fixed size ({{LightWeightGSet}} uses a 
> fixed size for its entries array, usually a big value; an example is 
> {{BlocksMap}}. This avoids full GC since there is no need to resize). We 
> should also be able to get an element through its key.
> The following is a comparison of the memory footprint if we implement a lightweight 
> set as described:
> We can save:
> {noformat}
> SIZE (bytes)   ITEM
> 20The Key: Long (12 bytes object overhead + 8 
> bytes long)
> 12HashMap Entry object overhead
> 4  reference to the key in Entry
> 4  reference to the value in Entry
> 4  hash in Entry
> {noformat}
> Total:  -44 bytes
> We need to add:
> {noformat}
> SIZE (bytes)   ITEM
> 4 a reference to next element in ReplicaInfo
> {noformat}
> Total:  +4 bytes
> So in total we can save 40 bytes for each block replica.
> And currently one finalized replica needs around 46 bytes (notice: we ignore 
> memory alignment here).
> We can save 1 - (4 + 46) / (44 + 46) = *45%* of the memory for each block 
> replica in the DataNode.
> 
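The intrusive, resizable set sketched in the description could look roughly like this. All class and method names here are illustrative, not the actual HDFS classes, and the sketch assumes duplicate ids are never inserted:

```java
// Each element stores its own "next" reference, so the set needs only a
// bucket array: no per-entry wrapper object and no boxed Long key.
public class LightweightLongSet<E extends LightweightLongSet.Element> {
    public static abstract class Element {
        Element next;                 // replaces the HashMap.Entry object
        public abstract long getId(); // the key lives inside the element
    }

    private Element[] buckets = new Element[16];
    private int size;

    private int index(long id, int cap) {
        return (int) (id & (cap - 1)); // cap is always a power of two
    }

    // Assumes e.getId() is not already present.
    public void put(E e) {
        if (size >= buckets.length * 3 / 4) {
            resize();
        }
        int i = index(e.getId(), buckets.length);
        e.next = buckets[i];
        buckets[i] = e;
        size++;
    }

    @SuppressWarnings("unchecked")
    public E get(long id) {
        for (Element c = buckets[index(id, buckets.length)]; c != null; c = c.next) {
            if (c.getId() == id) {
                return (E) c;
            }
        }
        return null;
    }

    // Grow incrementally instead of preallocating one huge fixed array
    // (the difference from LightWeightGSet noted in the description).
    private void resize() {
        Element[] old = buckets;
        buckets = new Element[old.length * 2];
        for (Element head : old) {
            while (head != null) {
                Element n = head.next;
                int i = index(head.getId(), buckets.length);
                head.next = buckets[i];
                buckets[i] = head;
                head = n;
            }
        }
    }

    public int size() {
        return size;
    }
}
```

Because the element itself carries both the key (its id) and the next reference, the per-replica overhead drops to the single extra reference counted in the description.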





[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906088#comment-14906088
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #412 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/412/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/TestFuseDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 

[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906087#comment-14906087
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #412 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/412/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This JIRA tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from the {{hadoop-hdfs}} 
> module to the {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.
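The deprecation pattern described above can be sketched like this. The key name "dfs.client.retry.max.attempts" and both class shapes are illustrative stand-ins for the real {{DFSConfigKeys}} / {{HdfsClientConfigKeys}} constants:

```java
// Client-side keys move to the client module...
class HdfsClientConfigKeys {
    static final String DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY =
        "dfs.client.retry.max.attempts";
}

// ...while the old holder keeps a deprecated alias pointing at the new
// constant, so dependent code still compiles (with a warning) instead of
// breaking when the old constant is eventually removed.
class DFSConfigKeys {
    @Deprecated
    static final String DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY =
        HdfsClientConfigKeys.DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY;

    public static void main(String[] args) {
        // Both names resolve to the same key string.
        System.out.println(DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY
            .equals(HdfsClientConfigKeys.DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY));
    }
}
```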





[jira] [Updated] (HDFS-9126) namenode crash in fsimage download/transfer

2015-09-24 Thread zengyongping (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zengyongping updated HDFS-9126:
---
Description: 
In our production Hadoop cluster, when the active NameNode begins to 
download/transfer the fsimage from the standby NameNode, the ZKFC's health 
monitoring of the NameNode sometimes hits a socket timeout. The ZKFC then 
judges the active NameNode status to be SERVICE_NOT_RESPONDING, a NameNode HA 
failover happens, and the old active NameNode is fenced.

zkfc logs:
2015-09-24 11:44:44,739 WARN org.apache.hadoop.ha.HealthMonitor: 
Transport-level exception trying to monitor health of NameNode at 
hostname1/192.168.10.11:8020: Call From hostname1/192.168.10.11 to 
hostname1:8020 failed on socket timeout exception: 
java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/192.168.10.11:22614 remote=hostname1/192.168.10.11:8020]; For more 
details see:  http://wiki.apache.org/hadoop/SocketTimeout
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.HealthMonitor: Entering state 
SERVICE_NOT_RESPONDING
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Local 
service NameNode at hostname1/192.168.10.11:8020 entered state: 
SERVICE_NOT_RESPONDING
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: 
Quitting master election for NameNode at hostname1/192.168.10.11:8020 and 
marking that fencing is necessary
2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Yielding from election
2015-09-24 11:44:44,761 INFO org.apache.zookeeper.ZooKeeper: Session: 
0x54d81348fe503e3 closed
2015-09-24 11:44:44,761 WARN org.apache.hadoop.ha.ActiveStandbyElector: 
Ignoring stale result from old client with sessionId 0x54d81348fe503e3
2015-09-24 11:44:44,764 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down

namenode logs:
2015-09-24 11:43:34,074 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
192.168.10.12
2015-09-24 11:43:34,074 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Rolling edit logs
2015-09-24 11:43:34,075 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Ending log segment 2317430129
2015-09-24 11:43:34,253 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 272988 Total time for transactions(ms): 5502 Number of 
transactions batched in Syncs: 146274 Number of syncs: 32375 SyncTimes(ms): 
274465 319599
2015-09-24 11:43:46,005 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
Rescanning after 3 milliseconds
2015-09-24 11:44:21,054 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
PendingReplicationMonitor timed out blk_1185804191_112164210
2015-09-24 11:44:36,076 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
file 
/software/data/hadoop-data/hdfs/namenode/current/edits_inprogress_02317430129
 -> 
/software/data/hadoop-data/hdfs/namenode/current/edits_02317430129-02317703116
2015-09-24 11:44:36,077 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 2317703117
2015-09-24 11:45:38,008 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 1 Total time for transactions(ms): 0 Number of 
transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 61585
2015-09-24 11:45:38,009 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 222.88s 
at 63510.29 KB/s
2015-09-24 11:45:38,009 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file 
fsimage.ckpt_02317430128 size 14495092105 bytes.
2015-09-24 11:45:38,416 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 
192.168.10.13:8485 failed to write txns 2317703117-2317703117. Will try to 
write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 44 is 
less than the last promised epoch 45
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:442)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:342)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:148)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at 

[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905924#comment-14905924
 ] 

Hadoop QA commented on HDFS-9040:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 38s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 11s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  0s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m 35s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 38s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  63m 26s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 20s | Tests failed in 
hadoop-hdfs-client. |
| | | 110m 34s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.TestModTime |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
|   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestSetrepDecreasing |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestHostsFiles |
|   | hadoop.hdfs.server.datanode.TestTransferRbw |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestClusterId |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.hdfs.server.namenode.TestLeaseManager |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestMalformedURLs |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFSInputChecker |
|   | 

[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905948#comment-14905948
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #432 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/432/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/TestFuseDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInFileContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 

[jira] [Updated] (HDFS-8739) Move DFSClient to client implementation package

2015-09-24 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8739:
-
Parent Issue: HDFS-6200  (was: HDFS-8048)

> Move DFSClient to client implementation package
> ---
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>






[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to set log4j or slf4j logger's level in unit tests

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905914#comment-14905914
 ] 

Haohui Mai commented on HDFS-9130:
--

+1. I'll commit it shortly.

> Use GenericTestUtils#setLogLevel to set log4j or slf4j logger's level in unit 
> tests
> ---
>
> Key: HDFS-9130
> URL: https://issues.apache.org/jira/browse/HDFS-9130
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9130.000.patch
>
>
> Currently we use both commons-logging and slf4j in {{hadoop-hdfs}}. To change 
> the logger level for dumping verbose debug information, there are many unit 
> tests that just cast the LOG object to a {{Log4JLogger}} and call 
> {{setLevel}} on it, e.g. in {{org.apache.hadoop.fs.TestFcHdfsSetUMask}}:
> {code}
> ((Log4JLogger)FileSystem.LOG).getLogger().setLevel(Level.DEBUG);
> {code}
> One problem of this hard-coded approach is that we need to update the casting 
> code in test if we replace the log4j logger with slf4j. For example, as we're 
> creating a separate jar for hdfs-client (see [HDFS-6200]) which uses only 
> slf4j, we need to replace the log4j logger with slf4j logger, and to update 
> the casting for changing logger's level in unit tests as well.
> Instead, we can use the {{GenericTestUtils#setLogLevel}} (brought in 
> [HADOOP-11430]) method for both types of logger. This method internally 
> figures out the right thing to do based on the log / logger type. e.g.
> {code}
> GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9130:
-
Summary: Use GenericTestUtils#setLogLevel to the logging level  (was: Use 
GenericTestUtils#setLogLevel to set log4j or slf4j logger's level in unit tests)

> Use GenericTestUtils#setLogLevel to the logging level
> -
>
> Key: HDFS-9130
> URL: https://issues.apache.org/jira/browse/HDFS-9130
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9130.000.patch
>
>
> Currently we use both commons-logging and slf4j in {{hadoop-hdfs}}. To change 
> the logger level for dumping verbose debug information, there are many unit 
> tests that just cast the LOG object to a {{Log4JLogger}} and call 
> {{setLevel}} on it, e.g. in {{org.apache.hadoop.fs.TestFcHdfsSetUMask}}:
> {code}
> ((Log4JLogger)FileSystem.LOG).getLogger().setLevel(Level.DEBUG);
> {code}
> One problem of this hard-coded approach is that we need to update the casting 
> code in test if we replace the log4j logger with slf4j. For example, as we're 
> creating a separate jar for hdfs-client (see [HDFS-6200]) which uses only 
> slf4j, we need to replace the log4j logger with slf4j logger, and to update 
> the casting for changing logger's level in unit tests as well.
> Instead, we can use the {{GenericTestUtils#setLogLevel}} (brought in 
> [HADOOP-11430]) method for both types of logger. This method internally 
> figures out the right thing to do based on the log / logger type. e.g.
> {code}
> GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8739) Move DFSClient to hadoop-hdfs-client

2015-09-24 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8739:
-
Summary: Move DFSClient to hadoop-hdfs-client  (was: Move DFSClient to 
client implementation package)

> Move DFSClient to hadoop-hdfs-client
> 
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8739) Move DFSClient to client implementation package

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905975#comment-14905975
 ] 

Yi Liu commented on HDFS-8739:
--

Yes, I think we can move it into {{hadoop-hdfs-client}} directly, keeping it in 
the same package, once most of the other removal work is done; moving 
{{DistributedFileSystem}} also depends on {{DFSClient}}.   That way it will not 
break compatibility. 

> Move DFSClient to client implementation package
> ---
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8739) Move DFSClient to client implementation package

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905937#comment-14905937
 ] 

Haohui Mai commented on HDFS-8739:
--

Please do not move the class, at least in branch-2. Though it's a private class, 
many downstream projects access {{DFSClient}} directly. Moving it to a separate 
package will break them.

> Move DFSClient to client implementation package
> ---
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9131:
-
Summary: Move config keys used by hdfs-client to HdfsClientConfigKeys  
(was: Move hadoop-hdfs-client related config keys from DFSConfigKeys to 
HdfsClientConfigKeys)

> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to the {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.
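The deprecation step described in this issue can be sketched as follows. This is
a hypothetical stand-in, not the actual Hadoop sources: the nested classes,
constant name, and key string are illustrative only. The idea is that the old
constant in {{DFSConfigKeys}} stays as a {{@Deprecated}} alias of the new
client-side constant, so downstream code keeps compiling (with a warning) until
it migrates.

```java
// Hypothetical sketch of the @Deprecated alias pattern described above.
public class ConfigKeysSketch {
    // Stand-in for the new client-side key holder.
    static class HdfsClientConfigKeys {
        static final String DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY =
            "dfs.client.retry.max.attempts";
    }

    // Stand-in for the old server-side key holder: the key survives as a
    // deprecated alias pointing at the new constant.
    static class DFSConfigKeys {
        @Deprecated
        static final String DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY =
            HdfsClientConfigKeys.DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY;
    }

    public static void main(String[] args) {
        // Old and new constants resolve to the same configuration name.
        System.out.println(DFSConfigKeys.DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY
            .equals(HdfsClientConfigKeys.DFS_CLIENT_RETRY_MAX_ATTEMPTS_KEY));
    }
}
```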



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905998#comment-14905998
 ] 

Hadoop QA commented on HDFS-9076:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 16s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 170m  9s | Tests failed in hadoop-hdfs. |
| | | 216m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762050/HDFS-9076.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 06d1c90 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12652/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12652/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12652/console |


This message was automatically generated.

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
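A minimal sketch of the direction this issue suggests: build the error message
from the file's path rather than its inode id. The helper method and the path
below are hypothetical; the real fix depends on how {{DFSOutputStream}} exposes
its source path.

```java
// Hypothetical sketch: format the close/abort failure message around the
// file path (readable) instead of the inode id (opaque to operators).
public class CloseMessage {
    // 'src' stands in for the stream's file path; hypothetical helper.
    static String message(boolean abort, String src) {
        return "Failed to " + (abort ? "abort" : "close") + " file: " + src;
    }

    public static void main(String[] args) {
        System.out.println(message(false, "/user/alice/data.txt"));
    }
}
```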



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9133) ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906000#comment-14906000
 ] 

Yi Liu commented on HDFS-9133:
--

Thanks Colin for the work. 

*1.*
This comment is about {{ExternalBlockReader}} in general, not only this patch:
there is a mismatch between {{ExternalBlockReader}} and the other BlockReaders.

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For the other block readers, {{length}} means the number of bytes to read, but 
in {{ExternalBlockReader}} it is treated as the "visible length", which assumes 
it equals the block length. The calculation of {{skip}} and {{available}} in 
{{ExternalBlockReader}} indicates it assumes the block length, or at least 
assumes {{startOffset}} is 0.

*2.*
I think when we reach the end of the block we should also update {{pos}}; 
otherwise the calculation in {{available()}} is wrong.
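The two review points can be illustrated with a toy reader (not the real
{{ExternalBlockReader}}; all names here are simplified stand-ins): {{read}}
returns -1 at EOF, and {{pos}} advances on every successful read so that
{{available()}} = visibleLength - pos stays consistent, including on the final
partial read.

```java
// Toy sketch of the EOF and pos-tracking behavior discussed above.
public class ReaderSketch {
    private final long visibleLength;
    private long pos;

    ReaderSketch(long visibleLength, long startOffset) {
        this.visibleLength = visibleLength;
        this.pos = startOffset;
    }

    // Returns bytes "read", clamped to what remains; -1 once at EOF.
    int read(int wanted) {
        long remaining = visibleLength - pos;
        if (remaining <= 0) {
            return -1;            // signal EOF with -1, not 0
        }
        int n = (int) Math.min(wanted, remaining);
        pos += n;                 // advance pos even on the final partial read
        return n;
    }

    long available() {
        return visibleLength - pos;
    }

    public static void main(String[] args) {
        ReaderSketch r = new ReaderSketch(10, 0);
        System.out.println(r.read(6));     // 6
        System.out.println(r.read(6));     // 4 (clamped to remaining)
        System.out.println(r.read(6));     // -1 (EOF)
        System.out.println(r.available()); // 0 (pos was kept up to date)
    }
}
```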

> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF
> -
>
> Key: HDFS-9133
> URL: https://issues.apache.org/jira/browse/HDFS-9133
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9133.001.patch
>
>
> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at 
> EOF, as per the JavaDoc in BlockReader.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8298) HA: NameNode should not shut down completely without quorum, doesn't recover from temporary network outages

2015-09-24 Thread Sandeep Nemuri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906044#comment-14906044
 ] 

Sandeep Nemuri commented on HDFS-8298:
--

Hi,

We are also facing the same issue in CDH4.3 every week.

Thanks
Sandeep

> HA: NameNode should not shut down completely without quorum, doesn't recover 
> from temporary network outages
> ---
>
> Key: HDFS-8298
> URL: https://issues.apache.org/jira/browse/HDFS-8298
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, HDFS, namenode, qjm
>Affects Versions: 2.6.0
> Environment: HDP 2.2
>Reporter: Hari Sekhon
>
> In an HDFS HA setup, if there is a temporary problem contacting the journal 
> nodes (e.g. a network interruption), the NameNode shuts down entirely, when it 
> should instead go into a standby mode so that it can stay online and retry to 
> achieve quorum later.
> If both NameNodes shut themselves off like this then even after the temporary 
> network outage is resolved, the entire cluster remains offline indefinitely 
> until operator intervention, whereas it could have self-repaired after 
> re-contacting the journalnodes and re-achieving quorum.
> {code}2015-04-15 15:59:26,900 FATAL namenode.FSEditLog 
> (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for 
> required journal (JournalAndStre
> am(mgr=QJM to [:8485, :8485, :8485], stream=QuorumOutputStream 
> starting at txid 54270281))
> java.io.IOException: Interrupted waiting 2ms for a quorum of nodes to 
> respond.
> at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:134)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:388)
> at java.lang.Thread.run(Thread.java:745)
> 2015-04-15 15:59:26,901 WARN  client.QuorumJournalManager 
> (QuorumOutputStream.java:abort(72)) - Aborting QuorumOutputStream starting at 
> txid 54270281
> 2015-04-15 15:59:26,904 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status 1
> 2015-04-15 15:59:27,001 INFO  namenode.NameNode (StringUtils.java:run(659)) - 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NameNode at /
> /{code}
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906042#comment-14906042
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #440 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/440/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from {{hadoop-hdfs}} 
> to the {{hadoop-hdfs-client}} module.
> We'd better mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before totally deleting them, in case we break dependent code without being 
> aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9121) Remove unnecessary "+" sysmbol from BlockManager log.

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906083#comment-14906083
 ] 

Hadoop QA commented on HDFS-9121:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  1s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 19s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 28s | Tests failed in hadoop-hdfs. |
| | | 207m 38s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.server.mover.TestStorageMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762061/HDFS-9121.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 06d1c90 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12656/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12656/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12656/console |


This message was automatically generated.

> Remove unnecessary "+" symbol from BlockManager log.
> -
>
> Key: HDFS-9121
> URL: https://issues.apache.org/jira/browse/HDFS-9121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Ranga Swamy
>Assignee: Ranga Swamy
>Priority: Minor
> Attachments: HDFS-9121.01.patch, HDFS-9121.patch
>
>
> Remove unnecessary "+" symbol from BlockManager log.
> {code}
> 2015-08-18 15:34:14,016 | INFO | IPC Server handler 12 on 25000 | BLOCK* 
> processOverReplicatedBlock: Postponing processing of over-replicated 
> blk_1075396202_1655682 since storage + 
> [DISK]DS-41c1b969-a3f9-48ff-8c76-6fea0152950c:NORMAL:160.149.0.113:25009datanode
>  160.149.0.113:25009 does not yet have up-to-date block information. | 
> BlockManager.java:2906
> {code}
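The stray "+" is a classic string-concatenation slip: a literal plus sign left
inside the message text right next to the concatenation operator. A minimal
reconstruction (the message text is abbreviated, not the actual BlockManager
statement):

```java
// Demonstrates the bug: a literal "+ " left inside the format string shows
// up in the log output next to the storage id.
public class PlusBug {
    public static void main(String[] args) {
        String storage = "[DISK]DS-41c1...";
        // Buggy: the '+' inside the quotes is printed literally.
        String buggy = "Postponing processing since storage + " + storage;
        // Fixed: only the concatenation operator remains.
        String fixed = "Postponing processing since storage " + storage;
        System.out.println(buggy);
        System.out.println(fixed);
    }
}
```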



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9126) namenode crash in fsimage download/transfer

2015-09-24 Thread zengyongping (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905891#comment-14905891
 ] 

zengyongping commented on HDFS-9126:


Hi, I have attached some logs. Can you help me fix the problem? Thank you.

> namenode crash in fsimage download/transfer
> ---
>
> Key: HDFS-9126
> URL: https://issues.apache.org/jira/browse/HDFS-9126
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: OS:Centos 6.5(final)
> Hadoop:2.6.0
> namenode ha base 5 journalnodes
>Reporter: zengyongping
>Priority: Critical
>
> In our production Hadoop cluster, when the active namenode begins to 
> download/transfer the fsimage from the standby namenode, the zkfc monitoring 
> the health of the NameNode sometimes hits a socket timeout, judges the active 
> namenode's status to be SERVICE_NOT_RESPONDING, triggers a namenode HA 
> failover, and fences the old active namenode.
> zkfc logs:
> 2015-09-24 11:44:44,739 WARN org.apache.hadoop.ha.HealthMonitor: 
> Transport-level exception trying to monitor health of NameNode at 
> hostname1/192.168.10.11:8020: Call From hostname1/192.168.10.11 to 
> hostname1:8020 failed on socket timeout exception: 
> java.net.SocketTimeoutException: 45000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/192.168.10.11:22614 remote=hostname1/192.168.10.11:8020]; For more 
> details see:  http://wiki.apache.org/hadoop/SocketTimeout
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.HealthMonitor: Entering 
> state SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Local 
> service NameNode at hostname1/192.168.10.11:8020 entered state: 
> SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: 
> Quitting master election for NameNode at hostname1/192.168.10.11:8020 and 
> marking that fencing is necessary
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
> Yielding from election
> 2015-09-24 11:44:44,761 INFO org.apache.zookeeper.ZooKeeper: Session: 
> 0x54d81348fe503e3 closed
> 2015-09-24 11:44:44,761 WARN org.apache.hadoop.ha.ActiveStandbyElector: 
> Ignoring stale result from old client with sessionId 0x54d81348fe503e3
> 2015-09-24 11:44:44,764 INFO org.apache.zookeeper.ClientCnxn: EventThread 
> shut down
> namenode logs:
> 2015-09-24 11:43:34,074 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
> 192.168.10.12
> 2015-09-24 11:43:34,074 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
> 2015-09-24 11:43:34,075 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 
> 2317430129
> 2015-09-24 11:43:34,253 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 
> 272988 Total time for transactions(ms): 5502 Number of transactions batched 
> in Syncs: 146274 Number of syncs: 32375 SyncTimes(ms): 274465 319599
> 2015-09-24 11:43:46,005 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
> Rescanning after 3 milliseconds
> 2015-09-24 11:44:21,054 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> PendingReplicationMonitor timed out blk_1185804191_112164210
> 2015-09-24 11:44:36,076 INFO 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
> file 
> /software/data/hadoop-data/hdfs/namenode/current/edits_inprogress_02317430129
>  -> 
> /software/data/hadoop-data/hdfs/namenode/current/edits_02317430129-02317703116
> 2015-09-24 11:44:36,077 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 
> 2317703117
> 2015-09-24 11:45:38,008 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 1 
> Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 
> Number of syncs: 0 SyncTimes(ms): 0 61585
> 2015-09-24 11:45:38,009 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 222.88s 
> at 63510.29 KB/s
> 2015-09-24 11:45:38,009 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file 
> fsimage.ckpt_02317430128 size 14495092105 bytes.
> 2015-09-24 11:45:38,416 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 
> 192.168.10.13:8485 failed to write txns 2317703117-2317703117. Will try to 
> write to this JN again after the next log roll.
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 44 is 
> less than the last promised epoch 45
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:442)
> 

[jira] [Commented] (HDFS-9128) TestWebHdfsFileContextMainOperations and TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows.

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905900#comment-14905900
 ] 

Hudson commented on HDFS-9128:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2349 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2349/])
HDFS-9128. TestWebHdfsFileContextMainOperations and 
TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on Windows. 
Contributed by Chris Nauroth. (wheat9: rev 
06d1c9033effcd2b1ea54e87229d5478d85732ca)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestWebHdfsFileContextMainOperations.java


> TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations fail due to invalid HDFS path on 
> Windows.
> 
>
> Key: HDFS-9128
> URL: https://issues.apache.org/jira/browse/HDFS-9128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9128.001.patch
>
>
> These tests do not override the default behavior of using the local file 
> system test working directory to construct test paths.  These paths will 
> contain the ':' character on Windows due to the drive spec.  HDFS rejects the 
> ':' character as invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-09-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9079:

Attachment: HDFS-9079-HDFS-7285.00.patch

[~walter.k.su] I named the patch incorrectly; it is based on the HDFS-7285 
branch. As mentioned above, it contains part of HDFS-9040. The current patch 
just illustrates the overall idea; more details will be added.

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above we can preallocate GS when NN creates a new striped 
> block group ({{FSN#createNewBlock}}). For each new striped block group we can 
> reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can 
> be saved. If more than {{NUM_PARITY_BLOCKS}} errors have happened we 
> shouldn't try to further recover anyway.
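The reservation idea can be sketched with a toy generation-stamp pool. All names
here are hypothetical: the point is only that a streamer hitting an error takes
the next reserved GS locally instead of making a NameNode round trip, and that
recovery stops once more than {{NUM_PARITY_BLOCKS}} stamps are used.

```java
// Toy sketch of preallocating NUM_PARITY_BLOCKS generation stamps per
// striped block group, as described above.
public class GsPool {
    static final int NUM_PARITY_BLOCKS = 3;
    private long nextGs;
    private int remaining = NUM_PARITY_BLOCKS;

    GsPool(long firstReservedGs) {
        this.nextGs = firstReservedGs;
    }

    // Next reserved GS, or -1 once the reservation is exhausted (more than
    // NUM_PARITY_BLOCKS failures means we shouldn't keep recovering anyway).
    long nextGenerationStamp() {
        if (remaining == 0) {
            return -1;
        }
        remaining--;
        return nextGs++;
    }

    public static void main(String[] args) {
        GsPool pool = new GsPool(100);
        System.out.println(pool.nextGenerationStamp()); // 100
        System.out.println(pool.nextGenerationStamp()); // 101
        System.out.println(pool.nextGenerationStamp()); // 102
        System.out.println(pool.nextGenerationStamp()); // -1: exhausted
    }
}
```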



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9130:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Use GenericTestUtils#setLogLevel to the logging level
> -
>
> Key: HDFS-9130
> URL: https://issues.apache.org/jira/browse/HDFS-9130
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9130.000.patch
>
>
> Currently we use both commons-logging and slf4j in {{hadoop-hdfs}}. To change 
> the logger level for dumping verbose debug information, there are many unit 
> tests that just cast the LOG object to a {{Log4JLogger}} and call 
> {{setLevel}} on it, e.g. in {{org.apache.hadoop.fs.TestFcHdfsSetUMask}}:
> {code}
> ((Log4JLogger)FileSystem.LOG).getLogger().setLevel(Level.DEBUG);
> {code}
> One problem of this hard-coded approach is that we need to update the casting 
> code in test if we replace the log4j logger with slf4j. For example, as we're 
> creating a separate jar for hdfs-client (see [HDFS-6200]) which uses only 
> slf4j, we need to replace the log4j logger with slf4j logger, and to update 
> the casting for changing logger's level in unit tests as well.
> Instead, we can use the {{GenericTestUtils#setLogLevel}} (brought in 
> [HADOOP-11430]) method for both types of logger. This method internally 
> figures out the right thing to do based on the log / logger type. e.g.
> {code}
> GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
> {code}
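The "figures out the right thing to do based on the log / logger type" idea can be sketched as follows. This is a hedged, simplified illustration using java.util.logging as a stand-in backend (the real Hadoop helper targets commons-logging/log4j and slf4j, and differs in detail): the helper accepts the logger as a generic object and dispatches on its runtime type, so test code no longer hard-codes a cast to a concrete logger class.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class Main {
    // Accepts the logger as Object, mirroring how a shared helper can hide
    // the concrete backend type from test code; unknown types fail loudly.
    static void setLogLevel(Object log, Level level) {
        if (log instanceof Logger) {
            ((Logger) log).setLevel(level);
        } else {
            throw new IllegalArgumentException(
                "unsupported logger type: " + log.getClass().getName());
        }
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("example");
        setLogLevel(log, Level.FINE);
        System.out.println(log.getLevel());  // FINE
    }
}
```

Swapping the backing logger implementation then only requires another `instanceof` branch in the helper, not changes across every test.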





[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906020#comment-14906020
 ] 

Yi Liu commented on HDFS-9137:
--

Thanks Uma, I agree that this is a potential risk that can cause a deadlock.
It's dangerous to take a lock inside a {{toString()}} implementation; this
method gets called in places you may not notice.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user calling 
> refreshVolumes at the time of DN registration with the NN, but it seems the 
> issue can happen.
> Reason for deadlock:
> 1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
> DN lock, then bpos lock.
> 2) The BPOfferService#registrationSucceeded call takes the writeLock on bpos 
> and calls dn.bpRegistrationSucceeded, which is again a synchronized call on 
> the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of 
> the DN lock; that call may not really be needed inside the DN lock.
> Thoughts?
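A minimal Java sketch of the lock-ordering fix described above (class and method names are illustrative stand-ins, not the actual Hadoop code): volume reconfiguration still runs under the DN monitor, but the block-report trigger runs only after that monitor is released, so no thread ever holds the DN lock while waiting for the bpos lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Main {
    // Stands in for the bpos read/write lock described in the report.
    static final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();

    static class DataNode {
        // Volume reconfiguration still happens under the DN monitor...
        synchronized void refreshVolumesLocked() { /* reconfigure volumes */ }

        // ...but the block-report trigger runs after the monitor is
        // released, so this thread never holds DN-then-bpos.
        void refreshVolumes() {
            refreshVolumesLocked();
            triggerBlockReport();
        }

        void triggerBlockReport() {
            bposLock.readLock().lock();
            try { /* e.g. log bpos.toString(), schedule the report */ }
            finally { bposLock.readLock().unlock(); }
        }

        synchronized void bpRegistrationSucceeded() { /* update registration */ }
    }

    // The other side still takes bpos-then-DN, which is now harmless
    // because the DN->bpos edge is gone from the lock graph.
    static void registrationSucceeded(DataNode dn) {
        bposLock.writeLock().lock();
        try { dn.bpRegistrationSucceeded(); }
        finally { bposLock.writeLock().unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        DataNode dn = new DataNode();
        Thread t1 = new Thread(dn::refreshVolumes);
        Thread t2 = new Thread(() -> registrationSucceeded(dn));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("both paths completed without deadlock");
    }
}
```

With the original ordering (triggerBlockReport invoked while still inside the synchronized refreshVolumes), the two threads could each hold one lock and wait on the other.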





[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906043#comment-14906043
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #440 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/440/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/TestFuseDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogRace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java
* 

[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906274#comment-14906274
 ] 

Vinayakumar B commented on HDFS-9076:
-

Test failures are unrelated.
As this is a log-message-only change, no test is needed.

+1.

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
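The intent of the fix can be sketched like this (a hypothetical illustration; the message format and the `src` parameter name are assumptions, not the actual patch): log the file path the stream was opened for instead of the bare inode id, which users cannot easily map back to a file.

```java
public class Main {
    // The original message printed the inode id; logging the file path
    // (already held by the output stream) is far easier to act on.
    static String closeMessage(boolean abort, String src) {
        return "Failed to " + (abort ? "abort" : "close") + " file: " + src;
    }

    public static void main(String[] args) {
        System.out.println(closeMessage(false, "/user/foo/data.txt"));
        // Failed to close file: /user/foo/data.txt
    }
}
```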





[jira] [Updated] (HDFS-7284) Add more debug info to BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas

2015-09-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-7284:
--
Status: Patch Available  (was: Open)

> Add more debug info to 
> BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas
> -
>
> Key: HDFS-7284
> URL: https://issues.apache.org/jira/browse/HDFS-7284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.5.1
>Reporter: Hu Liu,
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-7284.001.patch
>
>
> When I was looking at a replica loss issue, I got the following info from 
> the log:
> {code}
> 2014-10-13 01:54:53,104 INFO BlockStateChange: BLOCK* Removing stale replica 
> from location x.x.x.x
> {code}
> I could only tell that a replica was removed, but not which block or its 
> timestamp; I need the id and timestamp of the block from the log file.
> So it's better to add more info, including the block id and timestamp, to 
> the code snippet:
> {code}
> for (ReplicaUnderConstruction r : replicas) {
>   if (genStamp != r.getGenerationStamp()) {
> r.getExpectedLocation().removeBlock(this);
> NameNode.blockStateChangeLog.info("BLOCK* Removing stale replica "
> + "from location: " + r.getExpectedLocation());
>   }
> }
> {code}
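A sketch of what a richer log line could look like (the exact wording and parameters are illustrative assumptions, not the attached patch): name the block and both generation stamps so a stale replica can be tied back to a specific block when reading the log.

```java
public class Main {
    // Includes the block id plus the expected and actual generation stamps,
    // so the "Removing stale replica" event is traceable to one block.
    static String staleReplicaMessage(long blockId, long expectedGs,
                                      long replicaGs, String location) {
        return "BLOCK* Removing stale replica blk_" + blockId
            + " (replica genstamp=" + replicaGs
            + ", expected genstamp=" + expectedGs
            + ") from location: " + location;
    }

    public static void main(String[] args) {
        System.out.println(
            staleReplicaMessage(1073741825L, 1002L, 1001L, "x.x.x.x:50010"));
    }
}
```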





[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906347#comment-14906347
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #434 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/434/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}





[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906280#comment-14906280
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8512 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8512/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}





[jira] [Updated] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9076:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~surendrasingh]

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}





[jira] [Commented] (HDFS-8673) HDFS reports file already exists if there is a file/dir name end with ._COPYING_

2015-09-24 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906296#comment-14906296
 ] 

J.Andreina commented on HDFS-8673:
--

[~airbots], thanks for the patch.

The patch looks good to me, but I feel the current behavior would be broken.

*Before patch:* FileAlreadyExistsException is thrown only if a *directory* 
named "File1._COPYING_" exists, but not for a file with the same name.
*After patch:* The exception is thrown for both a file and a directory.

*This is a kind of behavior change for end-users.*

*For example:*
Say a user is writing a 10GB file ("File1"), the write operation is 
interrupted, and the "File1._COPYING_" file is retained in the filesystem; 
the user might then retry writing the same "File1".
The write will succeed, as we overwrite "File1._COPYING_".

*But after the patch:*
The user's retry of writing "File1" will fail with an exception that 
"File1._COPYING_" already exists.

[~ste...@apache.org], can you provide your feedback on this / correct me if I 
am wrong.

> HDFS reports file already exists if there is a file/dir name end with 
> ._COPYING_
> 
>
> Key: HDFS-8673
> URL: https://issues.apache.org/jira/browse/HDFS-8673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-8673.000-WIP.patch, HDFS-8673.000.patch, 
> HDFS-8673.001.patch, HDFS-8673.002.patch, HDFS-8673.003.patch, 
> HDFS-8673.003.patch
>
>
> Because the CLI uses CommandWithDestination.java, which adds "._COPYING_" 
> to the tail of the file name when it does the copy, it will cause problems 
> if there is a file/dir already called *._COPYING_ on HDFS.
> For file:
> -bash-4.1$ hadoop fs -put 5M /user/occ/
> -bash-4.1$ hadoop fs -mv /user/occ/5M /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup5242880 2015-06-26 05:16 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup 131072 2015-06-26 05:19 /user/occ/5M
> For dir:
> -bash-4.1$ hadoop fs -mkdir /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> drwxr-xr-x   - occ supergroup  0 2015-06-26 05:24 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> put: /user/occ/5M._COPYING_ already exists as a directory
> -bash-4.1$ hadoop fs -ls /user/occ/
> (/user/occ/5M._COPYING_ is gone)





[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906327#comment-14906327
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2379 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2379/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}





[jira] [Comment Edited] (HDFS-9133) ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906000#comment-14906000
 ] 

Yi Liu edited comment on HDFS-9133 at 9/24/15 1:19 PM:
---

Thanks Colin for the work. 

*1.*
This comment is for {{ExternalBlockReader}}, not only for the patch:
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
length. The calculation of {{skip}} and {{available}} in 
{{ExternalBlockReader}} indicates it assumes this is the block length, or at 
least that {{startOffset}} is 0.


was (Author: hitliuyi):
Thanks Colin for the work. 

*1.*
This comment is for {{ExternalBlockReader}}, not only for the patch:
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
length. The calculation of {{skip}} and {{available}} in 
{{ExternalBlockReader}} indicates it assumes this is the block length, or at 
least that {{startOffset}} is 0.

*2.*
I think if we reach the end of the block, we should also update {{pos}}; 
otherwise the calculation in {{available()}} is wrong.

> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF
> -
>
> Key: HDFS-9133
> URL: https://issues.apache.org/jira/browse/HDFS-9133
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9133.001.patch
>
>
> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at 
> EOF, as per the JavaDoc in BlockReader.java.





[jira] [Commented] (HDFS-9064) NN old UI (block_info_xml) not available in 2.7.x

2015-09-24 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906334#comment-14906334
 ] 

Kanaka Kumar Avvaru commented on HDFS-9064:
---

I would like to correct my previous analysis with the following observations.

1) I have observed that {{block_info_xml.jsp}} is accessible without SPNEGO 
authentication in 2.6.0, even with secure mode ON.

2) Alternatively, a URL like {{nnip:port/fsck?blockId=blk_}} (SPNEGO 
authentication is required) is also available in place of 
{{block_info_xml.jsp}} from remote clients, though some information such as 
owner & permission details is not available in the response (alternatively, 
one can use the webhdfs REST API for such details).

I think providing {{block_info_xml}} was a security breach, and so it was 
removed. Let's get confirmation from the implementors of HDFS-6252 here.

[~wheat9], can you give your opinion on this JIRA? Also, shall we add 
GETBLOCKINFO to webhdfs for all the details provided by {{block_info_xml.jsp}}?

> NN old UI (block_info_xml) not available in 2.7.x
> -
>
> Key: HDFS-9064
> URL: https://issues.apache.org/jira/browse/HDFS-9064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
>
> In 2.6.x hadoop deploys, given a blockId it was very easy to find out the 
> file name and the locations of replicas (also whether they are corrupt or 
> not).
> This was the REST call:
> {noformat}
>  http://:/block_info_xml.jsp?blockId=xxx
> {noformat}
> But this was removed by HDFS-6252 in 2.7 builds.
> Creating this jira to restore that functionality.





[jira] [Comment Edited] (HDFS-9133) ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906000#comment-14906000
 ] 

Yi Liu edited comment on HDFS-9133 at 9/24/15 1:52 PM:
---

Thanks Colin for the work.  The patch itself looks good, +1. 

I have two small comments not related to the patch, but for 
{{ExternalBlockReader}}:

*1.*
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
replica length. The calculation of {{skip}} and {{available}} in 
{{ExternalBlockReader}} indicates it assumes this is the block replica length.

For example, if the client wants to read 5 bytes starting from pos 100, then 
to {{ExternalBlockReader}} the block replica appears to be only 5 bytes long, 
which is not correct.

*2.*
{code}
/**
   * Set the length of the replica which is visible to this client.  If bytes
   * are added later, they will not be visible to the ReplicaAccessor we are
   * building.  In order to see more of the replica, the client must re-open
   * this HDFS file.  The visible length provides an upper bound, but not a
   * lower one.  If the replica is deleted or truncated, fewer bytes may be
   * visible than specified here.
   */
  public abstract ReplicaAccessorBuilder setVisibleLength(long visibleLength);
{code}

Here it says the visible length is only an upper bound; if so, how can we use 
it to calculate {{available()}}?
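To illustrate the concern with a toy calculation (a hypothetical sketch, not the actual reader code; the method shape is an assumption): if the "length" handed to the reader is the number of bytes to read rather than the replica's visible length, the remaining-bytes arithmetic goes wrong for any non-zero startOffset.

```java
public class Main {
    // available() computed the way the comment describes the reader doing
    // it: bytes remaining = visibleLength - pos.
    static long available(long visibleLength, long pos) {
        return visibleLength - pos;
    }

    public static void main(String[] args) {
        long startOffset = 100;   // reader starts at pos 100
        long bytesToRead = 5;     // the caller's "length" argument
        // If bytes-to-read is passed where the visible length is expected,
        // the remaining-bytes result is negative:
        System.out.println(available(bytesToRead, startOffset));  // -95
        // Passing the true visible length (say 200) behaves sensibly:
        System.out.println(available(200, startOffset));  // 100
    }
}
```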


was (Author: hitliuyi):
Thanks Colin for the work. 

*1.*
This comment is for {{ExternalBlockReader}}:
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
replica length. The calculation of {{skip}} and {{available}} in 
{{ExternalBlockReader}} indicates it assumes this is the block replica length.

For example, if the client wants to read 5 bytes starting from pos 100, then 
to {{ExternalBlockReader}} the block replica appears to be only 5 bytes long, 
which is not correct.

*2.*
{code}
/**
   * Set the length of the replica which is visible to this client.  If bytes
   * are added later, they will not be visible to the ReplicaAccessor we are
   * building.  In order to see more of the replica, the client must re-open
   * this HDFS file.  The visible length provides an upper bound, but not a
   * lower one.  If the replica is deleted or truncated, fewer bytes may be
   * visible than specified here.
   */
  public abstract ReplicaAccessorBuilder setVisibleLength(long visibleLength);
{code}

Here it says the visible length is only an upper bound; if so, how can we use 
it to calculate {{available()}}?

> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF
> -
>
> Key: HDFS-9133
> URL: https://issues.apache.org/jira/browse/HDFS-9133
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9133.001.patch
>
>
> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at 
> EOF, as per the JavaDoc in BlockReader.java.





[jira] [Commented] (HDFS-9130) Use GenericTestUtils#setLogLevel to the logging level

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906299#comment-14906299
 ] 

Hudson commented on HDFS-9130:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2351 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/])
HDFS-9130. Use GenericTestUtils#setLogLevel to the logging level. Contributed 
by Mingliang Liu. (wheat9: rev 4893adff19065cd6094dee97862cdca699b131af)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogRace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BenchmarkThroughput.java
* 

[jira] [Commented] (HDFS-9131) Move config keys used by hdfs-client to HdfsClientConfigKeys

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906298#comment-14906298
 ] 

Hudson commented on HDFS-9131:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2351 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/])
HDFS-9131. Move config keys used by hdfs-client to HdfsClientConfigKeys. 
Contributed by Mingliang Liu. (wheat9: rev 
ead1b9e680201e8ad789b55c09b3c993cbf4827e)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestLossyRetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLocalDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java


> Move config keys used by hdfs-client to HdfsClientConfigKeys
> 
>
> Key: HDFS-9131
> URL: https://issues.apache.org/jira/browse/HDFS-9131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9131.000.patch, HDFS-9131.001.patch
>
>
> As we move {{DFSClient}} (see [HDFS-8053]) and {{DistributedFileSystem}} (see 
> [HDFS-8740]) from {{hadoop-hdfs}} to {{hadoop-hdfs-client}}, we need to move 
> the client side config keys from {{DFSConfigKeys}} to 
> {{HdfsClientConfigKeys}} in advance.
> This jira tracks the effort of moving all client-related config keys, which 
> are used by {{DFSClient}} and {{DistributedFileSystem}}, from the 
> {{hadoop-hdfs}} module to {{hadoop-hdfs-client}}.
> We should mark these keys in {{DFSConfigKeys}} as _@Deprecated_ for a while 
> before deleting them entirely, so that we do not break dependent code without 
> being aware of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906302#comment-14906302
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #441 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/441/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
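A minimal sketch of the improvement this jira describes: build the error message from the file's full path rather than the opaque inode id. The {{message}} helper and the example path are illustrative only and are not taken from the actual patch.

```java
public class CloseLogDemo {
    // Illustrative: format the error with the file's full path instead of
    // the inode id, so operators can tell which file failed to close.
    static String message(boolean abort, String src) {
        return "Failed to " + (abort ? "abort" : "close") + " file: " + src;
    }

    public static void main(String[] args) {
        System.out.println(message(false, "/user/alice/data.txt"));
        // -> Failed to close file: /user/alice/data.txt
        System.out.println(message(true, "/user/alice/data.txt"));
        // -> Failed to abort file: /user/alice/data.txt
    }
}
```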



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9133) ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF

2015-09-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906000#comment-14906000
 ] 

Yi Liu edited comment on HDFS-9133 at 9/24/15 1:45 PM:
---

Thanks Colin for the work. 

*1.*
This comment is for {{ExternalBlockReader}}:
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
replica length. The calculations of {{skip}} and {{available}} in 
{{ExternalBlockReader}} both assume it is the block replica length.

For example, if a client wants to read 5 bytes starting from position 100, 
{{ExternalBlockReader}} would treat the block replica as being only 5 bytes 
long, which is not correct.

*2.*
{code}
/**
   * Set the length of the replica which is visible to this client.  If bytes
   * are added later, they will not be visible to the ReplicaAccessor we are
   * building.  In order to see more of the replica, the client must re-open
   * this HDFS file.  The visible length provides an upper bound, but not a
   * lower one.  If the replica is deleted or truncated, fewer bytes may be
   * visible than specified here.
   */
  public abstract ReplicaAccessorBuilder setVisibleLength(long visibleLength);
{code}

Here it says the visible length is an upper bound; if so, how can we use it to 
calculate {{available()}}?
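The semantic mismatch in point 1 can be shown with a little arithmetic. This is an illustrative model, not the actual {{ExternalBlockReader}} code: it assumes {{available()}} is computed as {{visibleLength - pos}}, as the comment describes. If the "number of bytes to read" is passed where the replica length is expected, {{available()}} goes negative as soon as the start offset exceeds the read length.

```java
public class VisibleLengthMismatch {
    // Models the described ExternalBlockReader.available(): visibleLength - pos.
    static long available(long visibleLength, long pos) {
        return visibleLength - pos;
    }

    public static void main(String[] args) {
        long startOffset = 100; // client starts reading at block offset 100
        long length = 5;        // client wants to read only 5 bytes

        // Passing the "bytes to read" as the visible length: negative from
        // the very first read position.
        System.out.println(available(length, startOffset));        // -95

        // Passing the true replica length (say 1000 bytes) behaves sanely.
        System.out.println(available(1000, startOffset));          // 900
    }
}
```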


was (Author: hitliuyi):
Thanks Colin for the work. 

*1.*
This comment is for {{ExternalBlockReader}}, not only for the patch:
There is a mismatch between {{ExternalBlockReader}} and other BlockReaders:

{code}
ExternalBlockReader(ReplicaAccessor accessor, long visibleLength,
  long startOffset) {
this.accessor = accessor;
this.visibleLength = visibleLength;
this.pos = startOffset;
  }
{code}

{code}
return new ExternalBlockReader(accessor, length, startOffset);
{code}

{code}
/**
   * Number of bytes to read.  -1 indicates no limit.
   */
  private long length = -1;

/**
   * The offset within the block to start reading at.
   */
  private long startOffset;
{code}

For other block readers, {{length}} means the number of bytes to read, but in 
{{ExternalBlockReader}} it is treated as the "visible length", i.e. the block 
length. The calculations of {{skip}} and {{available}} in 
{{ExternalBlockReader}} assume it is the block length, or at least that 
{{startOffset}} is 0.

> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at EOF
> -
>
> Key: HDFS-9133
> URL: https://issues.apache.org/jira/browse/HDFS-9133
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9133.001.patch
>
>
> ExternalBlockReader and ReplicaAccessor need to return -1 on read when at 
> EOF, as per the JavaDoc in BlockReader.java.
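The -1-at-EOF contract referenced here is the standard {{java.io.InputStream}} convention, which a plain stream demonstrates. This is a generic illustration of that contract, not code from BlockReader.java:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class EofContract {
    public static void main(String[] args) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(new byte[]{1, 2});
        byte[] buf = new byte[8];
        // First read returns the number of bytes actually read.
        System.out.println(in.read(buf)); // 2
        // At end-of-stream, read() must return -1 rather than 0,
        // which is what this jira requires of ExternalBlockReader.
        System.out.println(in.read(buf)); // -1
    }
}
```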



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-09-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906384#comment-14906384
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7858:
---

Never mind.  Thanks for the response.

> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it is a standby NN, it will respond to the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the Standby is 
> undergoing GC or is busy, the client might not get a response soon enough to 
> try the other NN.
> Proposed Approach to solve this :
> 1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
> which is the active Namenode.
> 2) Subsequent calls, will invoke the previously successful NN.
> 3) On failover of the currently active NN, the remaining NNs will be invoked 
> to decide which is the new active 
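The hedged-RPC idea in step 1 above can be sketched with the JDK's {{ExecutorService.invokeAny}}, which returns the first probe to complete successfully and cancels the rest. This is a simplified illustration under assumed names ({{findActive}}, the probe callables); the actual patch's retry-proxy machinery is more involved.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HedgedProbe {
    // Probe all configured NNs concurrently; the first successful answer wins.
    static String findActive(List<Callable<String>> probes) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(probes.size());
        try {
            // invokeAny returns the result of the first probe that completes
            // without throwing, and cancels the remaining probes.
            return pool.invokeAny(probes);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A standby NN "rejects" the probe; the active NN answers.
        Callable<String> standby = () -> { throw new Exception("standby"); };
        Callable<String> active = () -> "nn2";
        System.out.println(findActive(List.of(standby, active))); // nn2
    }
}
```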



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9120) Metric logging values are truncated in NN Metrics log.

2015-09-24 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906419#comment-14906419
 ] 

Kanaka Kumar Avvaru commented on HDFS-9120:
---

Yes, that is fine [~arpitagarwal]. Let the user configure comma-separated 
prefixes like {{excludekeystartwith = NameNodeInfo:LiveNodes, IPCLoggerChannel}}; 
then NameNodeInfo:LiveNodes and all metrics like IPCLoggerChannel-XXX will be 
ignored.

Also, shall we consider flattening and logging values from the TabularData & 
CompositeData types, which are currently ignored?
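The proposed prefix-based exclusion could work roughly as below. Both the config key {{excludekeystartwith}} and the {{excluded}} helper are hypothetical names from this discussion, not an existing Hadoop API:

```java
public class MetricExcludeFilter {
    // Hypothetical: skip any metric whose key starts with one of the
    // comma-separated prefixes from the proposed config value.
    static boolean excluded(String key, String excludeKeyStartWith) {
        for (String prefix : excludeKeyStartWith.split(",")) {
            if (key.startsWith(prefix.trim())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String conf = "NameNodeInfo:LiveNodes, IPCLoggerChannel";
        System.out.println(excluded("NameNodeInfo:LiveNodes", conf)); // true
        System.out.println(excluded("IPCLoggerChannel-1", conf));     // true
        System.out.println(excluded("NameNodeInfo:DeadNodes", conf)); // false
    }
}
```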

> Metric logging values are truncated in NN Metrics log.
> --
>
> Key: HDFS-9120
> URL: https://issues.apache.org/jira/browse/HDFS-9120
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Reporter: Archana T
>Assignee: Kanaka Kumar Avvaru
>
> In namenode-metrics.log, when a metric name-value pair is longer than 128 
> characters, it is truncated as below --
> Example for the LiveNodes information ---
> vi namenode-metrics.log
> {color:red}
> 2015-09-22 10:34:37,891 
> NameNodeInfo:LiveNodes={"host-10-xx-xxx-88:50076":{"infoAddr":"10.xx.xxx.88:0","infoSecureAddr":"10.xx.xxx.88:52100","xferaddr":"10.xx.xxx.88:50076","l...
> {color}
> Here the complete metric value is not logged; the remaining information is 
> displayed as "...".
> The same happens for other metric values in the NN metrics log, whereas the 
> DN metrics log records complete metric values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906400#comment-14906400
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1174 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1174/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9064) NN old UI (block_info_xml) not available in 2.7.x

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906526#comment-14906526
 ] 

Haohui Mai commented on HDFS-9064:
--

There are two reasons the jsp was removed:

(1) Yes, there are security issues in block_info_xml.jsp.
(2) The output itself is problematic. The information, particularly the 
filename of the block, is inaccurate in the presence of snapshots and truncate.

> NN old UI (block_info_xml) not available in 2.7.x
> -
>
> Key: HDFS-9064
> URL: https://issues.apache.org/jira/browse/HDFS-9064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
>
> In 2.6.x Hadoop deployments, given a blockId it was very easy to find out the 
> file name and the locations of its replicas (and whether they are corrupt or 
> not).
> This was the REST call:
> {noformat}
>  http://:/block_info_xml.jsp?blockId=xxx
> {noformat}
> But this was removed by HDFS-6252 in 2.7 builds.
> Creating this jira to restore that functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906524#comment-14906524
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #413 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/413/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7529:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~rakeshr] for the 
contribution.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: HDFS-8873.007.patch

OK, here's a version reworked to use the StopWatch class.  After a little 
research, I think we're safe to trust that Thread.sleep() won't have spurious 
wake-ups, which makes the StopWatch code simpler than the modulo version.  If 
we want to care about spurious wake-ups, then the modulo code is simpler.

This patch is still failing one of the new tests I just added.  I will have to 
fix that, but I wanted to get this patch posted for review by [~nroberts] this 
morning.  I'll post a fixed patch when I get a chance.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906555#comment-14906555
 ] 

Hudson commented on HDFS-7529:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #442 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/442/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906503#comment-14906503
 ] 

Haohui Mai commented on HDFS-7529:
--

+1. I'll commit it shortly.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906569#comment-14906569
 ] 

Hudson commented on HDFS-7529:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8514 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8514/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906602#comment-14906602
 ] 

Rakesh R commented on HDFS-7529:


Thanks a lot [~wheat9] for the detailed reviews and committing the patch!

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906615#comment-14906615
 ] 

Daniel Templeton commented on HDFS-8873:


bq. Shouldn't the isInterrupted() check throw an InterruptedException?

In patch 8 it does. :)

bq. nit but I find markRunning() and markWaiting() confusing

I think it will be confusing either way.  Maybe rename them logTimeRunning() 
and logTimeWaiting()?

bq. I'm kind of wondering if we should disallow extremely low duty cycles.

I'm kinda caveat emptor on that one. In no case can they shut down the scanning 
completely, but if they want to make it take forever, that's their business. I 
also say that because at this point we don't know what reasonable lower bounds 
are. Maybe after the patch goes in, you can play with it on your system and 
tell us what impact various throttle levels have? We could then follow up with 
a JIRA to update the docs and maybe add bounds checking accordingly.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-09-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-8855:

Attachment: (was: HDFS-8855.005.patch)

> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Attachments: HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states. For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to reach ~25000 active connections and 
> fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: HDFS-8873.009.patch

I renamed the markXing() methods to accumulateTimeXing().  I also fixed some 
javadoc issues I missed last time.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906567#comment-14906567
 ] 

Hudson commented on HDFS-9076:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2352 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2352/])
HDFS-9076. Log full path instead of inodeId in 
DFSClient#closeAllFilesBeingWritten() (Contributed by Surendra Singh Lilhore) 
(vinayakumarb: rev e52bc697f8f9c255dfc4d01b71272931153721c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9076.01.patch, HDFS-9076.02.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7284) Add more debug info to BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906576#comment-14906576
 ] 

Hadoop QA commented on HDFS-7284:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 47s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  7s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 137m 12s | Tests failed in hadoop-hdfs. |
| | | 179m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762053/HDFS-7284.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e52bc69 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12659/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12659/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12659/console |


This message was automatically generated.

> Add more debug info to 
> BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas
> -
>
> Key: HDFS-7284
> URL: https://issues.apache.org/jira/browse/HDFS-7284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.5.1
>Reporter: Hu Liu,
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-7284.001.patch
>
>
> While I was looking at a replica loss issue, I got the following info from 
> the log:
> {code}
> 2014-10-13 01:54:53,104 INFO BlockStateChange: BLOCK* Removing stale replica 
> from location x.x.x.x
> {code}
> I could only tell that a replica was removed, but not which block or its 
> timestamp. I need to know the id and timestamp of the block from the log 
> file.
> So it would be better to add more info, including block id and timestamp, to 
> the code snippet:
> {code}
> for (ReplicaUnderConstruction r : replicas) {
>   if (genStamp != r.getGenerationStamp()) {
> r.getExpectedLocation().removeBlock(this);
> NameNode.blockStateChangeLog.info("BLOCK* Removing stale replica "
> + "from location: " + r.getExpectedLocation());
>   }
> }
> {code}
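The direction of the attached patch can be sketched as follows: include the block id and generation stamp in the message so a specific removal can be correlated from the log alone (the message wording below is an assumption, not the exact patch text):

```java
// Illustrative sketch: a stale-replica message enriched with block id and
// generation stamp, so log readers can tell exactly which replica was removed.
public class StaleReplicaLog {
  static String staleReplicaMessage(long blockId, long genStamp, String location) {
    return "BLOCK* Removing stale replica of blk_" + blockId
        + " (generation stamp " + genStamp + ") from location: " + location;
  }

  public static void main(String[] args) {
    System.out.println(staleReplicaMessage(1075396202L, 1655682L, "x.x.x.x"));
  }
}
```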





[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: HDFS-8873.008.patch

All the tests pass now.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791) for details. 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for full directory listing which translates to 
> 655 seconds) 
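A duty-cycle throttle of the kind proposed here can be sketched as follows (the percentage-based knob and names are assumptions for illustration; the actual patch's configuration keys may differ):

```java
// Illustrative duty-cycle throttle: after scanning for runMillis of wall-clock
// time, sleep long enough that scanning occupies only dutyCyclePercent of the
// total elapsed time, leaving the disks mostly idle for regular I/O.
public class DutyCycleThrottle {
  static long sleepMillisFor(long runMillis, int dutyCyclePercent) {
    if (dutyCyclePercent <= 0 || dutyCyclePercent > 100) {
      throw new IllegalArgumentException("duty cycle must be in (0, 100]");
    }
    // run / (run + sleep) == duty/100  =>  sleep == run * (100 - duty) / duty
    return runMillis * (100 - dutyCyclePercent) / dutyCyclePercent;
  }

  public static void main(String[] args) {
    // At a 20% duty cycle, 200 ms of scanning earns 800 ms of sleep.
    System.out.println(sleepMillisFor(200, 20));
  }
}
```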





[jira] [Commented] (HDFS-8696) Reduce the variances of latency of WebHDFS

2015-09-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906597#comment-14906597
 ] 

Haohui Mai commented on HDFS-8696:
--

{code}
+
+  this.httpServer.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
+  conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_CHANNEL_LOW_WATERMARK,
+  DFSConfigKeys.DFS_WEBHDFS_NETTY_CHANNEL_LOW_WATERMARK_DEFAULT));
+  this.httpServer.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
+  conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_CHANNEL_HIGH_WATERMARK,
+  DFSConfigKeys.DFS_WEBHDFS_NETTY_CHANNEL_HIGH_WATERMARK_DEFAULT));
+
   if (externalHttpChannel == null) {
{code}

I assume that there are copy and paste errors here.
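The pairing presumably intended is HIGH option with HIGH key and LOW option with LOW key (the quoted patch crosses them). A minimal, self-contained sketch of resolving each watermark from its own key (the key names and defaults below are stand-ins, not the real DFSConfigKeys values):

```java
import java.util.Map;

// Illustrative sketch: each watermark is resolved from its *own* config key,
// which is the fix implied for the crossed lookups in the quoted patch.
public class WatermarkConfig {
  static final String LOW_KEY = "dfs.webhdfs.netty.low.watermark";   // stand-in name
  static final String HIGH_KEY = "dfs.webhdfs.netty.high.watermark"; // stand-in name

  // Returns {low, high}; falls back to made-up defaults when a key is absent.
  static int[] resolveWatermarks(Map<String, Integer> conf) {
    int low = conf.getOrDefault(LOW_KEY, 32 * 1024);
    int high = conf.getOrDefault(HIGH_KEY, 64 * 1024);
    if (low > high) {
      throw new IllegalArgumentException("low watermark exceeds high watermark");
    }
    return new int[] {low, high};
  }

  public static void main(String[] args) {
    int[] wm = resolveWatermarks(Map.of(HIGH_KEY, 128 * 1024));
    System.out.println(wm[0] + " " + wm[1]);
  }
}
```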

> Reduce the variances of latency of WebHDFS
> --
>
> Key: HDFS-8696
> URL: https://issues.apache.org/jira/browse/HDFS-8696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-8696.004.patch, HDFS-8696.005.patch, 
> HDFS-8696.006.patch, HDFS-8696.007.patch, HDFS-8696.1.patch, 
> HDFS-8696.2.patch, HDFS-8696.3.patch
>
>
> There is an issue that appears related to the webhdfs server. When making two 
> concurrent requests, the DN will sometimes pause for extended periods (I've 
> seen 1-300 seconds), killing performance and dropping connections. 
> To reproduce: 
> 1. set up a HDFS cluster
> 2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
> the time out to /tmp/times.txt
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root=1";
> done
> {noformat}
> 3. Watch for 1-byte requests that take more than one second:
> tail -F /tmp/times.txt | grep -E "^[^0]"
> 4. After it has had a chance to warm up, start doing large transfers from
> another shell:
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root";
> done
> {noformat}
> It's easy to find after a minute or two that small reads will sometimes
> pause for 1-300 seconds. In some extreme cases, it appears that the
> transfers timeout and the DN drops the connection.





[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906603#comment-14906603
 ] 

Nathan Roberts commented on HDFS-8873:
--

Thanks [~templedf]. I like that the stopwatch class makes this much cleaner. 
A few comments:
- Shouldn't the isInterrupted() check throw an InterruptedException? Otherwise 
won't we just break out of one level? It would be good to test shutdown on an 
actual cluster if possible, because you're exactly right that we could be in 
here a long time, and we should make sure we don't affect shutdown of the 
datanode. This has been a problem in the past and can have a serious impact on 
rolling upgrades.
- A nit, but I find markRunning() and markWaiting() confusing (they seem 
backwards to me because we call markRunning() just before going to sleep).
- I'm kind of wondering if we should disallow extremely low duty cycles. It 
seems like a full scan could take close to 24 hours at the minimum setting; a 
minimum of 20% should keep us within an hour.
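The timing concern above can be checked with simple arithmetic. Assuming roughly 10 ms per seek (which matches the 64K-seeks-to-655-seconds figure in the issue description), a scan that takes about 655 s at full speed stretches to many hours at a 1% duty cycle but stays under an hour at 20%:

```java
// Back-of-the-envelope scan-time estimate under a duty-cycle throttle.
// Assumes ~10 ms per seek, consistent with the 655 s figure for 64K seeks.
public class ScanTimeEstimate {
  static double scanSeconds(long seeks, double seekMillis, double dutyCycle) {
    return seeks * seekMillis / 1000.0 / dutyCycle;
  }

  public static void main(String[] args) {
    System.out.println(scanSeconds(64 * 1024, 10.0, 1.0)); // full speed
    System.out.println(scanSeconds(64 * 1024, 10.0, 0.2)); // 20% duty cycle
  }
}
```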

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791) for details. 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for full directory listing which translates to 
> 655 seconds) 





[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906630#comment-14906630
 ] 

Hudson commented on HDFS-7529:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #436 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/436/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.





[jira] [Updated] (HDFS-8696) Reduce the variances of latency of WebHDFS

2015-09-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-8696:

Attachment: HDFS-8696.008.patch

> Reduce the variances of latency of WebHDFS
> --
>
> Key: HDFS-8696
> URL: https://issues.apache.org/jira/browse/HDFS-8696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-8696.004.patch, HDFS-8696.005.patch, 
> HDFS-8696.006.patch, HDFS-8696.007.patch, HDFS-8696.008.patch, 
> HDFS-8696.1.patch, HDFS-8696.2.patch, HDFS-8696.3.patch
>
>
> There is an issue that appears related to the webhdfs server. When making two 
> concurrent requests, the DN will sometimes pause for extended periods (I've 
> seen 1-300 seconds), killing performance and dropping connections. 
> To reproduce: 
> 1. set up a HDFS cluster
> 2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
> the time out to /tmp/times.txt
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root=1";
> done
> {noformat}
> 3. Watch for 1-byte requests that take more than one second:
> tail -F /tmp/times.txt | grep -E "^[^0]"
> 4. After it has had a chance to warm up, start doing large transfers from
> another shell:
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root";
> done
> {noformat}
> It's easy to find after a minute or two that small reads will sometimes
> pause for 1-300 seconds. In some extreme cases, it appears that the
> transfers timeout and the DN drops the connection.





[jira] [Updated] (HDFS-7767) Use the noredirect flag in WebHDFS to allow web browsers to upload files via the NN UI

2015-09-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7767:
---
Attachment: HDFS-7767.03.patch

This patch contains only the webapp changes needed. Also fixed multiple file 
uploads.

> Use the noredirect flag in WebHDFS to allow web browsers to upload files via 
> the NN UI
> --
>
> Key: HDFS-7767
> URL: https://issues.apache.org/jira/browse/HDFS-7767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7767.01.patch, HDFS-7767.02.patch, 
> HDFS-7767.03.patch
>
>
> This subtask would use the functionality provided in HDFS-7766 to allow files 
> to be uploaded to HDFS via a Web-browser. (These include the changes to the 
> HTML5 and javascript code)





[jira] [Updated] (HDFS-7767) Use the noredirect flag in WebHDFS to allow web browsers to upload files via the NN UI

2015-09-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7767:
---
Status: Patch Available  (was: Open)

> Use the noredirect flag in WebHDFS to allow web browsers to upload files via 
> the NN UI
> --
>
> Key: HDFS-7767
> URL: https://issues.apache.org/jira/browse/HDFS-7767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7767.01.patch, HDFS-7767.02.patch, 
> HDFS-7767.03.patch
>
>
> This subtask would use the functionality provided in HDFS-7766 to allow files 
> to be uploaded to HDFS via a Web-browser. (These include the changes to the 
> HTML5 and javascript code)





[jira] [Commented] (HDFS-9138) TestDatanodeStartupFixesLegacyStorageIDs fails on Windows due to failure to unpack old image tarball that contains hard links

2015-09-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906861#comment-14906861
 ] 

Chris Nauroth commented on HDFS-9138:
-

Similar to HDFS-4732, this can be fixed by avoiding hard links in the 
checked-in tarball and instead using copies of the files.

> TestDatanodeStartupFixesLegacyStorageIDs fails on Windows due to failure to 
> unpack old image tarball that contains hard links
> -
>
> Key: HDFS-9138
> URL: https://issues.apache.org/jira/browse/HDFS-9138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> {{TestDatanodeStartupFixesLegacyStorageIDs#testUpgradeFrom22via26FixesStorageIDs}}
>  uses a checked-in DataNode data directory that contains hard links.  The 
> hard links cannot be handled correctly by the commons-compress library used 
> in the Windows implementation of {{FileUtil#unTar}}.  The result is that the 
> unpacked block files have 0 length, the block files reported to the NameNode 
> are invalid, and therefore the mini-cluster never gets enough good blocks 
> reported to leave safe mode.





[jira] [Commented] (HDFS-9134) Move LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants to HdfsConstants

2015-09-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906715#comment-14906715
 ] 

Mingliang Liu commented on HDFS-9134:
-

Thank you [~wheat9] for your comment. I had a look at the code and yes, those 
two constants are mainly used on the server side (i.e., the NameNode). The v1 
patch removes the {{@Deprecated}} annotation and adds a one-line comment.

> Move LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants to 
> HdfsConstants
> ---
>
> Key: HDFS-9134
> URL: https://issues.apache.org/jira/browse/HDFS-9134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9134.000.patch, HDFS-9134.001.patch
>
>
> As these two constants are used by {{DFSClient}}, which is to be moved to 
> the {{hadoop-hdfs-client}} module (see [HDFS-8053]), we consider that these 
> two constants should be moved to the {{hadoop-hdfs-client}} module as well. A 
> good place is {{HdfsConstants}}, which contains both server- and client-side 
> constants.
> This jira tracks the effort of moving the {{LEASE\_SOFTLIMIT\_PERIOD}} and 
> {{LEASE\_HARDLIMIT\_PERIOD}} constants from the server-side class 
> {{HdfsServerConstants}} to the client-side class {{HdfsConstants}}. We'd 
> better mark these keys as _@Deprecated_ for a while before deleting them 
> entirely, in case we break dependent code without being aware of it.
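The deprecation pattern described above can be sketched like this (the 60-second soft limit and 60-minute hard limit match HDFS's long-standing defaults, but treat the exact values and class layout here as illustrative):

```java
// Illustrative sketch of the move: the client-side class owns the values and
// the server-side class keeps deprecated aliases so existing code still compiles.
public class LeaseLimits {
  // New home, client side (hadoop-hdfs-client).
  public static final class HdfsConstants {
    public static final long LEASE_SOFTLIMIT_PERIOD = 60 * 1000L;      // 60 s
    public static final long LEASE_HARDLIMIT_PERIOD = 60 * 60 * 1000L; // 60 min
  }

  // Old home, server side: deprecated aliases delegating to the new home.
  public static final class HdfsServerConstants {
    @Deprecated
    public static final long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
    @Deprecated
    public static final long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD;
  }
}
```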





[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906743#comment-14906743
 ] 

Hudson commented on HDFS-7529:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1175 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1175/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.





[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906844#comment-14906844
 ] 

Hudson commented on HDFS-7529:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #414 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/414/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.





[jira] [Updated] (HDFS-9112) Haadmin fails if multiple name service IDs are configured

2015-09-24 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9112:
---
Attachment: HDFS-9112.003.patch

> Haadmin fails if multiple name service IDs are configured
> -
>
> Key: HDFS-9112
> URL: https://issues.apache.org/jira/browse/HDFS-9112
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9112.001.patch, HDFS-9112.002.patch, 
> HDFS-9112.003.patch
>
>
> In HDFS-6376 we added a feature for distcp that allows multiple 
> NameService IDs to be specified so that we can copy from two HA-enabled 
> clusters.
> That confuses the haadmin command, since we have a check in 
> DFSUtil#getNamenodeServiceAddr which fails if it finds more than one name in 
> that property.





[jira] [Created] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-24 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-9139:
---

 Summary: Enable parallel JUnit tests for HDFS Pre-commit 
 Key: HDFS-9139
 URL: https://issues.apache.org/jira/browse/HDFS-9139
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Forked from HADOOP-11984. Building on the initial and significant work from 
[~cnauroth], this Jira tracks adding support for parallel test runs in the 
HDFS pre-commit build.





[jira] [Updated] (HDFS-9138) TestDatanodeStartupFixesLegacyStorageIDs fails on Windows due to failure to unpack old image tarball that contains hard links

2015-09-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9138:

Status: Open  (was: Patch Available)

Err... wait... I completely forgot about the work I had done in HDFS-8554.  :-) 
 Let me cancel this patch and rethink for a moment.

> TestDatanodeStartupFixesLegacyStorageIDs fails on Windows due to failure to 
> unpack old image tarball that contains hard links
> -
>
> Key: HDFS-9138
> URL: https://issues.apache.org/jira/browse/HDFS-9138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HDFS-9138.001.patch
>
>
> {{TestDatanodeStartupFixesLegacyStorageIDs#testUpgradeFrom22via26FixesStorageIDs}}
>  uses a checked-in DataNode data directory that contains hard links.  The 
> hard links cannot be handled correctly by the commons-compress library used 
> in the Windows implementation of {{FileUtil#unTar}}.  The result is that the 
> unpacked block files have 0 length, the block files reported to the NameNode 
> are invalid, and therefore the mini-cluster never gets enough good blocks 
> reported to leave safe mode.





[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906719#comment-14906719
 ] 

Hudson commented on HDFS-7529:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2353 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2353/])
HDFS-7529. Consolidate encryption zone related implementation into a single 
class. Contributed by Rakesh R. (wheat9: rev 
71a81b6257c475ad62eb69292a20d45d269c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.





[jira] [Commented] (HDFS-9064) NN old UI (block_info_xml) not available in 2.7.x

2015-09-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906748#comment-14906748
 ] 

Ravi Prakash commented on HDFS-9064:


Whenever we do this, should we extend 
https://issues.apache.org/jira/browse/HDFS-8678 ?

> NN old UI (block_info_xml) not available in 2.7.x
> -
>
> Key: HDFS-9064
> URL: https://issues.apache.org/jira/browse/HDFS-9064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
>
> In 2.6.x hadoop deploys, given a blockId it was very easy to find out the 
> file name and the locations of replicas (also whether they are corrupt or 
> not).
> This was the REST call:
> {noformat}
>  http://:/block_info_xml.jsp?blockId=xxx
> {noformat}
> But this was removed by HDFS-6252 in 2.7 builds.
> Creating this jira to restore that functionality.





[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906782#comment-14906782
 ] 

Hadoop QA commented on HDFS-8873:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 54s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  1s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 12s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  8 
new checkstyle issues (total was 439, now 440). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 38s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  83m 54s | Tests failed in hadoop-hdfs. |
| | | 129m 56s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Timed out tests | org.apache.hadoop.hdfs.TestFileCreation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762164/HDFS-8873.007.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 71a81b6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12660/console |


This message was automatically generated.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
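The quoted description proposes a configurable duty cycle for the DirectoryScanner. As a minimal, hypothetical sketch (not the actual HDFS-8873 patch), the core arithmetic of such a throttle can be expressed as a pure helper that computes how long to pause after a burst of scan work so the long-run duty cycle stays at a target percentage:

```java
public class DutyCycleThrottle {

    // Pause needed after `workedMs` of continuous scanning so that
    // work / (work + pause) == dutyPercent / 100 over the long run.
    // Hypothetical helper; names and shape are illustrative only.
    static long pauseMillis(long workedMs, int dutyPercent) {
        if (dutyPercent >= 100) {
            return 0; // no throttling requested
        }
        // Solve work / (work + pause) = duty/100 for pause.
        return workedMs * (100 - dutyPercent) / dutyPercent;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(50); // stand-in for ~50 ms of directory scanning
        long worked = System.currentTimeMillis() - start;
        long pause = pauseMillis(worked, 25); // target a 25% duty cycle
        System.out.println("worked=" + worked + "ms, pausing " + pause + "ms");
        Thread.sleep(pause); // disks stay idle for the rest of the cycle
    }
}
```

A scanner loop would call such a helper after each batch of seeks; for example, 50 ms of work at a 25% duty cycle yields a 150 ms pause.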


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906811#comment-14906811
 ] 

Hadoop QA commented on HDFS-8873:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 27s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  8 
new checkstyle issues (total was 439, now 439). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 36s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 46s | Tests failed in hadoop-hdfs. |
| | |  95m 33s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.server.namenode.TestStorageRestore |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSetTimes |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.namenode.TestSecureNameNode |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestFsShellPermission |
|   | hadoop.hdfs.server.namenode.TestMalformedURLs |
|   | hadoop.hdfs.TestBalancerBandwidth |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.TestDisableConnCache |
|   | hadoop.hdfs.TestIsMethodSupported |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.TestFileCreationClient |
|   | 

[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-24 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906919#comment-14906919
 ] 

Nathan Roberts commented on HDFS-8873:
--

Thanks [~templedf] for the update! I'm +1 (non-binding) for v9 of the patch.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9138) TestDatanodeStartupFixesLegacyStorageIDs fails on Windows due to failure to unpack old image tarball that contains hard links

2015-09-24 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-9138:
---

 Summary: TestDatanodeStartupFixesLegacyStorageIDs fails on Windows 
due to failure to unpack old image tarball that contains hard links
 Key: HDFS-9138
 URL: https://issues.apache.org/jira/browse/HDFS-9138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


{{TestDatanodeStartupFixesLegacyStorageIDs#testUpgradeFrom22via26FixesStorageIDs}}
 uses a checked-in DataNode data directory that contains hard links.  The hard 
links cannot be handled correctly by the commons-compress library used in the 
Windows implementation of {{FileUtil#unTar}}.  The result is that the unpacked 
block files have 0 length, the block files reported to the NameNode are 
invalid, and therefore the mini-cluster never gets enough good blocks reported 
to leave safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
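The failure mode described above (hard-linked tar entries unpacked as 0-length block files) can be detected after extraction with a plain-JDK directory walk. This is a hypothetical diagnostic sketch, not part of the test or of FileUtil#unTar:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ZeroLengthCheck {

    // Returns all regular files under `root` whose length is 0 -- the
    // symptom described above when hard links are mis-unpacked.
    static List<Path> zeroLengthFiles(Path root) throws IOException {
        List<Path> bad = new ArrayList<>();
        try (Stream<Path> walk = Files.walk(root)) {
            walk.filter(Files::isRegularFile).forEach(p -> {
                try {
                    if (Files.size(p) == 0) {
                        bad.add(p);
                    }
                } catch (IOException ignored) {
                    // file vanished mid-walk; skip it
                }
            });
        }
        return bad;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("untar-check");
        Files.writeString(dir.resolve("blk_good"), "block data");
        Files.createFile(dir.resolve("blk_bad")); // simulates a bad unpack
        System.out.println("zero-length files: " + zeroLengthFiles(dir));
    }
}
```

Running such a check against the unpacked data directory would flag the empty block files before the mini-cluster ever reports them to the NameNode.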

