[jira] [Commented] (HDFS-11296) Maintenance state expiry should be an epoch time and not jvm monotonic

2017-01-05 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803903#comment-15803903
 ] 

Lei (Eddy) Xu commented on HDFS-11296:
--

Hi, [~manojg]

Thanks for working on this. I have a few questions regarding this patch.

There were a few efforts to replace wall time with {{monotonicNow()}} in the 
past, for example, HDFS-6841 and HDFS-6453.  In general, clocks on different 
nodes cannot be guaranteed to be synchronized, i.e., between clients, the 
NameNode, and DataNodes. 

Btw, IIUC, is the expiration time set by the client a delta rather than an 
absolute time?
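
For reference, a minimal sketch of what an epoch-based check could look like 
({{Time.now()}} wraps {{System.currentTimeMillis()}}; the snippet is 
illustrative, not the actual patch):

{code}
  // Compare against wall-clock epoch time instead of JVM-monotonic time,
  // so users can configure the expiry as an absolute timestamp.
  public static boolean maintenanceNotExpired(long maintenanceExpireTimeInMS) {
    return Time.now() < maintenanceExpireTimeInMS;
  }
{code}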

> Maintenance state expiry should be an epoch time and not jvm monotonic
> --
>
> Key: HDFS-11296
> URL: https://issues.apache.org/jira/browse/HDFS-11296
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11296.01.patch
>
>
> Currently it is possible to configure an expiry time in milliseconds for a 
> DataNode in maintenance state. As per the design, the expiry attribute is an 
> absolute time, beyond which the NameNode starts to stop the ongoing maintenance 
> operation for that DataNode. Internally, the code reads this expiry time and 
> checks it against {{Time.monotonicNow()}}, tying the expiry to the JVM's 
> runtime clock, which is very difficult for any external user to configure. 
> The goal is to make the expiry an absolute epoch time, so that it is easy 
> for external users to configure.
> {noformat}
> {
> "hostName": ,
> "port": ,
> "adminState": "IN_MAINTENANCE",
> "maintenanceExpireTimeInMS": 
> }
> {noformat}
> DatanodeInfo.java
> {noformat}
>   public static boolean maintenanceNotExpired(long maintenanceExpireTimeInMS) {
>     return Time.monotonicNow() < maintenanceExpireTimeInMS;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11297) hadoop-7285-power

2017-01-05 Thread xlsong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xlsong updated HDFS-11297:
--
Attachment: instruction.doc

> hadoop-7285-power
> -
>
> Key: HDFS-11297
> URL: https://issues.apache.org/jira/browse/HDFS-11297
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: erasure-coding
>Affects Versions: HDFS-7285
> Environment: power
>Reporter: xlsong
> Fix For: HDFS-7285
>
> Attachments: instruction.doc
>
>
> hadoop-7285-power



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9478) Reason for failing ipc.FairCallQueue construction should be thrown

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9478:
-
Fix Version/s: 2.8.0

> Reason for failing ipc.FairCallQueue construction should be thrown
> -
>
> Key: HDFS-9478
> URL: https://issues.apache.org/jira/browse/HDFS-9478
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Ajith S
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9478.2.patch, HDFS-9478.3.patch, HDFS-9478.patch
>
>
> When FairCallQueue construction fails, the NN fails to start, throwing a 
> RuntimeException without giving any reason why it failed.
> 2015-11-30 17:45:26,661 INFO org.apache.hadoop.ipc.FairCallQueue: FairCallQueue is in use with 4 queues.
> 2015-11-30 17:45:26,665 DEBUG org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=ipc.65110,name=DecayRpcScheduler
> 2015-11-30 17:45:26,666 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.RuntimeException: org.apache.hadoop.ipc.FairCallQueue could not be constructed.
> at org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:96)
> at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:55)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2241)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:942)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:784)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:346)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:750)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:687)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:889)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:872)
> For example, the reason for the above failure could have been:
> 1. the weights were not equal to the number of queues configured, or
> 2. decay-scheduler.thresholds was not in sync with the number of queues.
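
An illustrative sketch of the requested change (not the actual patch): construct 
the instance reflectively and rethrow with the underlying exception attached as 
the cause, so the startup log explains why construction failed.

{code}
  // Illustrative only: keep the original exception as the cause so the
  // "could not be constructed" error is actionable.
  static <T> T construct(Class<T> clazz, int maxQueueLen) {
    try {
      return clazz.getDeclaredConstructor(int.class).newInstance(maxQueueLen);
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(
          clazz.getName() + " could not be constructed.", e);
    }
  }
{code}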



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9772) TestBlockReplacement#testThrottler doesn't work as expected

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9772:
-
Fix Version/s: 2.8.0

> TestBlockReplacement#testThrottler doesn't work as expected
> ---
>
> Key: HDFS-9772
> URL: https://issues.apache.org/jira/browse/HDFS-9772
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: test
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS.001.patch
>
>
> In {{TestBlockReplacement#testThrottler}}, a faulty variable is used to 
> calculate the resulting bandwidth: the variable {{totalBytes}} is used rather 
> than the final variable {{TOTAL_BYTES}} (whose value is assigned to 
> {{bytesToSend}}). {{totalBytes}} is never updated and is meaningless here, 
> which makes {{totalBytes*1000/(end-start)}} always 0 and the comparison 
> always true. The method code is below:
> {code}
>   @Test
>   public void testThrottler() throws IOException {
>     Configuration conf = new HdfsConfiguration();
>     FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
>     long bandwidthPerSec = 1024 * 1024L;
>     final long TOTAL_BYTES = 6 * bandwidthPerSec;
>     long bytesToSend = TOTAL_BYTES;
>     long start = Time.monotonicNow();
>     DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
>     long totalBytes = 0L;
>     long bytesSent = 1024 * 512L; // 0.5MB
>     throttler.throttle(bytesSent);
>     bytesToSend -= bytesSent;
>     bytesSent = 1024 * 768L; // 0.75MB
>     throttler.throttle(bytesSent);
>     bytesToSend -= bytesSent;
>     try {
>       Thread.sleep(1000);
>     } catch (InterruptedException ignored) {}
>     throttler.throttle(bytesToSend);
>     long end = Time.monotonicNow();
>     assertTrue(totalBytes*1000/(end-start)<=bandwidthPerSec);
>   }
> {code}
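
Presumably the fix is simply to measure against the bytes that were actually 
pushed through the throttler; an illustrative correction of the final assertion:

{code}
    // TOTAL_BYTES is what was actually sent through the throttler, so the
    // measured rate is meaningful; totalBytes stayed 0 and made the
    // original assertion vacuously true.
    long end = Time.monotonicNow();
    assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
{code}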



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9661) Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9661:
-
Fix Version/s: 2.8.0

> Deadlock in DN.FsDatasetImpl between moveBlockAcrossStorage and createRbw
> -
>
> Key: HDFS-9661
> URL: https://issues.apache.org/jira/browse/HDFS-9661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0, 2.8.0, 2.7.1, 2.7.2
>Reporter: ade
>Assignee: ade
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9661.0.patch, HDFS-9661.001.patch, 
> hdfs-9661-jstack.jpg.png
>
>
> We found a deadlock in the DN's FsDatasetImpl between moveBlockAcrossStorage and 
> createRbw, triggered by the RPC calls replaceBlock/writeBlock. The DN's jstack 
> result is !hdfs-9661-jstack.jpg.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9654) Code refactoring for HDFS-8578

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9654:
-
Fix Version/s: 2.8.0

> Code refactoring for HDFS-8578
> --
>
> Key: HDFS-9654
> URL: https://issues.apache.org/jira/browse/HDFS-9654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h9654_20160116.patch
>
>
> This is a code refactoring JIRA in order to change Datanode to process all 
> storage/data dirs in parallel; see also HDFS-8578.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9434) Recommission a datanode with 500k blocks may pause NN for 30 seconds

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9434:
-
Fix Version/s: 2.8.0

> Recommission a datanode with 500k blocks may pause NN for 30 seconds
> 
>
> Key: HDFS-9434
> URL: https://issues.apache.org/jira/browse/HDFS-9434
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.2, 2.6.3, 3.0.0-alpha1
>
> Attachments: h9434_20151116.patch, h9434_20151116_branch-2.6.patch
>
>
> In BlockManager, processOverReplicatedBlocksOnReCommission is called within 
> the namespace lock.  There is a (not very useful) log message printed in 
> processOverReplicatedBlock.  When a large number of blocks is stored in 
> a storage, printing the log message for each block can prevent the NN from 
> processing any other operations.  We did see it pause the NN for 30 seconds 
> for a storage with 500k blocks.
> I suggest changing the log message to trace level as a quick fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6481:
-
Fix Version/s: 2.8.0

> DatanodeManager#getDatanodeStorageInfos() should check the length of 
> storageIDs
> ---
>
> Key: HDFS-6481
> URL: https://issues.apache.org/jira/browse/HDFS-6481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h6481_20151105.patch, hdfs-6481-v1.txt
>
>
> Ian Brooks reported the following stack trace:
> {code}
> 2014-06-03 13:05:03,915 WARN  [DataStreamer for file /user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> 2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: syncer encountered error, will retry. txid=211
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
>   
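
The requested check is a standard defensive guard; a hypothetical sketch (not 
the committed patch):

{code}
  // Hypothetical guard: validate the arrays up front so a malformed request
  // fails with a descriptive error instead of an
  // ArrayIndexOutOfBoundsException deep inside the NameNode.
  static void checkStorageIDs(DatanodeID[] datanodeIDs, String[] storageIDs) {
    int expected = datanodeIDs == null ? 0 : datanodeIDs.length;
    if (storageIDs == null || storageIDs.length != expected) {
      throw new HadoopIllegalArgumentException("Expected " + expected
          + " storage IDs, but got "
          + (storageIDs == null ? "null" : storageIDs.length));
    }
  }
{code}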

[jira] [Updated] (HDFS-8950) NameNode refresh doesn't remove DataNodes that are no longer in the allowed list

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8950:
-
Fix Version/s: 2.8.0

> NameNode refresh doesn't remove DataNodes that are no longer in the allowed 
> list
> 
>
> Key: HDFS-8950
> URL: https://issues.apache.org/jira/browse/HDFS-8950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 2.6.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: 2.7.2-candidate
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8950.001.patch, HDFS-8950.002.patch, 
> HDFS-8950.003.patch, HDFS-8950.004.patch, HDFS-8950.005.patch, 
> HDFS-8950.branch-2.7.patch
>
>
> If you remove a DN from the NN's allowed host list (HDFS was HA) and then do an 
> NN refresh, the DN is not actually removed and the NN UI keeps showing that node. 
> The NN may also try to allocate some blocks to that DN during an MR job.  This 
> issue is independent of DN decommission.
> To reproduce:
> 1. Add a DN to dfs_hosts_allow
> 2. Refresh NN
> 3. Start DN. Now NN starts seeing DN.
> 4. Stop DN
> 5. Remove DN from dfs_hosts_allow
> 6. Refresh NN -> NN is still reporting DN as being used by HDFS.
> This is different from decommissioning, because there the DN is added to the 
> exclude list in addition to being removed from the allowed list, and in that 
> case everything works correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8867) Enable optimized block reports

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8867:
-
Fix Version/s: 2.8.0

> Enable optimized block reports
> --
>
> Key: HDFS-8867
> URL: https://issues.apache.org/jira/browse/HDFS-8867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8867.patch
>
>
> Opening this ticket on behalf of [~daryn]
> HDFS-7435 introduced a more efficiently encoded block report format designed 
> to improve performance and reduce GC load on the NN and DNs. The NN is not 
> advertising this capability to the DNs so old-style reports are still being 
> used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8879) Quota by storage type usage incorrectly initialized upon namenode restart

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8879:
-
Fix Version/s: 2.8.0

> Quota by storage type usage incorrectly initialized upon namenode restart
> -
>
> Key: HDFS-8879
> URL: https://issues.apache.org/jira/browse/HDFS-8879
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Kihwal Lee
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8879.01.patch
>
>
> This was found by [~kihwal] as part of HDFS-8865 work in this 
> [comment|https://issues.apache.org/jira/browse/HDFS-8865?focusedCommentId=14660904&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14660904].
> The unit tests 
> testQuotaByStorageTypePersistenceInFsImage/testQuotaByStorageTypePersistenceInFsEdit
>  failed to detect this because they were using an obsolete
> FSDirectory instance. Once the highlighted line below is added, the issue can be 
> reproduced.
> {code}
> >fsdir = cluster.getNamesystem().getFSDirectory();
> INode testDirNodeAfterNNRestart = fsdir.getINode4Write(testDir.toString());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803859#comment-15803859
 ] 

Xiao Chen commented on HDFS-10899:
--

During the offline discussion, [~dilaver] also asked about the 
progress-restoring logic - the {{startAfter}} usage. It turns out that in my 
conversion from {{getListing}} to inode-based traversal, this wasn't handled 
correctly.

Fixed it in patch 4, and added a unit test 
{{TestEncryptionZones#testRestartDuringReencrypt}} for that. Hopefully review 
can be a tad easier by diffing patch 3 vs. patch 4.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.04.patch

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9669) TcpPeerServer should respect ipc.server.listen.queue.size

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9669:
-
Fix Version/s: 2.8.0

> TcpPeerServer should respect ipc.server.listen.queue.size
> -
>
> Key: HDFS-9669
> URL: https://issues.apache.org/jira/browse/HDFS-9669
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HDFS-9669.0.patch, HDFS-9669.1.patch, HDFS-9669.1.patch
>
>
> On periods of high traffic we are seeing:
> {code}
> 16/01/19 23:40:40 WARN hdfs.DFSClient: Connection failure: Failed to connect to /10.138.178.47:50010 for file /MYPATH/MYFILE for block BP-1935559084-10.138.112.27-1449689748174:blk_1080898601_7375294:java.io.IOException: Connection reset by peer
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>   at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:109)
>   at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
> {code}
> At the time this happens there are far fewer xceivers than configured.
> On most JDKs this leaves 50 as the total backlog at any time, which effectively 
> means that any GC + busy period will result in TCP resets.
> http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/classes/java/net/ServerSocket.java#l370
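
A minimal sketch of the change the title implies (illustrative; the real server 
wires the backlog through Hadoop's TcpPeerServer/IPC plumbing): bind the 
listening socket with a configurable backlog instead of the JDK default of 50.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import org.apache.hadoop.conf.Configuration;

public class BacklogBindExample {
  // Illustrative: read ipc.server.listen.queue.size (default 128) and pass
  // it as the accept-queue backlog, instead of new ServerSocket(port),
  // which uses the JDK default backlog of 50.
  public static ServerSocket bind(Configuration conf, int port)
      throws IOException {
    int backlog = conf.getInt("ipc.server.listen.queue.size", 128);
    ServerSocket ss = new ServerSocket();
    ss.bind(new InetSocketAddress(port), backlog);
    return ss;
  }
}
{code}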



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9305) Delayed heartbeat processing causes storm of subsequent heartbeats

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9305:
-
Fix Version/s: 2.8.0

> Delayed heartbeat processing causes storm of subsequent heartbeats
> --
>
> Key: HDFS-9305
> URL: https://issues.apache.org/jira/browse/HDFS-9305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-9305.01.patch, HDFS-9305.02.patch
>
>
> A DataNode typically sends a heartbeat to the NameNode every 3 seconds.  We 
> expect heartbeat handling to complete relatively quickly.  However, if 
> something unexpected causes heartbeat processing to get blocked, such as a 
> long GC or heavy lock contention within the NameNode, then heartbeat 
> processing would be delayed.  After recovering from this delay, the DataNode 
> then starts sending a storm of heartbeat messages in a tight loop.  In a 
> large cluster with many DataNodes, this storm of heartbeat messages could 
> cause harmful load on the NameNode and make overall cluster recovery more 
> difficult.
> The bug appears to be caused by incorrect timekeeping inside 
> {{BPServiceActor}}.  The next heartbeat time is always calculated as a delta 
> from the previous heartbeat time, without any compensation for possible long 
> latency on an individual heartbeat RPC.  The only mitigations would be 
> restarting all DataNodes to force a reset of the heartbeat schedule, or simply 
> waiting out the storm until the scheduling catches up and corrects itself.
> This problem would not manifest after a NameNode restart.  In that case, the 
> NameNode would respond to the first heartbeat by telling the DataNode to 
> re-register, and {{BPServiceActor#reRegister}} would reset the heartbeat 
> schedule to the current time.  I believe the problem would only manifest if 
> the NameNode process stayed alive, but processed heartbeats unexpectedly slowly.
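
A hypothetical sketch of the timekeeping fix (illustrative, not the actual 
{{BPServiceActor}} code): schedule the next heartbeat relative to the current 
monotonic clock, so a slow RPC cannot leave the schedule far in the past.

{code}
// Illustrative only.
class HeartbeatScheduler {
  private final long heartbeatIntervalMs;
  private long nextHeartbeatTime;

  HeartbeatScheduler(long intervalMs) {
    this.heartbeatIntervalMs = intervalMs;
    this.nextHeartbeatTime = monotonicNow();
  }

  // Buggy pattern: nextHeartbeatTime += heartbeatIntervalMs. After a long
  // stall the scheduled time lies far in the past, so heartbeats fire in a
  // tight loop until the schedule catches up.
  // Fix: compensate for RPC latency by scheduling from "now".
  void scheduleNextHeartbeat() {
    nextHeartbeatTime = monotonicNow() + heartbeatIntervalMs;
  }

  private static long monotonicNow() {
    return System.nanoTime() / 1_000_000L;
  }
}
{code}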



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11297) hadoop-7285-power

2017-01-05 Thread xlsong (JIRA)
xlsong created HDFS-11297:
-

 Summary: hadoop-7285-power
 Key: HDFS-11297
 URL: https://issues.apache.org/jira/browse/HDFS-11297
 Project: Hadoop HDFS
  Issue Type: Task
  Components: erasure-coding
Affects Versions: HDFS-7285
 Environment: power
Reporter: xlsong
 Fix For: HDFS-7285


hadoop-7285-power



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7164) Feature documentation for HDFS-6581

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7164:
-
Fix Version/s: 2.8.0

> Feature documentation for HDFS-6581
> ---
>
> Key: HDFS-7164
> URL: https://issues.apache.org/jira/browse/HDFS-7164
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.7.0, HDFS-6581
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-7164.01.patch, HDFS-7164.02.patch, 
> HDFS-7164.03.patch, HDFS-7164.04.patch, HDFS-7164.05.patch, 
> HDFS-7164.06.patch, HDFS-7164.07.patch, LazyPersistWrites.png, site.tar.bz2
>
>
> Add feature documentation explaining use cases, how to configure RAM_DISK and 
> API updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9221) HdfsServerConstants#ReplicaState#getState should avoid calling values() since it creates a temporary array

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9221:
-
Fix Version/s: 2.8.0

> HdfsServerConstants#ReplicaState#getState should avoid calling values() since 
> it creates a temporary array
> --
>
> Key: HDFS-9221
> URL: https://issues.apache.org/jira/browse/HDFS-9221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HADOOP-9221.001.patch
>
>
> When the BufferDecoder in BlockListAsLongs converts the stored value to a 
> ReplicaState enum, it calls ReplicaState.getState(int). Unfortunately, this 
> method creates a new ReplicaState[] for each call, since it calls 
> ReplicaState.values().
> This patch creates a cached copy of the values and thus avoids all 
> allocation when doing the conversion.
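
This is the standard cached-enum-values idiom; a minimal sketch (illustrative):

{code}
  // enum.values() clones the backing array on every call; cache one copy
  // and index into it so the hot path allocates nothing.
  private static final ReplicaState[] CACHED_VALUES = ReplicaState.values();

  public static ReplicaState getState(int v) {
    return CACHED_VALUES[v];
  }
{code}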



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.03.patch

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt 
> edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803850#comment-15803850
 ] 

Xiao Chen commented on HDFS-10899:
--

[~dilaver] had a pretty good offline review of patch 2. Attaching #3 to address 
the following comments:
- {{ReencryptionZonesStatus.reencryptRequests}} declared type should probably 
be {{List}} instead of {{Collection}} considering "it should preserve the 
order".
- Should {{ReencryptionZonesStatus}} set lastFileProcessed to null upon 
{{removeZone}} when the removed zone is the current zone? Instead of the 
callers invoking both removeZone() and setLastFileProcessed() in tandem?
- Rename {{flipPauseForTesting}} to separate methods ({{pauseForTesting}} and 
{{resumeFromTestPause}})?
- Make {{ReencryptionHandler#pauseForTesting()}} synchronized instead of using a 
synchronized block in the method? That way the second log statement is 
guaranteed to execute together with the preceding statements.
- Add a max retry and terminate re-encrypt thread if keyprovider is still null.
- misplaced log statement "{}({}) is a nested EZ, skipping for re-encrypt"
- {{INodeDirectory.nextChild()}} will return 0 if {{startAfter}} has length 0 
so there doesn't seem to be a need for the {{if}}.
- While re-encryption is single threaded (at least for now), could it be more 
appropriate to create a ThreadFactory for a given instance of 
EncryptionZoneManager instead of creating a new one for every invocation of 
{{EncryptionZoneManager#startReencryptThread()}}, especially considering the 
logged names of threads will overlap (if/when there are multiple threads)?
- In {{EncryptionZoneManager#removeEncryptionZone()}}, make the logging clear 
and unconditional.
- Missing documentation for {{EncryptionZoneManager#reencryptEncryptionZone, 
#cancelReencryptEncryptionZone, #isEncryptionZoneRoot, 
#getIdRootEncryptionZone}}.
- In {{EncryptionZoneManager#reencryptEncryptionZone()}}, unnecessary break in 
String constant (in throw).
- {{EncryptionZoneManager#loadReencryptStatus()}}: why no null check for 
{{zoneId}} in the {{else}}? Note that {{LinkedHashSet}} doc says it allows null 
elements. Move the {{zoneId}} null check before the {{if}}?
- In FSNamesystem, use {{this.dir.ezManager}} instead of {{dir.ezManager}} to 
match the surrounding style for {{setProvider}}?

More coming...

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8522:
-
Fix Version/s: 2.8.0

> Change heavily recorded NN logs from INFO to DEBUG level
> 
>
> Key: HDFS-8522
> URL: https://issues.apache.org/jira/browse/HDFS-8522
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
> HDFS-8522.02.patch, HDFS-8522.03.patch, HDFS-8522.branch-2.00.patch, 
> HDFS-8522.branch-2.01.patch, HDFS-8522.branch-2.7.00.patch
>
>
> More specifically, the default NameNode log settings leave the log flooded 
> with the following entries. This JIRA is opened to change them from INFO to 
> DEBUG level.
> {code} 
> FSNamesystem.java:listCorruptFileBlocks 
> {code}
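
The usual shape of such a change (an illustrative sketch, not the actual patch) 
is to demote the level and guard any costly message construction:

{code}
// Illustrative: "blockIds" is a hypothetical variable standing in for
// whatever the hot-path message interpolates.
if (LOG.isDebugEnabled()) {
  LOG.debug("list corrupt file blocks returned: " + blockIds);
}
{code}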



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8523) Remove usage information on unsupported operation "fsck -showprogress" from branch-2

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8523:
-
Fix Version/s: 2.8.0

> Remove usage information on unsupported operation "fsck -showprogress" from 
> branch-2
> 
>
> Key: HDFS-8523
> URL: https://issues.apache.org/jira/browse/HDFS-8523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: J.Andreina
>Assignee: J.Andreina
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8523.1-branch-2.7.0.patch, 
> HDFS-8523.2-branch-2.patch
>
>
> The option to disable fsck dots has been implemented and fixed only in trunk, 
> since it is an incompatible change (HDFS-2538).
> But in the Hadoop 2.7 branch, the documentation still describes the unsupported 
> operation "-showprogress".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8583) Document that NFS gateway does not work with rpcbind on SLES 11

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8583:
-
Fix Version/s: 2.8.0

> Document that NFS gateway does not work with rpcbind on SLES 11
> ---
>
> Key: HDFS-8583
> URL: https://issues.apache.org/jira/browse/HDFS-8583
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-12069.01.patch
>
>
> The NFS gateway does not work with the system rpcbind service on SLES 11. It 
> does work with the hadoop portmap. We'll add a short note to the NFS 
> documentation about it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8566) HDFS documentation about debug commands wrongly identifies them as "hdfs dfs" commands

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8566:
-
Fix Version/s: 2.8.0

> HDFS documentation about debug commands wrongly identifies them as "hdfs dfs" 
> commands
> --
>
> Key: HDFS-8566
> URL: https://issues.apache.org/jira/browse/HDFS-8566
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8566.patch
>
>
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#recoverLease
> {code}
> Usage: hdfs dfs recoverLease [-path <path>] [-retries <num-retries>]
> {code}
> *Expected:*
> {code}
> Usage: hdfs debug recoverLease [-path <path>] [-retries <num-retries>]
> {code}
> same for {{verify}} command



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8576) Lease recovery should return true if the lease can be released and the file can be closed

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8576:
-
Fix Version/s: 2.8.0

>  Lease recovery should return true if the lease can be released and the file 
> can be closed
> --
>
> Key: HDFS-8576
> URL: https://issues.apache.org/jira/browse/HDFS-8576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: J.Andreina
>Assignee: J.Andreina
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8576.1.patch, HDFS-8576.2.patch
>
>
> FSNamesystem#recoverLease returns false even though lease recovery happens. 
> Hence, only on a second retry to recover the lease on a file does it return 
> success, after checking that the file is no longer under construction.
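
A hypothetical sketch of the expected contract (illustrative; the names are not 
the actual FSNamesystem code):

{code}
  // Illustrative: return true as soon as the lease is released and the
  // file is closed, rather than returning false and requiring a second
  // recoverLease() call to observe the closed state.
  boolean recoverLease(String src) throws IOException {
    boolean fileClosed = releaseLeaseAndClose(src);  // hypothetical helper
    return fileClosed;
  }
{code}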



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8595) TestCommitBlockSynchronization fails in branch-2.7

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8595:
-
Fix Version/s: 2.8.0

> TestCommitBlockSynchronization fails in branch-2.7
> --
>
> Key: HDFS-8595
> URL: https://issues.apache.org/jira/browse/HDFS-8595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8595.01.patch
>
>
> Mock-based TestCommitBlockSynchronization fails in branch-2.7 with NPE due to 
> a log statement dereferencing a null object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11259) Update fsck to display maintenance state info

2017-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-11259:

Hadoop Flags: Incompatible change

> Update fsck to display maintenance state info
> -
>
> Key: HDFS-11259
> URL: https://issues.apache.org/jira/browse/HDFS-11259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11259.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11259) Update fsck to display maintenance state info

2017-01-05 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803767#comment-15803767
 ] 

Lei (Eddy) Xu commented on HDFS-11259:
--

Hi, [~manojg] 

Thanks for working on it. LGTM.  +1. 

I will wait a day for further comments. 

> Update fsck to display maintenance state info
> -
>
> Key: HDFS-11259
> URL: https://issues.apache.org/jira/browse/HDFS-11259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11259.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803720#comment-15803720
 ] 

Hadoop QA commented on HDFS-10860:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
17s{color} | {color:green} The patch generated 0 new + 564 unchanged - 8 fixed 
= 564 total (was 572) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
57s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 31s{color} | 

[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-01-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803671#comment-15803671
 ] 

Weiwei Yang commented on HDFS-6874:
---

The checkstyle warning was not introduced by this patch. Hi [~clamb], would you 
please help review the v5 patch? This patch adds the GETFILEBLOCKLOCATIONS 
operation to HttpFS, consistent with the WebHDFS and FileSystem APIs. I also 
reused some common code from the JsonUtil class for parsing JSON strings, in 
order to reduce the code changes.

Thanks
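
For context, a hypothetical sketch of what the HttpFSServer case could look 
like (the parameter classes and the JSON helper are assumptions, not the patch 
itself):

{code}
case GETFILEBLOCKLOCATIONS: {
  // Hypothetical: read offset/length, delegate to
  // FileSystem#getFileBlockLocations, and return the locations as JSON in
  // the same shape as the WebHDFS GETFILEBLOCKLOCATIONS response.
  long offset = params.get(OffsetParam.NAME, OffsetParam.class);
  long len = params.get(LenParam.NAME, LenParam.class);
  BlockLocation[] locations =
      fs.getFileBlockLocations(new Path(path), offset, len);
  response = Response.ok(JsonUtil.toJsonString(locations))
      .type(MediaType.APPLICATION_JSON).build();
  break;
}
{code}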

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, 
> HDFS-6874.05.patch, HDFS-6874.patch
>
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803659#comment-15803659
 ] 

Hadoop QA commented on HDFS-6874:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 448 unchanged - 1 fixed = 449 total (was 449) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845945/HDFS-6874.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83d46687fa89 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4a659ff |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18046/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18046/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18046/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: 

[jira] [Updated] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-01-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-6874:
--
Attachment: HDFS-6874.05.patch

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, 
> HDFS-6874.05.patch, HDFS-6874.patch
>
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-01-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Status: In Progress  (was: Patch Available)

To add unit tests for HttpFSServerWebServer

> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11202) httpfs.sh will not run when temp dir does not exist

2017-01-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-11202.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0-alpha2

HDFS-10860 does fix this issue.

> httpfs.sh will not run when temp dir does not exist
> ---
>
> Key: HDFS-11202
> URL: https://issues.apache.org/jira/browse/HDFS-11202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
>
> From {{httpfs-localhost.2016-12-04.log}}:
> {noformat}
> INFO: ERROR: S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> Dec 04, 2016 7:04:46 PM org.apache.catalina.core.StandardContext listenerStart
> SEVERE: Exception sending context initialized event to listener instance of 
> class org.apache.hadoop.fs.http.server.HttpFSServerWebApp
> java.lang.RuntimeException: org.apache.hadoop.lib.server.ServerException: 
> S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> at 
> org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:161)
> at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
> at 
> org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
> at 
> org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
> at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
> at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
> at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
> at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
> at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
> at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
> at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
> at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525)
> at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
> at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
> at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
> Caused by: org.apache.hadoop.lib.server.ServerException: S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> at org.apache.hadoop.lib.server.Server.verifyDir(Server.java:400)
> at org.apache.hadoop.lib.server.Server.init(Server.java:349)
> at 
> org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100)
> at 
> org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158)
> ... 24 more
> {noformat}
> Create the temp dir manually, httpfs.sh works.
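For reference, the manual workaround amounts to creating the directory the 
server checks for before starting it; a sketch using the path from the log 
above:

{noformat}
mkdir -p /Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp
sbin/httpfs.sh start
{noformat}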



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803589#comment-15803589
 ] 

Ming Ma commented on HDFS-9391:
---

Then for that specific case when 
{{DecommissionManager#Monitor#processBlocksInternal}} is processing the 
decommissioning node, NumberReplicas#decommissionedAndDecommissioning() > 0 and 
NumberReplicas#maintenanceReplicas() > 0 are satisfied. Thus both 
decommissionOnlyReplicas and maintenanceOnlyReplicas will be incremented. The 
same applies to the other two entering maintenance nodes.
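
To make the counting concrete, here is a minimal, self-contained sketch (the 
names are invented; this is not the real {{NumberReplicas}} code) of how both 
tallies end up positive for such a block:

{code}
// Illustrative stand-in for NumberReplicas-style per-block counting.
enum AdminState { NORMAL, DECOMMISSIONING, ENTERING_MAINTENANCE }

class ReplicaTallyExample {
  int decommissioning; // stands in for decommissionedAndDecommissioning()
  int maintenance;     // stands in for maintenanceReplicas()

  void count(AdminState s) {
    if (s == AdminState.DECOMMISSIONING) {
      decommissioning++;
    } else if (s == AdminState.ENTERING_MAINTENANCE) {
      maintenance++;
    }
  }

  public static void main(String[] args) {
    ReplicaTallyExample t = new ReplicaTallyExample();
    t.count(AdminState.DECOMMISSIONING);      // one decommissioning replica
    t.count(AdminState.ENTERING_MAINTENANCE); // two entering maintenance
    t.count(AdminState.ENTERING_MAINTENANCE);
    // Both counters are > 0, so a monitor scanning this block would bump
    // both the decommission-only and the maintenance-only statistics.
    System.out.println(t.decommissioning + ", " + t.maintenance); // 1, 2
  }
}
{code}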

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11148) Update DataNode to use StorageLocationChecker at startup

2017-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11148:
-
Fix Version/s: 2.9.0

Cherry-picked to branch-2.

> Update DataNode to use StorageLocationChecker at startup
> 
>
> Key: HDFS-11148
> URL: https://issues.apache.org/jira/browse/HDFS-11148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
>
> The DataNode can use the {{StorageLocationChecker}} introduced by HDFS-11119 
> to parallelize checking Storage Locations at process startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803569#comment-15803569
 ] 

Yiqun Lin commented on HDFS-11282:
--

Thanks [~arpitagarwal].

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2017-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11119:
-
Fix Version/s: 2.9.0

Cherry-picked to branch-2.

> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}}s on Datanode startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803555#comment-15803555
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

>> But NumberReplicas represents the state of all replicas.

That's right.


>> Thus for the case "One replica is decommissioning and two replicas of the 
>> same block are entering maintenance", 
>> NumberReplicas#decommissionedAndDecommissioning == 1, 
>> NumberReplicas#maintenanceReplicas() == 2.

Yes, exactly. 


> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9483) Documentation does not cover use of "swebhdfs" as URL scheme for SSL-secured WebHDFS.

2017-01-05 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803543#comment-15803543
 ] 

Surendra Singh Lilhore commented on HDFS-9483:
--

Thanks [~cnauroth] for the review and commit. Thanks [~brahmareddy] for the review.

> Documentation does not cover use of "swebhdfs" as URL scheme for SSL-secured 
> WebHDFS.
> -
>
> Key: HDFS-9483
> URL: https://issues.apache.org/jira/browse/HDFS-9483
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-9483.001.patch, HDFS-9483.002.patch, HDFS-9483.patch
>
>
> If WebHDFS is secured with SSL, then you can use "swebhdfs" as the scheme in 
> a URL to access it.  The current documentation does not state this anywhere.
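
As an illustration, usage would look roughly like this (host and port are 
placeholders, not values from this issue):

{noformat}
# WebHDFS over plain HTTP:
hdfs dfs -ls webhdfs://<namenode_host>:<http_port>/user/alice
# The same listing over SSL-secured WebHDFS:
hdfs dfs -ls swebhdfs://<namenode_host>:<https_port>/user/alice
{noformat}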



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803538#comment-15803538
 ] 

Ming Ma edited comment on HDFS-9391 at 1/6/17 4:41 AM:
---

A given replica is only in one admin state, normal, decommission or 
maintenance. But {{NumberReplicas}} represents the state of all replicas. Thus 
for the case "One replica is decommissioning and two replicas of the same block 
are entering maintenance", {{NumberReplicas#decommissionedAndDecommissioning == 
1}}, {{NumberReplicas#maintenanceReplicas() == 2}}. No?


was (Author: mingma):
A given replica is only in one state, either decommission or maintenance. But 
{{NumberReplicas}} represents the state of all replicas. Thus for the case "One 
replica is decommissioning and two replicas of the same block are entering 
maintenance", {{NumberReplicas#decommissionedAndDecommissioning == 1}}, 
{{NumberReplicas#maintenanceReplicas() == 2}}. No?

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803538#comment-15803538
 ] 

Ming Ma commented on HDFS-9391:
---

A given replica is only in one state, either decommission or maintenance. But 
{{NumberReplicas}} represents the state of all replicas. Thus for the case "One 
replica is decommissioning and two replicas of the same block are entering 
maintenance", {{NumberReplicas#decommissionedAndDecommissioning == 1}}, 
{{NumberReplicas#maintenanceReplicas() == 2}}. No?

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11291) Avoid unnecessary edit log for setStoragePolicy() and setReplication()

2017-01-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803531#comment-15803531
 ] 

Yiqun Lin commented on HDFS-11291:
--

Thanks [~surendrasingh] for updating the patch. I see the patch now generates 
some checkstyle and whitespace warnings; could you clean those up?
+1 once those are addressed. Please wait for a binding +1 from others. Thanks.

> Avoid unnecessary edit log for setStoragePolicy() and setReplication()
> --
>
> Key: HDFS-11291
> URL: https://issues.apache.org/jira/browse/HDFS-11291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11291.001.patch, HDFS-11291.002.patch
>
>
> We set the storage policy of a file without first checking its current 
> policy, in order to avoid an extra getStoragePolicy() rpc call. Currently the 
> NameNode does not compare the current storage policy with the new one before 
> applying it and writing an edit log entry. If the old and new storage 
> policies are the same, the set operation (and its edit log entry) can be 
> skipped.
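
A minimal, self-contained sketch of the proposed short-circuit (toy types and 
counters; not the real FSNamesystem/FSEditLog code):

{code}
// Sketch: skip both the in-memory update and the edit log entry when the
// requested policy equals the current one.
class StoragePolicyExample {
  static byte currentPolicy = 7;        // pretend the file already has policy 7
  static int editLogEntries = 0;

  static void setStoragePolicy(byte newPolicy) {
    if (currentPolicy == newPolicy) {
      return;                           // same policy: nothing to do, no edit log
    }
    currentPolicy = newPolicy;
    editLogEntries++;                   // only log a real state change
  }

  public static void main(String[] args) {
    setStoragePolicy((byte) 7);         // no-op, no edit log entry
    setStoragePolicy((byte) 5);         // real change, one edit log entry
    System.out.println(editLogEntries); // prints 1
  }
}
{code}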



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803493#comment-15803493
 ] 

Hudson commented on HDFS-11282:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11080 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11080/])
HDFS-11282. Document the missing metrics of DataNode Volume IO (arp: rev 
4a659ff40fca7c263d62ac7514afc100a4dbb1ed)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md


> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11282:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

I committed this to trunk. Thanks for the contribution [~linyiqun]. 

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-01-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Attachment: HDFS-10860.004.patch

Patch 004
- Update CommandsManual.md and SecureMode.md

TESTING DONE
- Bats regression tests https://github.com/jzhuge/hadoop-bats-tests in insecure 
and ssl mode
- Verify docs


> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-01-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Status: Patch Available  (was: In Progress)

> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11253) FileInputStream leak on failure path in BlockSender

2017-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11253:
-
Target Version/s:   (was: 3.0.0-alpha2)

> FileInputStream leak on failure path in BlockSender
> ---
>
> Key: HDFS-11253
> URL: https://issues.apache.org/jira/browse/HDFS-11253
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11253.01.patch
>
>
> The BlockSender constructor should close the blockIn and checksumIn streams 
> here:
> {code}
> 405:   blockIn = datanode.data.getBlockInputStream(block, offset); // 
> seek to offset
> 406:   ris = new ReplicaInputStreams(
> 407:   blockIn, checksumIn, volumeRef, fileIoProvider);
> 408: } catch (IOException ioe) {
> 409:   IOUtils.closeStream(this);
> 410:   throw ioe;
> 411: }
> {code}
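
The underlying pattern is that a constructor or factory must close resources it 
opened but has not yet handed off when it fails part-way. A minimal sketch with 
toy types (not the actual BlockSender code):

{code}
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

class CloseOnFailureExample {
  // Until both streams are owned by a wrapper, the caller that opened them
  // is responsible for closing them on error.
  static Closeable open(InputStream blockIn, InputStream checksumIn)
      throws IOException {
    try {
      return wrap(blockIn, checksumIn); // may throw before taking ownership
    } catch (IOException ioe) {
      closeQuietly(blockIn);            // explicit cleanup of both streams
      closeQuietly(checksumIn);
      throw ioe;
    }
  }

  static Closeable wrap(InputStream a, InputStream b) throws IOException {
    throw new IOException("simulated failure before ownership transfer");
  }

  static void closeQuietly(Closeable c) {
    if (c == null) return;
    try { c.close(); } catch (IOException ignored) { }
  }

  public static void main(String[] args) {
    try {
      open(new ByteArrayInputStream(new byte[0]),
           new ByteArrayInputStream(new byte[0]));
    } catch (IOException expected) {
      System.out.println("streams were closed before the rethrow");
    }
  }
}
{code}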



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11253) FileInputStream leak on failure path in BlockSender

2017-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11253:
-
Affects Version/s: 3.0.0-alpha2

> FileInputStream leak on failure path in BlockSender
> ---
>
> Key: HDFS-11253
> URL: https://issues.apache.org/jira/browse/HDFS-11253
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11253.01.patch
>
>
> The BlockSender constructor should close the blockIn and checksumIn streams 
> here:
> {code}
> 405:   blockIn = datanode.data.getBlockInputStream(block, offset); // 
> seek to offset
> 406:   ris = new ReplicaInputStreams(
> 407:   blockIn, checksumIn, volumeRef, fileIoProvider);
> 408: } catch (IOException ioe) {
> 409:   IOUtils.closeStream(this);
> 410:   throw ioe;
> 411: }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11194) Maintain aggregated peer performance metrics on NameNode

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803341#comment-15803341
 ] 

Hadoop QA commented on HDFS-11194:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 53s{color} | {color:orange} root: The patch generated 7 new + 1666 unchanged 
- 5 fixed = 1673 total (was 1671) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11194 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845901/HDFS-11194.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux d2dc9ae0acbf 3.13.0-106-generic #153-Ubuntu SMP 

[jira] [Commented] (HDFS-11243) [SPS]: Add a protocol command from NN to DN for dropping the SPS work and queues

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803302#comment-15803302
 ] 

Hadoop QA commented on HDFS-11243:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 25s{color} | 
{color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 151 unchanged - 0 fixed = 155 total (was 151) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 7 
unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845273/HDFS-11243-HDFS-10285-00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5cb3275f55db 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 43a7f04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18044/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18044/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18044/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 

[jira] [Commented] (HDFS-11293) FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation

2017-01-05 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803261#comment-15803261
 ] 

Yuanbo Liu commented on HDFS-11293:
---

[~umamaheswararao] Thanks for your response. I'll attach a test case for this 
issue.

> FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
> ---
>
> Key: HDFS-11293
> URL: https://issues.apache.org/jira/browse/HDFS-11293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Critical
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get replica 
> info by block pool id. But in this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we change the file's storage policy and move the block replica within 
> the cluster. Very likely we have to move the block from B[DISK] to A[SSD]; at 
> this point, datanode A throws ReplicaAlreadyExistsException, which is not 
> correct behavior.
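
To illustrate the failure mode being described, here is a toy sketch of a 
replica map keyed only by block pool id and block id (all names invented; not 
the real {{FsDatasetImpl}} code):

{code}
import java.util.HashMap;
import java.util.Map;

class ReplicaMapExample {
  enum StorageType { DISK, SSD, ARCHIVE }

  // Keyed by (blockPoolId, blockId) only; the storage type is not part of
  // the key, so "same block, different storage type" looks like a duplicate.
  static Map<String, StorageType> replicas = new HashMap<>();

  static void createTemporary(String bpid, long blockId, StorageType target) {
    String key = bpid + "/" + blockId;
    if (replicas.containsKey(key)) {
      // Fires even when a mover legitimately wants a copy of the same block
      // on a different storage type of the same datanode.
      throw new IllegalStateException("ReplicaAlreadyExistsException: " + key);
    }
    replicas.put(key, target);
  }

  public static void main(String[] args) {
    createTemporary("BP-1", 1001L, StorageType.DISK); // existing replica on DISK
    createTemporary("BP-1", 1001L, StorageType.SSD);  // throws, arguably wrongly
  }
}
{code}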



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10675) Datanode support to read from external stores.

2017-01-05 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803257#comment-15803257
 ] 

Virajith Jalaparti commented on HDFS-10675:
---

Hi [~jiajia], the v3 patch works on top of the HDFS-9806 branch in Apache. Can 
you try it on that? I haven't rebased onto trunk in a while; I will update the 
patch once I rebase HDFS-9806 on trunk.

> Datanode support to read from external stores. 
> ---
>
> Key: HDFS-10675
> URL: https://issues.apache.org/jira/browse/HDFS-10675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10675-HDFS-9806.001.patch, 
> HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch
>
>
> This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external 
> stores, along with enabling the Datanode to read from such stores using a 
> {{ProvidedReplica}} and a {{ProvidedVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803251#comment-15803251
 ] 

Hadoop QA commented on HDFS-11282:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11282 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845916/HDFS-11282.004.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux abc52231830b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0b8a7c1 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18043/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9569) Log the name of the fsimage being loaded for better supportability

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9569:
-
Fix Version/s: 2.8.0

> Log the name of the fsimage being loaded for better supportability
> --
>
> Key: HDFS-9569
> URL: https://issues.apache.org/jira/browse/HDFS-9569
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9569.001.patch, HDFS-9569.002.patch, 
> HDFS-9569.003.patch, HDFS-9569.004.patch, HDFS-9569.005.patch
>
>
> When NN starts to load fsimage, it does
> {code}
>  void loadFSImageFile(FSNamesystem target, MetaRecoveryContext recovery,
>   FSImageFile imageFile, StartupOption startupOption) throws IOException {
>   LOG.debug("Planning to load image :\n" + imageFile);
>   ..
> long txId = loader.getLoadedImageTxId();
> LOG.info("Loaded image for txid " + txId + " from " + curFile);
> {code}
> A debug message containing the fsimage file name is issued at the beginning, 
> and an info message is issued at the end, after loading.
> If fsimage loading fails due to a corrupted fsimage (see HDFS-9406), we never 
> see the first message. It would be helpful to always be able to tell from the 
> NN logs which fsimage file is being loaded.
> Two improvements (sketched below):
> 1. Change the above debug message to info.
> 2. If an exception happens while loading the fsimage, report the name of the 
> fsimage being loaded in the error message.
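
A minimal sketch of the two improvements combined (toy code; {{System.out}} 
stands in for the NameNode logger):

{code}
import java.io.File;
import java.io.IOException;

class FsImageLoadLogging {
  static void loadImage(File imageFile) throws IOException {
    // Improvement 1: announce the file at INFO level, so the name is in the
    // log even when loading subsequently fails.
    System.out.println("INFO: Planning to load image: " + imageFile);
    try {
      load(imageFile);
    } catch (IOException ioe) {
      // Improvement 2: carry the image name in the error itself.
      throw new IOException("Failed to load image from " + imageFile, ioe);
    }
  }

  static void load(File f) throws IOException {
    throw new IOException("simulated corrupt fsimage");
  }
}
{code}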



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9648) TestStartup.testImageChecksum is broken by HDFS-9569's message change

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9648:
-
Fix Version/s: 2.8.0

> TestStartup.testImageChecksum is broken by HDFS-9569's message change
> -
>
> Key: HDFS-9648
> URL: https://issues.apache.org/jira/browse/HDFS-9648
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: test
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9648.001.patch
>
>
> The Jenkins log shows that TestStartup.testImageChecksum has failed 5 times 
> in a row.
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2724/testReport/org.apache.hadoop.hdfs.server.namenode/TestStartup/testImageChecksum/
> It seems HDFS-9569 by Yongjun changed the exception message, while this test 
> was matching the exact text:
> Expected to find 'Failed to load an FSImage file!' but got unexpected 
> exception: java.io.IOException: Failed to load FSImage file, see error(s) 
> above for more info.
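
One way to make such tests less brittle is to assert on a stable substring 
rather than the exact message; a sketch (the start helper is hypothetical, and 
this assumes Hadoop's {{GenericTestUtils.assertExceptionContains}}):

{code}
import java.io.IOException;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert;

class ImageChecksumTestSketch {
  void checkLoadFailure() {
    try {
      startNameNodeWithCorruptImage();   // hypothetical helper for the sketch
      Assert.fail("NameNode start should have failed on a corrupt image");
    } catch (IOException ioe) {
      // Assert on a stable fragment instead of the exact wording, so a
      // message tweak like the one in HDFS-9569 cannot break the test.
      GenericTestUtils.assertExceptionContains("Failed to load", ioe);
    }
  }

  private void startNameNodeWithCorruptImage() throws IOException {
    throw new IOException(
        "Failed to load FSImage file, see error(s) above for more info.");
  }
}
{code}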



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8521) Add @VisibleForTesting annotation to {{BlockPoolSlice#selectReplicaToDelete}}

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8521:
-
Fix Version/s: 2.8.0

> Add @VisibleForTesting annotation to {{BlockPoolSlice#selectReplicaToDelete}}
> -
>
> Key: HDFS-8521
> URL: https://issues.apache.org/jira/browse/HDFS-8521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Trivial
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8521.001.patch
>
>
> Add @VisibleForTesting annotation to {{BlockPoolSlice#selectReplicaToDelete}}
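
For context, the annotation is purely documentary; a minimal sketch of the 
pattern (toy method body, not the real {{BlockPoolSlice}} signature):

{code}
import com.google.common.annotations.VisibleForTesting;

class BlockPoolSliceSketch {
  // Widened visibility only so unit tests can exercise the selection policy;
  // the annotation signals that production code should not call it directly.
  @VisibleForTesting
  String selectReplicaToDelete(String replicaA, String replicaB) {
    return replicaA.compareTo(replicaB) <= 0 ? replicaA : replicaB;
  }
}
{code}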



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11282:
-
Attachment: HDFS-11282.004.patch

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803203#comment-15803203
 ] 

Yiqun Lin commented on HDFS-11282:
--

Thanks [~arpiagariu] for the review and comment. New patch attached to address 
the comment.

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume I/O operations, but they 
> have not been documented yet. This JIRA addresses that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8581) ContentSummary on / skips further counts on yielding lock

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8581:
-
Fix Version/s: 2.8.0

> ContentSummary on / skips further counts on yielding lock
> -
>
> Key: HDFS-8581
> URL: https://issues.apache.org/jira/browse/HDFS-8581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: tongshiquan
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HDFS-8581.1.patch, HDFS-8581.2.patch, HDFS-8581.3.patch, 
> HDFS-8581.4.patch
>
>
> If one directory such as "/result" contains a huge number of files, then 
> when executing "hdfs dfs -count /", the result goes wrong: for all 
> directories whose names sort after "/result", the file counts are not 
> included.
> My cluster is shown below. "/result_1433858936" is the directory containing 
> the huge number of files, and files in "/sparkJobHistory", "/tmp", "/user" 
> are not counted
> vm-221:/export1/BigData/current # hdfs dfs -ls /
> 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
> Found 9 items
> -rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
> /PRE_CREATE_DIR.SUCCESS
> drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
> drwx--   - hbase  hadoop  0 2015-06-10 15:25 /hbase
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
> drwxrwxrwx   - mapred hadoop  0 2015-06-08 12:08 /mr-history
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-09 22:10 
> /result_1433858936
> drwxrwxrwx   - spark  supergroup  0 2015-06-10 19:15 /sparkJobHistory
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-08 12:14 /tmp
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-09 21:57 /user
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /
> 15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled.
> 1043   171536 1756375688 /
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS
> 15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled.
>01  0 /PRE_CREATE_DIR.SUCCESS
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /flume
> 15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /flume
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hbase
> 15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled.
>   36   18  14807 /hbase
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hyt
> 15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /hyt
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /mr-history
> 15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled.
>30  0 /mr-history
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936
> 15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled.
> 1001   171517 1756360881 /result_1433858936
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory
> 15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled.
>13  21785 /sparkJobHistory
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /tmp
> 15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled.
>   176  35958 /tmp
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /user
> 15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled.
>   121  19077 /user



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9043) Doc updation for commands in HDFS Federation

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9043:
-
Fix Version/s: 2.8.0

> Doc updation for commands in HDFS Federation
> 
>
> Key: HDFS-9043
> URL: https://issues.apache.org/jira/browse/HDFS-9043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-9043-1.patch, HDFS-9043-branch-2-1.patch, 
> HDFS-9043-branch-2.7.0-1.patch
>
>
> 1. This command is wrong (see the corrected usage sketched below):
> {noformat}
>  $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNameNode 
> <datanode_host>:<datanode_ipc_port>
> {noformat}
> The correct command is: hdfs dfsadmin -refreshNameNode*s*
> 2. This command is wrong:
> {noformat}
>  $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script 
> $HADOOP_PREFIX/bin/hdfs start balancer 
> {noformat}
> The correct command is: *start-balancer.sh -policy*
> 3. The reference link to the balancer documentation is wrong:
> {noformat}
> Note that Balancer only balances the data and does not balance the namespace. 
> For the complete command usage, see balancer.
> {noformat}
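
Taken together, a sketch of the corrected usage (arguments are placeholders; 
the exact option spelling is whatever {{hdfs dfsadmin -help}} reports):

{noformat}
# 1. Refresh the set of NameNodes a given DataNode serves (note the plural):
$HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes <datanode_host>:<datanode_ipc_port>

# 2. Start the balancer with an explicit placement policy:
$HADOOP_PREFIX/sbin/start-balancer.sh -policy datanode
{noformat}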



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9042) Update document for the Storage policy name

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9042:
-
Fix Version/s: 2.8.0

> Update document for the Storage policy name
> ---
>
> Key: HDFS-9042
> URL: https://issues.apache.org/jira/browse/HDFS-9042
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-9042.1.patch
>
>
> Storage policy name :
> Incorrect : "Lasy_Persist" 
> Correct   : "Lazy_Persist" 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8633) Fix setting of dfs.datanode.readahead.bytes in hdfs-default.xml to match DFSConfigKeys

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8633:
-
Fix Version/s: 2.8.0

> Fix setting of dfs.datanode.readahead.bytes in hdfs-default.xml to match 
> DFSConfigKeys
> --
>
> Key: HDFS-8633
> URL: https://issues.apache.org/jira/browse/HDFS-8633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie, supportability
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8633.001.patch
>
>
> Found this using the XML/Config verifier.  One of these properties has two 
> digits swapped.
>   XML Property: dfs.datanode.readahead.bytes
>   XML Value:4193404
>   Config Name:  DFS_DATANODE_READAHEAD_BYTES_DEFAULT
>   Config Value: 4194304
> What is the intended value?
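
For what it's worth, 4194304 is exactly 4 × 1024 × 1024 (4 MiB), while 4193404 
is not a round number, which suggests the {{DFSConfigKeys}} default is the 
intended one. The corrected hdfs-default.xml entry would then read:

{noformat}
<property>
  <name>dfs.datanode.readahead.bytes</name>
  <value>4194304</value>
</property>
{noformat}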



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11221) Have StorageDirectory return Optional instead of File/null

2017-01-05 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803201#comment-15803201
 ] 

Jiajia Li commented on HDFS-11221:
--

Hi [~ehiggs], can I take this JIRA?

> Have StorageDirectory return Optional instead of File/null
> 
>
> Key: HDFS-11221
> URL: https://issues.apache.org/jira/browse/HDFS-11221
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Ewan Higgs
>Priority: Minor
>
> In HDFS-10675, {{StorageDirectory.root}} can be {{null}} because {{PROVIDED}} 
> storage locations will not have any directories associated with them. Hence, 
> we need to add checks to StorageDirectory to make sure we handle this. This 
> would also lead to changes in code that call {{StorageDirectory.getRoot}}, 
> {{StorageDirectory.getCurrentDir}}, {{StorageDirectory.getVersionFile}} etc. 
> as the return value can be {{null}} (if {{StorageDirectory.root}} is null).
> The proposal to handle this is to change the return type of the above 
> functions to {{Optional}}. According to my preliminary check, this will 
> result in changes in ~70 places, which is why it's not appropriate to put it 
> in the patch for HDFS-10675. But it is certainly a valuable fix.
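
A minimal sketch of the proposed shape (toy class, not the real 
{{StorageDirectory}}):

{code}
import java.io.File;
import java.util.Optional;

class StorageDirectorySketch {
  private final File root;               // may be null for PROVIDED storage

  StorageDirectorySketch(File root) {
    this.root = root;
  }

  // Instead of returning File-or-null, make the absence explicit:
  Optional<File> getRoot() {
    return Optional.ofNullable(root);
  }

  static void caller(StorageDirectorySketch sd) {
    // Callers are forced to handle the PROVIDED (no directory) case:
    sd.getRoot().ifPresent(r ->
        System.out.println("version file: " + new File(r, "current/VERSION")));
  }
}
{code}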



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8101:
-
Fix Version/s: 2.8.0

> DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
> runtime
> ---
>
> Key: HDFS-8101
> URL: https://issues.apache.org/jira/browse/HDFS-8101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-8101.1.patch.txt
>
>
> Previously, all references to DFSConfigKeys in DFSClient were compile time 
> constants which meant that normal users of DFSClient wouldn't resolve 
> DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
> member of DFSConfigKeys that isn't compile time constant 
> (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
> Since the class must be resolved now, this particular member
> {code}
> public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = 
> AuthFilter.class.getName();
> {code}
> means that javax.servlet.Filter needs to be on the classpath.
> javax-servlet-api is one of the properly listed dependencies for HDFS, 
> however if we replace {{AuthFilter.class.getName()}} with the equivalent 
> String literal then downstream folks can avoid including it while maintaining 
> compatibility.
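> A sketch of that replacement, assuming the literal matches {{AuthFilter}}'s
> fully-qualified name:
> {code}
> // A String literal is a compile-time constant, so loading DFSConfigKeys no
> // longer forces resolution of AuthFilter (and javax.servlet.Filter).
> public static final String DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
>     "org.apache.hadoop.hdfs.web.AuthFilter";
> {code}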



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8099) Change "DFSInputStream has been closed already" message to debug log level

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8099:
-
Fix Version/s: 2.8.0

> Change "DFSInputStream has been closed already" message to debug log level
> --
>
> Key: HDFS-8099
> URL: https://issues.apache.org/jira/browse/HDFS-8099
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8099.000.patch, HDFS-8099.001.patch
>
>
> The hadoop fs -get command always shows this warning:
> {noformat}
> $ hadoop fs -get /data/schemas/sfdc/BusinessHours-2014-12-09.avsc
> 15/04/06 06:22:19 WARN hdfs.DFSClient: DFSInputStream has been closed already
> {noformat}
> This was introduced by HDFS-7494. The easiest thing is to just remove the 
> warning from the code.
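> If the message is kept at all, a minimal sketch of demoting it to debug level:
> {code}
> // Routine double-closes stay out of user-facing command output.
> if (DFSClient.LOG.isDebugEnabled()) {
>   DFSClient.LOG.debug("DFSInputStream has been closed already");
> }
> {code}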



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10347) Namenode report bad block method doesn't log the bad block or datanode.

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10347:
--
Fix Version/s: 2.8.0

> Namenode report bad block method doesn't log the bad block or datanode.
> ---
>
> Key: HDFS-10347
> URL: https://issues.apache.org/jira/browse/HDFS-10347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-10347.patch
>
>
> Currently the method {{FSNamesystem#reportBadBlocks}} doesn't log any 
> information regarding the bad block id or the datanode on which the corrupt 
> block is detected.
> It would be helpful to log that information to debug.
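> A sketch of the kind of message that would help (variable names assumed):
> {code}
> // Record which block was reported corrupt and by which datanode.
> LOG.info("*DIR* reportBadBlocks for block: " + blk
>     + " on datanode: " + node.getXferAddr());
> {code}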



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8384:
-
Fix Version/s: 2.8.0

> Allow NN to startup if there are files having a lease but are not under 
> construction
> 
>
> Key: HDFS-8384
> URL: https://issues.apache.org/jira/browse/HDFS-8384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>Priority: Minor
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, 
> HDFS-8384.000.patch
>
>
> When there are files having a lease but are not under construction, NN will 
> fail to start up with
> {code}
> 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for 
> /hadoop/hdfs/namenode
> java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124)
> ...
> {code}
> The actual problem is that the image could be corrupted by bugs like 
> HDFS-7587.  We should have an option/conf to allow the NN to start up so that the 
> problematic files could possibly be deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8405) Fix a typo in NamenodeFsck

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8405:
-
Fix Version/s: 2.8.0

> Fix a typo in NamenodeFsck
> --
>
> Key: HDFS-8405
> URL: https://issues.apache.org/jira/browse/HDFS-8405
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8405.1.patch
>
>
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY below should not be quoted.
> {code}
>   res.append("\n  
> ").append("DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY:\t")
>  .append(minReplication);
> {code}
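> A sketch of the corrected line, printing the constant's value (the actual
> configuration key) rather than its quoted Java name:
> {code}
>   res.append("\n  ").append(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY + ":\t")
>      .append(minReplication);
> {code}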



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10186) DirectoryScanner: Improve logs by adding full path of both actual and expected block directories

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10186:
--
Fix Version/s: 2.8.0

> DirectoryScanner: Improve logs by adding full path of both actual and 
> expected block directories
> 
>
> Key: HDFS-10186
> URL: https://issues.apache.org/jira/browse/HDFS-10186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-10186-001.patch
>
>
> As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-7648?focusedCommentId=15195908=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15195908],
>  this jira is to improve the directory scanner log by adding both the wrong and 
> the correct directory paths so that admins can take the necessary actions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10319) Balancer should not try to pair storages with different types

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10319:
--
Fix Version/s: 2.8.0

> Balancer should not try to pair storages with different types
> -
>
> Key: HDFS-10319
> URL: https://issues.apache.org/jira/browse/HDFS-10319
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h10319_20160420.patch
>
>
> This is a performance bug – Balancer may pair a source datanode and a target 
> datanode with different storage types. Fortunately, it will fail to schedule any 
> blocks in such a pair, since it will later find out that the storage types do not 
> match.
> The bug won't lead to incorrect results.
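> A minimal sketch of the missing up-front guard (names assumed):
> {code}
> // Pairs failing this check are skipped instead of scheduled and later rejected.
> private boolean matchStorageType(StorageGroup source, StorageGroup target) {
>   return source.getStorageType() == target.getStorageType();
> }
> {code}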



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7546) Document, and set an accepting default for dfs.namenode.kerberos.principal.pattern

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7546:
-
Fix Version/s: 2.8.0

> Document, and set an accepting default for 
> dfs.namenode.kerberos.principal.pattern
> --
>
> Key: HDFS-7546
> URL: https://issues.apache.org/jira/browse/HDFS-7546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.1.1-beta
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-7546.addendum.001.patch, HDFS-7546.patch
>
>
> This config is used in the SaslRpcClient, and the lack of a default breaks 
> the use of cross-realm trust principals at clients.
> Current location: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L309
> The config should be documented and the default should be set to * to 
> preserve the prior-to-introduction behaviour.
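> In configuration terms, the accepting default amounts to (a sketch; key name
> taken from this issue):
> {code}
> // Accept any principal unless the administrator narrows the pattern.
> conf.setIfUnset("dfs.namenode.kerberos.principal.pattern", "*");
> {code}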



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8151) Always use snapshot path as source when invalid snapshot names are used for diff based distcp

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8151:
-
Fix Version/s: 2.8.0

> Always use snapshot path as source when invalid snapshot names are used for 
> diff based distcp
> -
>
> Key: HDFS-8151
> URL: https://issues.apache.org/jira/browse/HDFS-8151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8151.000.patch
>
>
> This is a bug reported by [~ssreenivasan]:
> HDFS-8036 makes the diff-based distcp use snapshot path as the source. This 
> should also happen when
> # invalid snapshot names are provided as distcp parameters thus the diff 
> report computation on the target cluster fails
> # there is modification happening in the target cluster thus 
> {{checkNoChange}} returns false
> In other cases like source and target FS are not DistributedFileSystem, we 
> should throw exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8204:
-
Fix Version/s: 2.8.0

> Mover/Balancer should not schedule two replicas to the same DN
> --
>
> Key: HDFS-8204
> URL: https://issues.apache.org/jira/browse/HDFS-8204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch, 
> HDFS-8204.003.patch
>
>
> Balancer moves blocks between Datanodes in older versions (< 2.6).
> In newer versions (>= 2.6), Balancer moves blocks between StorageGroups 
> (introduced by HDFS-6584).
> The function
> {code}
> class DBlock extends Locations<StorageGroup>
> DBlock.isLocatedOn(StorageGroup loc)
> {code}
> -is flawed and may cause 2 replicas to end up on the same node after running the balancer.-
> For example:
> We have 2 nodes. Each node has two storages.
> We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
> We have a block with ONE_SSD storage policy.
> The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
> Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
> Otherwise DN1 has 2 replicas.
> --
> UPDATE(Thanks [~szetszwo] for pointing it out):
> {color:red}
> This bug will *NOT* cause 2 replicas to end up on the same node after running the balancer, 
> thanks to the Datanode rejecting the transfer. 
> {color}
> We see a lot of ERROR when running test.
> {code}
> 2015-04-27 10:08:15,809 ERROR datanode.DataNode (DataXceiver.java:run(277)) - 
> host1.foo.com:59537:DataXceiver error processing REPLACE_BLOCK operation  
> src: /127.0.0.1:52532 dst: /127.0.0.1:59537
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-264794661-9.96.1.34-1430100451121:blk_1073741825_1001 already exists in 
> state FINALIZED and thus cannot be created.
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1447)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:186)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.replaceBlock(DataXceiver.java:1158)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReplaceBlock(Receiver.java:229)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> at java.lang.Thread.run(Thread.java:722)
> {code}
> The Balancer runs 5~20 iterations in the test before it exits.
> It's inefficient.
> The Balancer should not *schedule* such a move in the first place, even though it would 
> fail anyway. In the test, it should exit after 5 iterations.
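> A minimal sketch of a datanode-level check (helper name hypothetical):
> {code}
> // Test locatedness per datanode rather than per storage group, so a block on
> // (DN1, DISK) also rules out a move targeting (DN1, SSD).
> boolean isLocatedOnDatanode(DatanodeInfo dn) {
>   for (StorageGroup loc : locations) {
>     if (loc.getDatanodeInfo().equals(dn)) {
>       return true;
>     }
>   }
>   return false;
> }
> {code}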



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7931) DistributedFileSystem should not look for keyProvider in cache if Encryption is disabled

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7931:
-
Fix Version/s: 2.8.0

> DistributedFileSystem should not look for keyProvider in cache if Encryption 
> is disabled 
> -
>
> Key: HDFS-7931
> URL: https://issues.apache.org/jira/browse/HDFS-7931
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-7931.1.patch, HDFS-7931.2.patch, HDFS-7931.2.patch, 
> HDFS-7931.3.patch
>
>
> The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
> {{DFSClient#getKeyProvider()}} which attempts to get a provider from the 
> {{KeyProviderCache}}, but since the required key, 
> *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
> disabled), it throws an exception.
> {noformat}
> 2015-03-11 23:55:47,849 [JobControl] ERROR 
> org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> {noformat}
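> A minimal sketch of the guard (config key taken from the log above; method
> shape assumed):
> {code}
> // Return no provider when encryption is not configured, instead of consulting
> // the cache and logging an error.
> KeyProvider getKeyProvider() throws IOException {
>   if (conf.getTrimmed("dfs.encryption.key.provider.uri", "").isEmpty()) {
>     return null; // encryption disabled; nothing to look up
>   }
>   return clientContext.getKeyProviderCache().get(conf);
> }
> {code}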



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9574:
-
Fix Version/s: 2.8.0

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 2.7.2, 2.6.4, 3.0.0-alpha1
>
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9033) dfsadmin -metasave prints "NaN" for cache used%

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9033:
-
Fix Version/s: 2.8.0

> dfsadmin -metasave prints "NaN" for cache used%
> ---
>
> Key: HDFS-9033
> URL: https://issues.apache.org/jira/browse/HDFS-9033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-9033.patch
>
>
> In a metasave file, "NaN" is getting printed for cache used% --
> For metasave file --
> hdfs dfsadmin -metasave fnew
> vi fnew
> Metasave: Number of datanodes: 3
> DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 
> 185318637568(172.59 GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 
> 07 17:22:42
> In the DN report, the cache stats are -
> hdfs dfsadmin -report
> Decommission Status : Normal
> Configured Capacity: 211378954240 (196.86 GB)
> DFS Used: 3121152 (2.98 MB)
> Non DFS Used: 16376107008 (15.25 GB)
> DFS Remaining: 194999726080 (181.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 92.25%
> {color:red}
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> {color}
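> The NaN comes from dividing a zero cache-used value by a zero cache capacity.
> A minimal sketch of the guard (helper name hypothetical):
> {code}
> // 0 capacity yields a defined percentage instead of 0.0f / 0.0f == NaN.
> static float getCacheUsedPercent(long cacheUsed, long cacheCapacity) {
>   return cacheCapacity <= 0 ? 0.0f : (100.0f * cacheUsed) / cacheCapacity;
> }
> {code}
> Whether an unconfigured cache should report 0% used (as sketched) or 100% (as
> the dfsadmin -report output above does) is a separate policy choice.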



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9740) Use a reasonable limit in DFSTestUtil.waitForMetric()

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9740:
-
Fix Version/s: 2.8.0

> Use a reasonable limit in DFSTestUtil.waitForMetric()
> -
>
> Key: HDFS-9740
> URL: https://issues.apache.org/jira/browse/HDFS-9740
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Kihwal Lee
>Assignee: Chang Li
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9740-branch-2.7.patch, HDFS-9740-branch-2.patch, 
> HDFS-9740.patch
>
>
> If the test detects a bug, it will probably hit the long surefire timeout 
> because the max is {{Integer.MAX_VALUE}}.  Use something more realistic. The 
> default jmx update interval is 10 seconds, so something like 60 seconds 
> should be more than enough.
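> A sketch of the bounded wait (metric accessor hypothetical; {{waitFor}} takes a
> Guava {{Supplier}}):
> {code}
> // Poll every second; give up after 60s instead of Integer.MAX_VALUE.
> GenericTestUtils.waitFor(new Supplier<Boolean>() {
>   @Override
>   public Boolean get() {
>     return getMetricValue() == expectedValue; // hypothetical accessor
>   }
> }, 1000, 60 * 1000);
> {code}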



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9730) Storage ID update does not happen when there is a layout change

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9730:
-
Fix Version/s: 2.8.0

> Storage ID update does not happen when there is a layout change
> ---
>
> Key: HDFS-9730
> URL: https://issues.apache.org/jira/browse/HDFS-9730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h9730_20160202.patch, h9730_20160203.patch
>
>
> HDFS-9654 will cause test failures when we increment the datanode layout 
> version next time.
> {noformat}
> TestDatanodeStartupFixesLegacyStorageIDs#testUpgradeFrom22via26FixesStorageIDs
> TestDatanodeStartupFixesLegacyStorageIDs#testUpgradeFrom22FixesStorageIDs
> {noformat}
> This is because createStorageID() is no longer called when it goes through 
> the layout upgrade path. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9690) ClientProtocol.addBlock is not idempotent after HDFS-8071

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9690:
-
Fix Version/s: 2.8.0

> ClientProtocol.addBlock is not idempotent after HDFS-8071
> -
>
> Key: HDFS-9690
> URL: https://issues.apache.org/jira/browse/HDFS-9690
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h9690_20160124.patch, h9690_20160124b.patch, 
> h9690_20160124b_branch-2.7.patch
>
>
> TestDFSClientRetries#testIdempotentAllocateBlockAndClose can illustrate the 
> bug. It failed in the following builds.
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14188/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14201/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14202/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9431) DistributedFileSystem#concat fails if the target path is relative.

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9431:
-
Fix Version/s: 2.8.0

> DistributedFileSystem#concat fails if the target path is relative.
> --
>
> Key: HDFS-9431
> URL: https://issues.apache.org/jira/browse/HDFS-9431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Kazuho Fujii
>Assignee: Kazuho Fujii
> Fix For: 2.8.0, 2.7.2, 2.6.3, 3.0.0-alpha1
>
> Attachments: HDFS-9431.001.patch, HDFS-9431.002.patch
>
>
> {{DistributedFileSystem#concat}} fails if the target path is relative.
> The method tries to send a relative path to DFSClient at the first call.
> bq.  dfs.concat(getPathName(trg), srcsStr);
> But {{getPathName}} fails there. It seems that {{trg}} should be {{absF}}, as in 
> the second call.
> bq.  dfs.concat(getPathName(absF), srcsStr);
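> A minimal sketch of the fix, qualifying the target before extracting its name:
> {code}
> Path absF = fixRelativePart(trg);
> dfs.concat(getPathName(absF), srcsStr);
> {code}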



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9365:
-
Fix Version/s: 2.8.0

> Balancer does not work with the HDFS-6376 HA setup
> --
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: h9365_20151119.patch, h9365_20151120.patch, 
> h9365_20160523.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters.  After the 
> change, Balancer will use all the NNs from both the local and the remote 
> clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9383) TestByteArrayManager#testByteArrayManager fails

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9383:
-
Fix Version/s: 2.8.0

> TestByteArrayManager#testByteArrayManager fails
> ---
>
> Key: HDFS-9383
> URL: https://issues.apache.org/jira/browse/HDFS-9383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: h9383_20151107.patch, hdfs-9383.log
>
>
> This was seen in the trunk builds
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> {noformat}
> Running org.apache.hadoop.hdfs.util.TestByteArrayManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.539 sec <<< 
> FAILURE!
>  - in org.apache.hadoop.hdfs.util.TestByteArrayManager
> testByteArrayManager(org.apache.hadoop.hdfs.util.TestByteArrayManager)  Time 
> elapsed: 5.409 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<[32: 2/64, free=5]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.hdfs.util.TestByteArrayManager.testByteArrayManager(TestByteArrayManager.java:384)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9533) seen_txid in the shared edits directory is modified during bootstrapping

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9533:
-
Fix Version/s: 2.8.0

> seen_txid in the shared edits directory is modified during bootstrapping
> 
>
> Key: HDFS-9533
> URL: https://issues.apache.org/jira/browse/HDFS-9533
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.6.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9533.patch
>
>
> The last known transaction id is stored in the seen_txid file of all known 
> directories of a NNStorage when starting a new edit segment. However, we have 
> seen a case where it contains an id that falls in the middle of an edit 
> segment. This was the seen_txid file in the shared edits directory.  The 
> active namenode's local storage contained a valid-looking seen_txid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9476:
-
Fix Version/s: 2.8.0

> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent one is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9516) truncate file fails with data dirs on multiple disks

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9516:
-
Fix Version/s: 2.8.0

> truncate file fails with data dirs on multiple disks
> 
>
> Key: HDFS-9516
> URL: https://issues.apache.org/jira/browse/HDFS-9516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Plamen Jeliazkov
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9516_1.patch, HDFS-9516_2.patch, HDFS-9516_3.patch, 
> HDFS-9516_testFailures.patch, Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception) but the file is never closed 
> and is not writable after this.
> It seems to be because of copy-on-truncate, which is used because the system 
> is in an upgrade state. In this case a rename between devices is attempted.
> See the attached log and repro code.
> Probably also affects truncating a snapshotted file, where copy-on-truncate is also 
> used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
> newBlockId, recoveryId, rur.getVolume(), 
> blockFile.getParentFile(),
> newlength);
> {code}
> blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed to 
> choose any volume, so rur.getVolume() is not where the block is located.
>  
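> A sketch of the repair direction (volume lookup is hypothetical):
> {code}
> // Resolve the volume from where the copied block file actually lives, rather
> // than assuming the replica-under-recovery's original volume.
> FsVolumeSpi targetVolume = getVolume(blockFile); // hypothetical lookup by file
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>     newBlockId, recoveryId, targetVolume, blockFile.getParentFile(), newlength);
> {code}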



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9555) LazyPersistFileScrubber should still sleep if there are errors in the clear progress

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9555:
-
Fix Version/s: 2.8.0

> LazyPersistFileScrubber should still sleep if there are errors in the clear 
> progress
> 
>
> Key: HDFS-9555
> URL: https://issues.apache.org/jira/browse/HDFS-9555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: 9555-v1.patch
>
>
> If LazyPersistFileScrubber.clearCorruptLazyPersistFiles throws an exception in 
> run(), there is no sleep logic, so it restarts immediately. However, it may 
> still fail, so the namenode log fills with ERROR messages saying 
> "Ignoring exception in LazyPersistFileScrubber".
> We need to sleep if we catch the exception.
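> A minimal sketch of the run loop with the sleep moved into a finally block:
> {code}
> // Sleep whether or not the scrub succeeded, so a persistent failure cannot
> // become a hot retry loop that floods the log.
> while (fsRunning && shouldRun) {
>   try {
>     clearCorruptLazyPersistFiles();
>   } catch (Exception e) {
>     LOG.error("Ignoring exception in LazyPersistFileScrubber:", e);
>   } finally {
>     try {
>       Thread.sleep(scrubIntervalSec * 1000);
>     } catch (InterruptedException e) {
>       LOG.info("LazyPersistFileScrubber was interrupted, exiting");
>       break;
>     }
>   }
> }
> {code}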



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9634:
-
Fix Version/s: 2.8.0

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0, 2.7.1, 3.0.0-alpha1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there 
> are no details beyond the fact that a timeout occurred. Ideally it should say 
> which node is responsible for the timeout, but failing that it should at 
> least say which node we're talking to so we can examine that node's logs to 
> further investigate.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at sun.net.www.MeteredStream.read(MeteredStream.java:134)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
> at 
> org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at 
> com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
> at java.io.FilterInputStream.read(FilterInputStream.java:107)
> at 
> com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
> at 
> com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
> at 
> com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
> at 
> com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to nor which datanode 
> was responsible for the timeout.
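> A sketch of the kind of context the client could attach (wrapper and names
> assumed):
> {code}
> // Rethrow read timeouts with the URL being read, so the responsible datanode
> // can be identified from the client-side trace alone.
> int readWithContext(InputStream in, byte[] buf, HttpURLConnection connection)
>     throws IOException {
>   try {
>     return in.read(buf);
>   } catch (SocketTimeoutException e) {
>     throw new IOException("Read timed out talking to " + connection.getURL(), e);
>   }
> }
> {code}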



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8597) Fix TestFSImage#testZeroBlockSize on Windows

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8597:
-
Fix Version/s: 2.8.0

> Fix TestFSImage#testZeroBlockSize on Windows
> 
>
> Key: HDFS-8597
> URL: https://issues.apache.org/jira/browse/HDFS-8597
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, test
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8597.00.patch, HDFS-8597.01.patch
>
>
> The last portion of the dfs.datanode.data.dir is incorrectly formatted.
> {code}2015-06-14 09:44:37,133 INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:startDataNodes(1413)) - Starting DataNode 0 with 
> dfs.datanode.data.dir: 
> file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
> 2015-06-14 09:44:37,141 ERROR common.Util (Util.java:stringAsURI(50)) - 
> Syntax error in URI 
> file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data.
>  Please check hdfs configuration.
> java.net.URISyntaxException: Illegal character in authority at index 7: 
> file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
> {code}
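> A minimal sketch of a Windows-safe construction ({{baseDir}} assumed):
> {code}
> // File.toURI() produces a legal file: URI (drive letter and backslashes
> // escaped), instead of concatenating "file://" with a native path.
> String dataDir = new File(baseDir, "test/dfs/data").toURI().toString();
> conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, dataDir);
> {code}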



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8615) Correct HTTP method in WebHDFS document

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8615:
-
Fix Version/s: 2.8.0

> Correct HTTP method in WebHDFS document
> ---
>
> Key: HDFS-8615
> URL: https://issues.apache.org/jira/browse/HDFS-8615
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.1
>Reporter: Akira Ajisaka
>Assignee: Brahma Reddy Battula
>  Labels: newbie
> Fix For: 2.8.0, 2.7.2, 2.6.3, 3.0.0-alpha1
>
> Attachments: HDFS-8615.branch-2.6.patch, HDFS-8615.patch
>
>
> For example, {{-X PUT}} should be removed from the following curl command.
> {code:title=WebHDFS.md}
> ### Get ACL Status
> * Submit a HTTP GET request.
> curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
> {code}
> Other than this example, there are several commands which {{-X PUT}} should 
> be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8659) Block scanner INFO message is spamming logs

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8659:
-
Fix Version/s: 2.8.0

> Block scanner INFO message is spamming logs
> ---
>
> Key: HDFS-8659
> URL: https://issues.apache.org/jira/browse/HDFS-8659
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch
>
>
> We are seeing the following message spam the DN log:
> {quote}
> 2015-06-16 08:51:10,566 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious 
> block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on 
> DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is 
> disabled.
> {quote}
> Filing this jira to change this and other relevant messages to debug level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8772) Fix TestStandbyIsHot#testDatanodeRestarts which occasionally fails

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8772:
-
Fix Version/s: 2.8.0

> Fix TestStandbyIsHot#testDatanodeRestarts which occasionally fails  
> 
>
> Key: HDFS-8772
> URL: https://issues.apache.org/jira/browse/HDFS-8772
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-8772-branch-2.04.patch, HDFS-8772.01.patch, 
> HDFS-8772.02.patch, HDFS-8772.03.patch, HDFS-8772.04.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11598/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11600/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11606/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11608/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11612/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11618/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11650/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11655/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11659/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11663/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11664/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11667/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11669/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11676/testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/11677/testReport/
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testDatanodeRestarts(TestStandbyIsHot.java:188)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7980:
-
Fix Version/s: 2.8.0

> Incremental BlockReport will dramatically slow down the startup of a namenode
> --
>
> Key: HDFS-7980
> URL: https://issues.apache.org/jira/browse/HDFS-7980
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hui Zheng
>Assignee: Walter Su
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-7980-branch-2.6.1.txt, HDFS-7980.001.patch, 
> HDFS-7980.002.patch, HDFS-7980.003.patch, HDFS-7980.004.patch, 
> HDFS-7980.004.repost.patch
>
>
> In the current implementation the datanode will call the 
> reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, before 
> calling the bpNamenode.blockReport() method. So in a large (several thousands 
> of datanodes) and busy cluster, it will slow down the startup of the namenode 
> by more than one hour. 
> {code}
> List<DatanodeCommand> blockReport() throws IOException {
> // send block report if timer has expired.
> final long startTime = now();
> if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
>   return null;
> }
> final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
> // Flush any block information that precedes the block report. Otherwise
> // we have a chance that we will miss the delHint information
> // or we will report an RBW replica after the BlockReport already reports
> // a FINALIZED one.
> reportReceivedDeletedBlocks();
> lastDeletedReport = startTime;
> .
> // Send the reports to the NN.
> int numReportsSent = 0;
> int numRPCs = 0;
> boolean success = false;
> long brSendStartTime = now();
> try {
>   if (totalBlockCount < dnConf.blockReportSplitThreshold) {
> // Below split threshold, send all reports in a single message.
> DatanodeCommand cmd = bpNamenode.blockReport(
> bpRegistration, bpos.getBlockPoolId(), reports);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8091:
-
Fix Version/s: 2.8.0

> ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
> before returning to client 
> ---
>
> Key: HDFS-8091
> URL: https://issues.apache.org/jira/browse/HDFS-8091
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8091.1.patch
>
>
> HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
> implementation of which can be plugged-in so that the Attributes (user / 
> group / permission / acls and xattrs) that are returned for an HDFS path can 
> be altered/enhanced by the user specified code before it is returned to the 
> client.
> Unfortunately, it looks like the AclStatus and XAttributes are not properly 
> presented to the user-specified {{INodeAttributesProvider}} before they are 
> returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8046) Allow better control of getContentSummary

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8046:
-
Fix Version/s: 2.8.0

> Allow better control of getContentSummary
> -
>
> Key: HDFS-8046
> URL: https://issues.apache.org/jira/browse/HDFS-8046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-8046-branch-2.6.1.txt, HDFS-8046.v1.patch
>
>
> On busy clusters, users performing quota checks against a big directory 
> structure can affect the namenode performance. It has become a lot better 
> after HDFS-4995, but as clusters get bigger and busier, it is apparent that 
> we need finer-grained control to avoid a long read lock causing a throughput drop.
> Even with the unfair namesystem lock setting, a long read lock (tens of 
> milliseconds) can starve many readers and especially writers. So the locking 
> duration should be reduced, which can be done by imposing a lower 
> count-per-iteration limit in the existing implementation.  But HDFS-4995 came 
> with a fixed amount of sleep between locks. This needs to be made 
> configurable, so that {{getContentSummary()}} doesn't get exceedingly slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8147) Mover should not schedule two replicas to the same DN storage

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8147:
-
Fix Version/s: 2.8.0

> Mover should not schedule two replicas to the same DN storage
> -
>
> Key: HDFS-8147
> URL: https://issues.apache.org/jira/browse/HDFS-8147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.6.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8147.patch, HDFS-8147_1.patch, HDFS-8147_2.patch, 
> HDFS-8147_3.patch, HDFS-8147_4.patch
>
>
> *Scenario:*
> 1. Three DN cluster.  For DNs storage type is like this.
> DN1 : DISK,ARCHIVE
> DN2 : DISK
> DN3 : DISK,ARCHIVE (All DNs are in same rack)
> 2. One file with two replicas (In DN1 and DN2)
> 3. Set file storage policy COLD
> 4. Now execute Mover.
> *Expected Result:* File blocks should move to DN1:ARCHIVE and DN3:ARCHIVE
> *Actual Result:* {{chooseTargetInSameNode()}} moves the DN1:DISK block to 
> DN1:ARCHIVE, but in the next iteration {{chooseTarget()}} for the same rack 
> again selects DN1:ARCHIVE as the target, where the same block already exists.
> {{chooseTargetInSameNode()}} and {{chooseTarget()}} should not select a 
> node as the target where the same replica already exists.  The dispatcher will fail 
> to move the block, as shown in the log below.  Then, the Mover will try again in 
> the next iteration.
> *Logs*
> {code}
> 15/04/15 10:47:17 WARN balancer.Dispatcher: Failed to move 
> blk_1073741852_1028 with size=11990 from 10.19.92.74:50010:DISK to 
> 10.19.92.73:50010:ARCHIVE through 10.19.92.73:50010: Got error, status 
> message opReplaceBlock 
> BP-1258709199-10.19.92.74-1428292615636:blk_1073741852_1028 received 
> exception 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Replica 
> FinalizedReplica, blk_1073741852_1028, FINALIZED
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8404) Pending block replication can get stuck using older genstamp

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8404:
-
Fix Version/s: 2.8.0

> Pending block replication can get stuck using older genstamp
> 
>
> Key: HDFS-8404
> URL: https://issues.apache.org/jira/browse/HDFS-8404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8404-v0.patch, HDFS-8404-v1.patch
>
>
> If an under-replicated block gets into the pending-replication list, but 
> later the  genstamp of that block ends up being newer than the one originally 
> submitted for replication, the block will fail replication until the NN is 
> restarted. 
> It will be safer if processPendingReplications()  gets up-to-date blockinfo 
> before resubmitting replication work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8081) Split getAdditionalBlock() into two methods.

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8081:
-
Fix Version/s: 2.8.0

> Split getAdditionalBlock() into two methods.
> 
>
> Key: HDFS-8081
> URL: https://issues.apache.org/jira/browse/HDFS-8081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: HDFS-8081-01.patch, HDFS-8081-02.patch, 
> HDFS-8081-03.patch
>
>
> A minor refactoring to introduce two methods one corresponding to Part I and 
> another to Part II to make {{getAdditionalBlock()}} more readable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8361) Choose SSD over DISK in block placement

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8361:
-
Fix Version/s: 2.8.0

> Choose SSD over DISK in block placement
> ---
>
> Key: HDFS-8361
> URL: https://issues.apache.org/jira/browse/HDFS-8361
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: h8361_20150508.patch, h8361_20150612.patch
>
>
> BlockPlacementPolicyDefault chooses the StorageType by iterating the given 
> StorageType EnumMap in its natural order (the order in which the enum 
> constants are declared).  So DISK will be chosen over SSD in One-SSD policy 
> since DISK is declared before SSD as shown below.  We should choose SSD first.
> {code}
> public enum StorageType {
>   DISK(false),
>   SSD(false),
>   ARCHIVE(false),
>   RAM_DISK(true);
>   ...
> }
> {code}
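> A sketch of an explicit preference order (illustration only, not necessarily
> the committed fix):
> {code}
> // Make the preference explicit instead of depending on enum declaration
> // order, so SSD wins over DISK when both can take the remaining replicas.
> static final StorageType[] PREFERENCE =
>     { StorageType.SSD, StorageType.DISK, StorageType.ARCHIVE };
>
> static StorageType choose(EnumMap<StorageType, Integer> remaining) {
>   for (StorageType t : PREFERENCE) {
>     Integer count = remaining.get(t);
>     if (count != null && count > 0) {
>       return t;
>     }
>   }
>   return null; // nothing left to place
> }
> {code}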



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8153) Error Message points to wrong parent directory in case of path component name length error

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8153:
-
Fix Version/s: 2.8.0

> Error Message points to wrong parent directory in case of path component name 
> length error
> --
>
> Key: HDFS-8153
> URL: https://issues.apache.org/jira/browse/HDFS-8153
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha1
>
> Attachments: hdfs-8153.001.patch
>
>
> If the name component length is greater than the permitted length, the error 
> message points to the wrong parent directory for mkdir and touchz.
> Here are examples where the parent directory name in the error message is wrong. In 
> this example dfs.namenode.fs-limits.max-component-length is set to 19.
> {code}
> hdfs dfs -mkdir /user/hrt_qa/FileNameLength/really_big_name_dir01
> mkdir: The maximum path component name limit of really_big_name_dir01 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=21
> {code}
> The expected value for the directory was _/user/hrt_qa/FileNameLength_. The 
> same behavior is observed for touchz
> {code}
> hdfs dfs -touchz /user/hrt_qa/FileNameLength/really_big_name_0004
> touchz: The maximum path component name limit of really_big_name_0004 in 
> directory /user/hrt_qa/ is exceeded: limit=19 length=20
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


