[jira] [Commented] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590245#comment-14590245
 ] 

Akira AJISAKA commented on HDFS-6249:
-

+1, thanks [~surendrasingh].

 Output AclEntry in PBImageXmlWriter
 ---

 Key: HDFS-6249
 URL: https://issues.apache.org/jira/browse/HDFS-6249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: surendra singh lilhore
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6249.patch, HDFS-6249_1.patch


 It would be useful if {{PBImageXmlWriter}} also output {{AclEntry}} information.
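
 A minimal, self-contained sketch of the requested behaviour: serialize each ACL 
 entry of an inode as an XML element. The element names and helper below are 
 illustrative assumptions, not the format a committed patch would emit; entry 
 strings use the "type:name:perms" text form HDFS uses for ACL entries.
 {code}
import java.util.List;

public class AclXmlSketch {
  // Hypothetical helper: render ACL entries as nested XML elements.
  static String aclXml(List<String> entries) {
    StringBuilder sb = new StringBuilder("<acl>");
    for (String e : entries) {
      sb.append("<entry>").append(e).append("</entry>");
    }
    return sb.append("</acl>").toString();
  }

  public static void main(String[] args) {
    // prints: <acl><entry>user:alice:rw-</entry><entry>group:staff:r--</entry></acl>
    System.out.println(aclXml(List.of("user:alice:rw-", "group:staff:r--")));
  }
}
 {code}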



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7390) Provide JMX metrics per storage type

2015-06-17 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-7390:
---
Attachment: HDFS-7390-004.patch

Attaching a patch after fixing checkstyle warnings.

 Provide JMX metrics per storage type
 

 Key: HDFS-7390
 URL: https://issues.apache.org/jira/browse/HDFS-7390
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.5.2
Reporter: Benoy Antony
Assignee: Benoy Antony
  Labels: BB2015-05-TBR
 Attachments: HDFS-7390-003.patch, HDFS-7390-004.patch, 
 HDFS-7390.patch, HDFS-7390.patch


 HDFS-2832 added heterogeneous storage support. In a cluster with different 
 storage types, it is useful to have metrics per storage type. 
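
 A hedged sketch of the idea, not the attached patch: register one MBean per 
 storage type so capacity numbers can be read per type over JMX. The bean 
 interface, attribute names, ObjectName, and the local StorageType enum are all 
 illustrative assumptions.
 {code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PerStorageTypeJmxSketch {
  // Illustrative stand-in for org.apache.hadoop.fs.StorageType.
  public enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  // MXBean convention: the interface name must end in "MXBean".
  public interface StorageTypeStatsMXBean {
    long getCapacityTotal();
    long getCapacityUsed();
  }

  public static class StorageTypeStats implements StorageTypeStatsMXBean {
    private final long total, used;
    public StorageTypeStats(long total, long used) { this.total = total; this.used = used; }
    @Override public long getCapacityTotal() { return total; }
    @Override public long getCapacityUsed() { return used; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // One ObjectName per storage type, e.g. ...,StorageType=DISK
    server.registerMBean(new StorageTypeStats(1L << 40, 1L << 38),
        new ObjectName("Hadoop:service=NameNode,name=StorageTypeStats,StorageType="
            + StorageType.DISK));
  }
}
 {code}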



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-17 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590296#comment-14590296
 ] 

Jing Zhao commented on HDFS-8446:
-

+1 for the 003 patch.

 Separate safemode related operations in GetBlockLocations()
 ---

 Key: HDFS-8446
 URL: https://issues.apache.org/jira/browse/HDFS-8446
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
 HDFS-8446.002.patch, HDFS-8446.003.patch


 Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
 the NN is in SafeMode. This jira proposes to refactor the code to improve 
 readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6249:

Hadoop Flags: Reviewed

 Output AclEntry in PBImageXmlWriter
 ---

 Key: HDFS-6249
 URL: https://issues.apache.org/jira/browse/HDFS-6249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: surendra singh lilhore
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6249.patch, HDFS-6249_1.patch


 It would be useful if {{PBImageXmlWriter}} also output {{AclEntry}} information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8606) Cleanup DFSOutputStream by removing unwanted changes

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590337#comment-14590337
 ] 

Hudson commented on HDFS-8606:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Cleanup DFSOutputStream by removing unwanted changes
 

 Key: HDFS-8606
 URL: https://issues.apache.org/jira/browse/HDFS-8606
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8606-00.patch


 This jira is to clean up a few changes done as part of HDFS-8386. As per 
 [~szetszwo]'s comments, they will affect the write performance. Please see the 
 discussion 
 [here|https://issues.apache.org/jira/browse/HDFS-8386?focusedCommentId=14575386&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14575386].
 The following changes need to be done as part of this jira (see the sketch below):
 # remove {{synchronized}} from getStreamer() since it may unnecessarily block 
 the caller
 # remove setStreamer(..) which is currently not used. We may add it in the 
 HDFS-7285 branch and see how to do synchronization correctly.
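
 A minimal sketch of change #1, assuming the accessor has the shape the 
 description implies (stand-in names, not the actual Hadoop source):
 {code}
class DataStreamerSketch {}

class DFSOutputStreamSketch {
  private DataStreamerSketch streamer = new DataStreamerSketch();

  // Before: synchronized DataStreamerSketch getStreamer() { return streamer; }
  // After: a plain getter, so a thread holding this object's monitor can no
  // longer block readers of the reference.
  DataStreamerSketch getStreamer() {
    return streamer;
  }
}
 {code}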



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8591) Remove support for deprecated configuration key dfs.namenode.decommission.nodes.per.interval

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590346#comment-14590346
 ] 

Hudson commented on HDFS-8591:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8591. Remove support for deprecated configuration key 
dfs.namenode.decommission.nodes.per.interval. (wang: rev 
a3990ca41415515b986a41dacefceee1f05622f8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java


 Remove support for deprecated configuration key 
 dfs.namenode.decommission.nodes.per.interval
 

 Key: HDFS-8591
 URL: https://issues.apache.org/jira/browse/HDFS-8591
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-8591.001.patch


 dfs.namenode.decommission.nodes.per.interval is deprecated in branch-2 and 
 can be removed in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8551) Fix hdfs datanode CLI usage message

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590338#comment-14590338
 ] 

Hudson commented on HDFS-8551:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8551. Fix hdfs datanode CLI usage message. Contributed by Brahma Reddy 
Battula. (xyao: rev 9cd5ad9d84e46295249877ade50cd49c34b9bf12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix hdfs datanode CLI usage message
 ---

 Key: HDFS-8551
 URL: https://issues.apache.org/jira/browse/HDFS-8551
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8551-002.patch, HDFS-8551-003.patch, HDFS-8551.patch


 There are two issues with the current hdfs datanode usage message, shown below.
 {code}
 Usage: java DataNode [-regular | -rollback]
 -regular : Normal DataNode startup (default).
 -rollback: Rollback a standard or rolling upgrade.
   Refer to HDFS documentation for the difference between standard
   and rolling upgrades.
 {code}
 1. java DataNode should be hdfs datanode
 2. The -rollingupgrade option is missing, but it is documented correctly at the 
 [link|http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#datanode]:
 {code}
 Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback]
 COMMAND_OPTION    Description
 -regular  Normal datanode startup (default).
 -rollback Rollback the datanode to the previous version. This should be 
 used after stopping the datanode and distributing the old hadoop version.
 -rollingupgrade rollback  Rollback a rolling upgrade operation.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4660) Block corruption can happen during pipeline recovery

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590334#comment-14590334
 ] 

Hudson commented on HDFS-4660:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-4660. Block corruption can happen during pipeline recovery. Contributed by 
Kihwal Lee. (kihwal: rev c74517c46bf00af408ed866b6577623cdec02de1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


 Block corruption can happen during pipeline recovery
 

 Key: HDFS-4660
 URL: https://issues.apache.org/jira/browse/HDFS-4660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Peng Zhang
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 2.7.1

 Attachments: HDFS-4660.patch, HDFS-4660.patch, HDFS-4660.v2.patch


 pipeline DN1 > DN2 > DN3
 stop DN2
 pipeline added node DN4 located at 2nd position
 DN1 > DN4 > DN3
 recover RBW
 DN4 after recover rbw
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134144
   getBytesOnDisk() = 134144
   getVisibleLength()= 134144
 end at chunk (134144/512=262)
 DN3 after recover rbw
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_10042013-04-01
  21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134028 
   getBytesOnDisk() = 134028
   getVisibleLength()= 134028
 client send packet after recover pipeline
 offset=133632  len=1008
 DN4 after flush 
 2013-04-01 21:02:31,779 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1063
 // meta end position should be ceil(134640/512)*4 + 7 == 1059, but now it is 
 1063.
 DN3 after flush
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
 type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
 lastPacketInBlock=false, offsetInBlock=134640, 
 ackEnqueueNanoTime=8817026136871545)
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing 
 meta file offset of block 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from 
 1055 to 1051
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1059
 After checking the meta on DN4, I found the checksum of chunk 262 is 
 duplicated, but the data is not.
 Later, after the block was finalized, DN4's scanner detected the bad block and 
 reported it to the NN. The NN sent a command to delete this block and to 
 replicate it from another DN in the pipeline to satisfy the replication factor.
 I think this is because BlockReceiver skips data bytes already written but does 
 not skip checksum bytes already written. And the function adjustCrcFilePosition 
 is only used for the last incomplete chunk, not for this situation.
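
 A worked check of the offset arithmetic quoted above, assuming the usual CRC32 
 meta layout (a 7-byte header plus 4 checksum bytes per 512-byte chunk, where a 
 trailing partial chunk also carries a checksum, hence the rounding up):
 {code}
public class MetaOffsetCheck {
  public static void main(String[] args) {
    long dataLen = 134640;               // block file offset after the flush
    long chunks = (dataLen + 511) / 512; // ceil(134640/512) = 263 chunks
    long metaEnd = 7 + chunks * 4;       // 7 + 263*4 = 1059
    System.out.println(metaEnd);         // 1059, not the observed 1063
  }
}
 {code}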



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8548) Minicluster throws NPE on shutdown

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590348#comment-14590348
 ] 

Hudson commented on HDFS-8548:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8548. Minicluster throws NPE on shutdown. Contributed by surendra singh 
lilhore. (xyao: rev 6a76250b39f33466bdc8dabab33070c90aa1a389)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Minicluster throws NPE on shutdown
 --

 Key: HDFS-8548
 URL: https://issues.apache.org/jira/browse/HDFS-8548
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mike Drob
Assignee: surendra singh lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8548.patch


 After running Solr tests, when we attempt to shut down the mini cluster 
 that we use for our unit tests, we get an NPE in the cleanup thread. The 
 test still completes normally, but this generates a lot of extra noise.
 {noformat}
[junit4]   2> java.lang.reflect.InvocationTargetException
[junit4]   2> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
 Method)
[junit4]   2> at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2> at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2> at java.lang.reflect.Method.invoke(Method.java:497)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
[junit4]   2> at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
[junit4]   2> at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
[junit4]   2> at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
[junit4]   2> at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
[junit4]   2> at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
[junit4]   2> at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
[junit4]   2> at 
 org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
[junit4]   2> at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
[junit4]   2> at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
[junit4]   2> at 
 org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
[junit4]   2> at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
[junit4]   2> at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
[junit4]   2> at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
[junit4]   2> at 
 org.apache.solr.cloud.hdfs.HdfsTestUtil.teardownClass(HdfsTestUtil.java:197)
[junit4]   2> at 
 org.apache.solr.core.HdfsDirectoryFactoryTest.teardownClass(HdfsDirectoryFactoryTest.java:67)
[junit4]   2> at 

[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590356#comment-14590356
 ] 

Brahma Reddy Battula commented on HDFS-8615:


[~ajisakaa], thanks for reporting. Attached the patch; kindly review.

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit an HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Besides this example, there are several other commands from which {{-X PUT}} 
 should be removed. We should fix them all.
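
 For reference, the corrected form of the quoted example simply drops the method 
 override, since curl issues a GET by default: 
 {{curl -i "http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS"}}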



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-17 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-6564:
---
Attachment: HDFS-6564-02.patch

 Use slf4j instead of common-logging in hdfs-client
 --

 Key: HDFS-6564
 URL: https://issues.apache.org/jira/browse/HDFS-6564
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Rakesh R
 Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch


 hdfs-client should depend on slf4j instead of commons-logging.
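
 A minimal per-logger sketch of the migration, assuming a typical 
 commons-logging declaration as the starting point (the class name is 
 illustrative):
 {code}
// Before (commons-logging):
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(Example.class);

// After (slf4j):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void report(int value) {
    // slf4j {} placeholders defer string construction until the level is enabled.
    LOG.debug("value = {}", value);
  }
}
 {code}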



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7912) Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590369#comment-14590369
 ] 

Hudson commented on HDFS-7912:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java


 Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and 
 PendingReplicationBlocks
 --

 Key: HDFS-7912
 URL: https://issues.apache.org/jira/browse/HDFS-7912
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: HDFS-7285

 Attachments: HDFS-7912.000.patch


 Now, with striped blocks and the design that uses a single BlockInfoStriped 
 object to track all the corresponding blocks, we need to clearly distinguish 
 between the types Block and BlockInfo in BlockManager. Specifically, data 
 structures like {{UnderReplicatedBlocks}} and {{PendingReplicationBlocks}} 
 should track BlockInfo instead of Block in order to support striped block recovery.
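
 A hedged sketch of the type tightening described above, using simplified 
 stand-in classes (the real BlockInfo carries storage, replication, and striping 
 state):
 {code}
import java.util.HashSet;
import java.util.Set;

class Block { long blockId; }

// NN-side bookkeeping object; in HDFS this knows its storages and, for a
// striped group, the BlockInfoStriped layout.
class BlockInfo extends Block { }

class UnderReplicatedBlocksSketch {
  // Was effectively a collection of Block; holding BlockInfo keeps the
  // recovery code able to reach per-replica state for striped blocks.
  private final Set<BlockInfo> queue = new HashSet<>();

  boolean add(BlockInfo b) { return queue.add(b); }
}
 {code}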



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4660) Block corruption can happen during pipeline recovery

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590364#comment-14590364
 ] 

Hudson commented on HDFS-4660:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-4660. Block corruption can happen during pipeline recovery. Contributed by 
Kihwal Lee. (kihwal: rev c74517c46bf00af408ed866b6577623cdec02de1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Block corruption can happen during pipeline recovery
 

 Key: HDFS-4660
 URL: https://issues.apache.org/jira/browse/HDFS-4660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Peng Zhang
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 2.7.1

 Attachments: HDFS-4660.patch, HDFS-4660.patch, HDFS-4660.v2.patch


 pipeline DN1 > DN2 > DN3
 stop DN2
 pipeline added node DN4 located at 2nd position
 DN1 > DN4 > DN3
 recover RBW
 DN4 after recover rbw
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134144
   getBytesOnDisk() = 134144
   getVisibleLength()= 134144
 end at chunk (134144/512=262)
 DN3 after recover rbw
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_10042013-04-01
  21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134028 
   getBytesOnDisk() = 134028
   getVisibleLength()= 134028
 client send packet after recover pipeline
 offset=133632  len=1008
 DN4 after flush 
 2013-04-01 21:02:31,779 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1063
 // meta end position should be ceil(134640/512)*4 + 7 == 1059, but now it is 
 1063.
 DN3 after flush
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
 type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
 lastPacketInBlock=false, offsetInBlock=134640, 
 ackEnqueueNanoTime=8817026136871545)
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing 
 meta file offset of block 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from 
 1055 to 1051
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1059
 After checking the meta on DN4, I found the checksum of chunk 262 is 
 duplicated, but the data is not.
 Later, after the block was finalized, DN4's scanner detected the bad block and 
 reported it to the NN. The NN sent a command to delete this block and to 
 replicate it from another DN in the pipeline to satisfy the replication factor.
 I think this is because BlockReceiver skips data bytes already written but does 
 not skip checksum bytes already written. And the function adjustCrcFilePosition 
 is only used for the last incomplete chunk, not for this situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8615:

 Target Version/s: 2.8.0
Affects Version/s: 2.4.1
 Hadoop Flags: Reviewed
   Status: Patch Available  (was: Open)

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit an HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Besides this example, there are several other commands from which {{-X PUT}} 
 should be removed. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8591) Remove support for deprecated configuration key dfs.namenode.decommission.nodes.per.interval

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590240#comment-14590240
 ] 

Hudson commented on HDFS-8591:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8591. Remove support for deprecated configuration key 
dfs.namenode.decommission.nodes.per.interval. (wang: rev 
a3990ca41415515b986a41dacefceee1f05622f8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java


 Remove support for deprecated configuration key 
 dfs.namenode.decommission.nodes.per.interval
 

 Key: HDFS-8591
 URL: https://issues.apache.org/jira/browse/HDFS-8591
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-8591.001.patch


 dfs.namenode.decommission.nodes.per.interval is deprecated in branch-2 and 
 can be removed in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590238#comment-14590238
 ] 

Hudson commented on HDFS-6581:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.
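
 A hedged client-side usage sketch: in released Hadoop versions this feature is 
 exposed through the {{LAZY_PERSIST}} create flag, though the exact overload and 
 defaults shown here may vary:
 {code}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class LazyPersistWriteSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Ask for a memory-first single replica; the DN lazily persists it to disk.
    try (FSDataOutputStream out = fs.create(
        new Path("/tmp/lazy-file"),
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.LAZY_PERSIST),
        4096,                    // buffer size
        (short) 1,               // replication: memory writes are single-replica
        128 * 1024 * 1024,       // block size
        null)) {                 // no Progressable
      out.write(new byte[]{1, 2, 3});
    }
  }
}
 {code}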



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7164) Feature documentation for HDFS-6581

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590235#comment-14590235
 ] 

Hudson commented on HDFS-7164:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md


 Feature documentation for HDFS-6581
 ---

 Key: HDFS-7164
 URL: https://issues.apache.org/jira/browse/HDFS-7164
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.7.0, HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.8.0

 Attachments: HDFS-7164.01.patch, HDFS-7164.02.patch, 
 HDFS-7164.03.patch, HDFS-7164.04.patch, HDFS-7164.05.patch, 
 HDFS-7164.06.patch, HDFS-7164.07.patch, LazyPersistWrites.png, site.tar.bz2


 Add feature documentation explaining use cases, how to configure RAM_DISK and 
 API updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8551) Fix hdfs datanode CLI usage message

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590232#comment-14590232
 ] 

Hudson commented on HDFS-8551:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8551. Fix hdfs datanode CLI usage message. Contributed by Brahma Reddy 
Battula. (xyao: rev 9cd5ad9d84e46295249877ade50cd49c34b9bf12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix hdfs datanode CLI usage message
 ---

 Key: HDFS-8551
 URL: https://issues.apache.org/jira/browse/HDFS-8551
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8551-002.patch, HDFS-8551-003.patch, HDFS-8551.patch


 There are two issues with the current hdfs datanode usage message, shown below.
 {code}
 Usage: java DataNode [-regular | -rollback]
 -regular : Normal DataNode startup (default).
 -rollback: Rollback a standard or rolling upgrade.
   Refer to HDFS documentation for the difference between standard
   and rolling upgrades.
 {code}
 1. java DataNode should be hdfs datanode
 2. The -rollingupgrade option is missing, but it is documented correctly at the 
 [link|http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#datanode]:
 {code}
 Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback]
 COMMAND_OPTION    Description
 -regular  Normal datanode startup (default).
 -rollback Rollback the datanode to the previous version. This should be 
 used after stopping the datanode and distributing the old hadoop version.
 -rollingupgrade rollback  Rollback a rolling upgrade operation.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4660) Block corruption can happen during pipeline recovery

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590228#comment-14590228
 ] 

Hudson commented on HDFS-4660:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-4660. Block corruption can happen during pipeline recovery. Contributed by 
Kihwal Lee. (kihwal: rev c74517c46bf00af408ed866b6577623cdec02de1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Block corruption can happen during pipeline recovery
 

 Key: HDFS-4660
 URL: https://issues.apache.org/jira/browse/HDFS-4660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Peng Zhang
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 2.7.1

 Attachments: HDFS-4660.patch, HDFS-4660.patch, HDFS-4660.v2.patch


 pipeline DN1 > DN2 > DN3
 stop DN2
 pipeline added node DN4 located at 2nd position
 DN1 > DN4 > DN3
 recover RBW
 DN4 after recover rbw
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134144
   getBytesOnDisk() = 134144
   getVisibleLength()= 134144
 end at chunk (134144/512=262)
 DN3 after recover rbw
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_10042013-04-01
  21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134028 
   getBytesOnDisk() = 134028
   getVisibleLength()= 134028
 client send packet after recover pipeline
 offset=133632  len=1008
 DN4 after flush 
 2013-04-01 21:02:31,779 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1063
 // meta end position should be ceil(134640/512)*4 + 7 == 1059, but now it is 
 1063.
 DN3 after flush
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
 type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
 lastPacketInBlock=false, offsetInBlock=134640, 
 ackEnqueueNanoTime=8817026136871545)
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing 
 meta file offset of block 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from 
 1055 to 1051
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1059
 After checking the meta on DN4, I found the checksum of chunk 262 is 
 duplicated, but the data is not.
 Later, after the block was finalized, DN4's scanner detected the bad block and 
 reported it to the NN. The NN sent a command to delete this block and to 
 replicate it from another DN in the pipeline to satisfy the replication factor.
 I think this is because BlockReceiver skips data bytes already written but does 
 not skip checksum bytes already written. And the function adjustCrcFilePosition 
 is only used for the last incomplete chunk, not for this situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8606) Cleanup DFSOutputStream by removing unwanted changes

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590231#comment-14590231
 ] 

Hudson commented on HDFS-8606:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Cleanup DFSOutputStream by removing unwanted changes
 

 Key: HDFS-8606
 URL: https://issues.apache.org/jira/browse/HDFS-8606
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8606-00.patch


 This jira is to clean up a few changes done as part of HDFS-8386. As per 
 [~szetszwo]'s comments, they will affect the write performance. Please see the 
 discussion 
 [here|https://issues.apache.org/jira/browse/HDFS-8386?focusedCommentId=14575386&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14575386].
 The following changes need to be done as part of this jira:
 # remove {{synchronized}} from getStreamer() since it may unnecessarily block 
 the caller
 # remove setStreamer(..) which is currently not used. We may add it in the 
 HDFS-7285 branch and see how to do synchronization correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8597) Fix TestFSImage#testZeroBlockSize on Windows

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590241#comment-14590241
 ] 

Hudson commented on HDFS-8597:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8597. Fix TestFSImage#testZeroBlockSize on Windows. Contributed by Xiaoyu 
Yao. (xyao: rev 4e88ff5b27cc33d311ab7a7248c3cf6303997ddd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


 Fix TestFSImage#testZeroBlockSize on Windows
 

 Key: HDFS-8597
 URL: https://issues.apache.org/jira/browse/HDFS-8597
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, test
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 2.7.1

 Attachments: HDFS-8597.00.patch, HDFS-8597.01.patch


 The last portion of the dfs.datanode.data.dir is incorrectly formatted.
 {code}2015-06-14 09:44:37,133 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:startDataNodes(1413)) - Starting DataNode 0 with 
 dfs.datanode.data.dir: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 2015-06-14 09:44:37,141 ERROR common.Util (Util.java:stringAsURI(50)) - 
 Syntax error in URI 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data.
  Please check hdfs configuration.
 java.net.URISyntaxException: Illegal character in authority at index 7: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 {code}
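
 A small self-contained illustration of the failure quoted above: naively 
 prefixing a Windows path with {{file://}} puts {{C:}} into the URI authority, 
 which is illegal, while {{File.toURI()}} produces a well-formed URI (the path 
 below is illustrative):
 {code}
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsUriSketch {
  public static void main(String[] args) {
    try {
      new URI("file://C:\\Users\\xiaoyu\\target/test/dfs/data");
    } catch (URISyntaxException e) {
      // An "Illegal character in authority" error, comparable to the log above.
      System.out.println(e.getMessage());
    }
    // The safe construction: let File normalize the platform path.
    // On Windows this prints something like file:/C:/Users/xiaoyu/...
    System.out.println(new File("C:\\Users\\xiaoyu\\target\\test\\dfs\\data").toURI());
  }
}
 {code}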



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590230#comment-14590230
 ] 

Hudson commented on HDFS-8608:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java


 Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
 UnderReplicatedBlocks and PendingReplicationBlocks)
 --

 Key: HDFS-8608
 URL: https://issues.apache.org/jira/browse/HDFS-8608
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 3.0.0

 Attachments: HDFS-8608.00.patch, HDFS-8608.01.patch, 
 HDFS-8608.02.patch


 This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
 merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8386) Improve synchronization of 'streamer' reference in DFSOutputStream

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590229#comment-14590229
 ] 

Hudson commented on HDFS-8386:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve synchronization of 'streamer' reference in DFSOutputStream
 --

 Key: HDFS-8386
 URL: https://issues.apache.org/jira/browse/HDFS-8386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HDFS-8386-00.patch, HDFS-8386-01.patch


 Presently the {{DFSOutputStream#streamer}} object reference is accessed 
 inconsistently with respect to synchronization. It would be good to improve 
 this part. This was noticed while implementing the erasure coding feature.
 Please refer to the related [discussion 
 thread|https://issues.apache.org/jira/browse/HDFS-8294?focusedCommentId=14541411&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14541411]
  in the jira HDFS-8294 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7912) Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590233#comment-14590233
 ] 

Hudson commented on HDFS-7912:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


 Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and 
 PendingReplicationBlocks
 --

 Key: HDFS-7912
 URL: https://issues.apache.org/jira/browse/HDFS-7912
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: HDFS-7285

 Attachments: HDFS-7912.000.patch


 Now, with striped blocks and the design that uses a single BlockInfoStriped 
 object to track all the corresponding blocks, we need to clearly distinguish 
 between the types Block and BlockInfo in BlockManager. Specifically, data 
 structures like {{UnderReplicatedBlocks}} and {{PendingReplicationBlocks}} 
 should track BlockInfo instead of Block in order to support striped block recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590366#comment-14590366
 ] 

Hudson commented on HDFS-8608:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java


 Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
 UnderReplicatedBlocks and PendingReplicationBlocks)
 --

 Key: HDFS-8608
 URL: https://issues.apache.org/jira/browse/HDFS-8608
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 3.0.0

 Attachments: HDFS-8608.00.patch, HDFS-8608.01.patch, 
 HDFS-8608.02.patch


 This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
 merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8606) Cleanup DFSOutputStream by removing unwanted changes

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590367#comment-14590367
 ] 

Hudson commented on HDFS-8606:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Cleanup DFSOutputStream by removing unwanted changes
 

 Key: HDFS-8606
 URL: https://issues.apache.org/jira/browse/HDFS-8606
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8606-00.patch


 This jira is to clean up a few changes done as part of HDFS-8386. As per 
 [~szetszwo]'s comments, they will affect the write performance. Please see the 
 discussion 
 [here|https://issues.apache.org/jira/browse/HDFS-8386?focusedCommentId=14575386&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14575386].
 The following changes need to be done as part of this jira:
 # remove {{synchronized}} from getStreamer() since it may unnecessarily block 
 the caller
 # remove setStreamer(..) which is currently not used. We may add it in the 
 HDFS-7285 branch and see how to do synchronization correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8597) Fix TestFSImage#testZeroBlockSize on Windows

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590377#comment-14590377
 ] 

Hudson commented on HDFS-8597:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8597. Fix TestFSImage#testZeroBlockSize on Windows. Contributed by Xiaoyu 
Yao. (xyao: rev 4e88ff5b27cc33d311ab7a7248c3cf6303997ddd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


 Fix TestFSImage#testZeroBlockSize on Windows
 

 Key: HDFS-8597
 URL: https://issues.apache.org/jira/browse/HDFS-8597
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, test
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 2.7.1

 Attachments: HDFS-8597.00.patch, HDFS-8597.01.patch


 The last portion of the dfs.datanode.data.dir is incorrectly formatted.
 {code}2015-06-14 09:44:37,133 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:startDataNodes(1413)) - Starting DataNode 0 with 
 dfs.datanode.data.dir: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 2015-06-14 09:44:37,141 ERROR common.Util (Util.java:stringAsURI(50)) - 
 Syntax error in URI 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data.
  Please check hdfs configuration.
 java.net.URISyntaxException: Illegal character in authority at index 7: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 {code}
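 A hedged illustration of why the URI parse fails and one possible 
 normalization; not necessarily the actual {{StorageLocation}} fix:
 {code}
// Path abbreviated from the log above.
String raw = "file://C:\\Users\\xiaoyu\\...\\target/test/dfs/data";
// new java.net.URI(raw) fails: "C:" is parsed as the URI authority, and
// ':' is illegal there ("Illegal character in authority at index 7").
String normalized =
    "file:/" + raw.substring("file://".length()).replace('\\', '/');
java.net.URI uri = java.net.URI.create(normalized); // file:/C:/Users/...
 {code}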



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8548) Minicluster throws NPE on shutdown

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590378#comment-14590378
 ] 

Hudson commented on HDFS-8548:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8548. Minicluster throws NPE on shutdown. Contributed by surendra singh 
lilhore. (xyao: rev 6a76250b39f33466bdc8dabab33070c90aa1a389)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java


 Minicluster throws NPE on shutdown
 --

 Key: HDFS-8548
 URL: https://issues.apache.org/jira/browse/HDFS-8548
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mike Drob
Assignee: surendra singh lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8548.patch


 After running Solr tests, when we attempt to shut down the mini cluster 
 that we use for our unit tests, we get an NPE in the clean up thread. The 
 test still completes normally, but this generates a lot of extra noise.
 {noformat}
[junit4]   2 java.lang.reflect.InvocationTargetException
[junit4]   2  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
 Method)
[junit4]   2  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2  at java.lang.reflect.Method.invoke(Method.java:497)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
[junit4]   2  at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
[junit4]   2  at 
 org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
[junit4]   2  at 
 org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
[junit4]   2  at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
[junit4]   2  at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
[junit4]   2  at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
[junit4]   2  at 
 org.apache.solr.cloud.hdfs.HdfsTestUtil.teardownClass(HdfsTestUtil.java:197)
[junit4]   2  at 
 org.apache.solr.core.HdfsDirectoryFactoryTest.teardownClass(HdfsDirectoryFactoryTest.java:67)
[junit4]   2  at 

[jira] [Commented] (HDFS-7164) Feature documentation for HDFS-6581

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590371#comment-14590371
 ] 

Hudson commented on HDFS-7164:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md
* hadoop-project/src/site/site.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md


 Feature documentation for HDFS-6581
 ---

 Key: HDFS-7164
 URL: https://issues.apache.org/jira/browse/HDFS-7164
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.7.0, HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.8.0

 Attachments: HDFS-7164.01.patch, HDFS-7164.02.patch, 
 HDFS-7164.03.patch, HDFS-7164.04.patch, HDFS-7164.05.patch, 
 HDFS-7164.06.patch, HDFS-7164.07.patch, LazyPersistWrites.png, site.tar.bz2


 Add feature documentation explaining use cases, how to configure RAM_DISK and 
 API updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8551) Fix hdfs datanode CLI usage message

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590368#comment-14590368
 ] 

Hudson commented on HDFS-8551:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8551. Fix hdfs datanode CLI usage message. Contributed by Brahma Reddy 
Battula. (xyao: rev 9cd5ad9d84e46295249877ade50cd49c34b9bf12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix hdfs datanode CLI usage message
 ---

 Key: HDFS-8551
 URL: https://issues.apache.org/jira/browse/HDFS-8551
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8551-002.patch, HDFS-8551-003.patch, HDFS-8551.patch


 There are two issues with the current hdfs datanode usage message below.
 {code}
 Usage: java DataNode [-regular | -rollback]
 -regular : Normal DataNode startup (default).
 -rollback: Rollback a standard or rolling upgrade.
   Refer to HDFS documentation for the difference between standard
   and rolling upgrades.
 {code}
 1. "java DataNode" should be "hdfs datanode"
 2. The -rollingupgrade option is missing, but it is documented correctly in the 
 [link|http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#datanode].
 {code}
 Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback]
 COMMAND_OPTION    Description
 -regular  Normal datanode startup (default).
 -rollback Rollback the datanode to the previous version. This should be 
 used after stopping the datanode and distributing the old hadoop version.
 -rollingupgrade rollback  Rollback a rolling upgrade operation.
 {code}
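 A hypothetical corrected usage string for {{DataNode.java}} reflecting both 
 fixes; the exact wording in the committed patch may differ:
 {code}
private static final String USAGE =
    "Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback]\n"
    + "  -regular                 : Normal DataNode startup (default).\n"
    + "  -rollback                : Rollback a standard upgrade.\n"
    + "  -rollingupgrade rollback : Rollback a rolling upgrade operation.\n"
    + "  Refer to HDFS documentation for the difference between standard\n"
    + "  and rolling upgrades.\n";
 {code}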



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590374#comment-14590374
 ] 

Hudson commented on HDFS-6581:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* hadoop-project/src/site/site.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-17 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590362#comment-14590362
 ] 

Rakesh R commented on HDFS-6564:


Attached a patch addressing [~busbey]'s comments.

 Use slf4j instead of common-logging in hdfs-client
 --

 Key: HDFS-6564
 URL: https://issues.apache.org/jira/browse/HDFS-6564
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Rakesh R
 Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch


 hdfs-client should depend on slf4j instead of common-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8591) Remove support for deprecated configuration key dfs.namenode.decommission.nodes.per.interval

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590376#comment-14590376
 ] 

Hudson commented on HDFS-8591:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8591. Remove support for deprecated configuration key 
dfs.namenode.decommission.nodes.per.interval. (wang: rev 
a3990ca41415515b986a41dacefceee1f05622f8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java


 Remove support for deprecated configuration key 
 dfs.namenode.decommission.nodes.per.interval
 

 Key: HDFS-8591
 URL: https://issues.apache.org/jira/browse/HDFS-8591
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-8591.001.patch


 dfs.namenode.decommission.nodes.per.interval is deprecated in branch-2 and 
 can be removed in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590344#comment-14590344
 ] 

Hudson commented on HDFS-6581:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590336#comment-14590336
 ] 

Hudson commented on HDFS-8608:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
 UnderReplicatedBlocks and PendingReplicationBlocks)
 --

 Key: HDFS-8608
 URL: https://issues.apache.org/jira/browse/HDFS-8608
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 3.0.0

 Attachments: HDFS-8608.00.patch, HDFS-8608.01.patch, 
 HDFS-8608.02.patch


 This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
 merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8386) Improve synchronization of 'streamer' reference in DFSOutputStream

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590335#comment-14590335
 ] 

Hudson commented on HDFS-8386:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Improve synchronization of 'streamer' reference in DFSOutputStream
 --

 Key: HDFS-8386
 URL: https://issues.apache.org/jira/browse/HDFS-8386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HDFS-8386-00.patch, HDFS-8386-01.patch


 Presently the {{DFSOutputStream#streamer}} object reference is accessed 
 inconsistently with respect to synchronization. It would be good to improve 
 this part. This was noticed while implementing the erasure coding feature.
 Please refer to the related [discussion 
 thread|https://issues.apache.org/jira/browse/HDFS-8294?focusedCommentId=14541411&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14541411]
  in the jira HDFS-8294 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8597) Fix TestFSImage#testZeroBlockSize on Windows

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590347#comment-14590347
 ] 

Hudson commented on HDFS-8597:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8597. Fix TestFSImage#testZeroBlockSize on Windows. Contributed by Xiaoyu 
Yao. (xyao: rev 4e88ff5b27cc33d311ab7a7248c3cf6303997ddd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


 Fix TestFSImage#testZeroBlockSize on Windows
 

 Key: HDFS-8597
 URL: https://issues.apache.org/jira/browse/HDFS-8597
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, test
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 2.7.1

 Attachments: HDFS-8597.00.patch, HDFS-8597.01.patch


 The last portion of the dfs.datanode.data.dir is incorrectly formatted.
 {code}2015-06-14 09:44:37,133 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:startDataNodes(1413)) - Starting DataNode 0 with 
 dfs.datanode.data.dir: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 2015-06-14 09:44:37,141 ERROR common.Util (Util.java:stringAsURI(50)) - 
 Syntax error in URI 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data.
  Please check hdfs configuration.
 java.net.URISyntaxException: Illegal character in authority at index 7: 
 file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7912) Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590339#comment-14590339
 ] 

Hudson commented on HDFS-7912:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-8608. Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of 
Block in UnderReplicatedBlocks and PendingReplicationBlocks). Contributed by 
Zhe Zhang. (wang: rev 6e3fcffe291faec40fa9214f4880a35a952836c4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and 
 PendingReplicationBlocks
 --

 Key: HDFS-7912
 URL: https://issues.apache.org/jira/browse/HDFS-7912
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: HDFS-7285

 Attachments: HDFS-7912.000.patch


 Now with striped blocks and the design that uses a single BlockInfoStriped 
 object to track all the corresponding blocks, we need to clearly distinguish 
 the types Block and BlockInfo in BlockManager. Specifically, data structures 
 like {{UnderReplicatedBlocks}} and {{PendingReplicationBlocks}} should track 
 BlockInfo instead of Block in order to support striped block recovery.
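 The shape of the change, sketched as signatures only (parameter lists are 
 assumptions, not the exact patch):
 {code}
// Illustrative only: the shape of the change, not the actual patch.
interface UnderReplicatedQueueBefore {
  boolean add(Block block, int curReplicas, int decommissionedReplicas,
      int expectedReplicas);
}

interface UnderReplicatedQueueAfter {
  // BlockInfo preserves the striped/contiguous distinction through the
  // queue, which a raw Block object loses.
  boolean add(BlockInfo block, int curReplicas, int decommissionedReplicas,
      int expectedReplicas);
}
 {code}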



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7164) Feature documentation for HDFS-6581

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590341#comment-14590341
 ] 

Hudson commented on HDFS-7164:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2177/])
HDFS-7164. Feature documentation for HDFS-6581. (Contributed by Arpit Agarwal) 
(arp: rev 5dbc8c9cb00da1ba55e1c94c4c1e19d34cf1bd5a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md


 Feature documentation for HDFS-6581
 ---

 Key: HDFS-7164
 URL: https://issues.apache.org/jira/browse/HDFS-7164
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.7.0, HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.8.0

 Attachments: HDFS-7164.01.patch, HDFS-7164.02.patch, 
 HDFS-7164.03.patch, HDFS-7164.04.patch, HDFS-7164.05.patch, 
 HDFS-7164.06.patch, HDFS-7164.07.patch, LazyPersistWrites.png, site.tar.bz2


 Add feature documentation explaining use cases, how to configure RAM_DISK and 
 API updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8386) Improve synchronization of 'streamer' reference in DFSOutputStream

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590365#comment-14590365
 ] 

Hudson commented on HDFS-8386:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2159/])
HDFS-8606. Cleanup DFSOutputStream by removing unwanted changes from HDFS-8386. 
Contributed by Rakesh R (szetszwo: rev d4929f448f95815af99100780a08b172e0262c17)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve synchronization of 'streamer' reference in DFSOutputStream
 --

 Key: HDFS-8386
 URL: https://issues.apache.org/jira/browse/HDFS-8386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HDFS-8386-00.patch, HDFS-8386-01.patch


 Presently the {{DFSOutputStream#streamer}} object reference is accessed 
 inconsistently with respect to synchronization. It would be good to improve 
 this part. This was noticed while implementing the erasure coding feature.
 Please refer to the related [discussion 
 thread|https://issues.apache.org/jira/browse/HDFS-8294?focusedCommentId=14541411&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14541411]
  in the jira HDFS-8294 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8580) Erasure coding: Persist cellSize in BlockInfoStriped and StripedBlocksFeature

2015-06-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8580:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

The 04 patch looks good to me. +1. 

I've committed this to the feature branch. Thanks for the contribution, 
[~walter.k.su]!

 Erasure coding: Persist cellSize in BlockInfoStriped and StripedBlocksFeature
 -

 Key: HDFS-8580
 URL: https://issues.apache.org/jira/browse/HDFS-8580
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Fix For: HDFS-7285

 Attachments: HDFS-8580-HDFS-7285.01.patch, 
 HDFS-8580-HDFS-7285.02.patch, HDFS-8580-HDFS-7285.03.patch, 
 HDFS-8580-HDFS-7285.04.patch, HDFS-8580.00.patch


 Zhe Zhang, Kai Zheng and I had an offline discussion. Here is what we thought: 
  Add a cellSize field in BlockInfoStriped as a workaround, and deal with 
 memory usage in a follow-on (HDFS-8059).
 discussion in HDFS-8494:
 from Walter Su:
 {quote}
 I think BlockInfoStriped needs to keep cellSize.
 {quote}
 from [~vinayrpet]:
 {quote}
 I too was thinking the same when the FSImageLoader problem came up. This 
 will increase the memory usage by ~4 bytes for each block, though.
 {quote}
 from [~jingzhao]
 {quote}
 -Also, we should consider adding a chunk size field to StripedBlockProto and 
 removing the cell size field from HdfsFileStatus. In this way we can access 
 the chunk size information in the storage layer.-
 {quote}
 ==
 update:
 from [~jingzhao]
 {quote}
 For fsimage part, since HDFS-8585 just removes StripedBlockProto, I guess 
 what we can do here is to either 1) add the cellSize information into 
 StripedBlocksFeature in fsimage.proto, or 2) bring StripedBlockProto back and 
 put block info and cell size there.
 {quote}
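 A sketch of the agreed workaround, with all other striped-block state omitted 
 (class shape and accessors are illustrative):
 {code}
// Sketch only; the real class lives in the blockmanagement package and
// carries much more striped-block state.
class BlockInfoStripedSketch {
  /** Cell size persisted per striped block group; costs ~4 extra bytes
   *  per group, with the memory impact to be revisited in HDFS-8059. */
  private final int cellSize;

  BlockInfoStripedSketch(int cellSize) {
    this.cellSize = cellSize;
  }

  int getCellSize() {
    return cellSize;
  }
}
 {code}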



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590456#comment-14590456
 ] 

Akira AJISAKA commented on HDFS-8615:
-

+1 pending Jenkins.

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit a HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Other than this example, there are several commands which {{-X PUT}} should 
 be removed from. We should fix them all.
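 For reference, the corrected command simply drops the method override, since 
 curl defaults to GET:
 {code}
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
 {code}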



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7546) Document, and set an accepting default for dfs.namenode.kerberos.principal.pattern

2015-06-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590212#comment-14590212
 ] 

Yongjun Zhang commented on HDFS-7546:
-

Hi [~aw],

You committed this fix to trunk only; did you mean to say that the fix is an 
incompatible change? Thanks.


 Document, and set an accepting default for 
 dfs.namenode.kerberos.principal.pattern
 --

 Key: HDFS-7546
 URL: https://issues.apache.org/jira/browse/HDFS-7546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.1.1-beta
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: supportability
 Fix For: 3.0.0

 Attachments: HDFS-7546.patch


 This config is used in the SaslRpcClient, and the lack of a default breaks 
 cross-realm trust principals being used at clients.
 Current location: 
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L309
 The config should be documented and the default should be set to * to 
 preserve the prior-to-introduction behaviour.
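 A hypothetical hdfs-site.xml entry showing the accepting default proposed 
 above:
 {code}
<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <!-- Accept any principal, matching the behaviour before this key existed -->
  <value>*</value>
</property>
 {code}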



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-8615:
--

Assignee: Brahma Reddy Battula  (was: Jagadesh Kiran N)

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie

 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit a HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Other than this example, there are several commands which {{-X PUT}} should 
 be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8615:
---
Attachment: HDFS-8615.patch

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit a HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Other than this example, there are several commands which {{-X PUT}} should 
 be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-17 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590511#comment-14590511
 ] 

Sean Busbey commented on HDFS-6564:
---

The patch looks good. Do you have a draft of the needed release note?

 Use slf4j instead of common-logging in hdfs-client
 --

 Key: HDFS-6564
 URL: https://issues.apache.org/jira/browse/HDFS-6564
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Rakesh R
 Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch


 hdfs-client should depend on slf4j instead of common-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8617) Throttle DiskChecker#checkDirs() speed.

2015-06-17 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8617:

Attachment: HDFS-8617.000.patch

This patch does 2 things:

* Limits {{checkDirs()}} calls to 50 calls per second
* Ensures that another {{checkDirs()}} does not run within 30 minutes of the 
previous {{checkDirs()}} finishing.
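A minimal sketch of the throttling idea, assuming Guava's RateLimiter; the 
class shape and constants are illustrative, not the committed patch:
{code}
import java.io.File;
import com.google.common.util.concurrent.RateLimiter;
import org.apache.hadoop.util.DiskChecker;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

class ThrottledDirChecker {
  // At most 50 I/O-heavy check calls per second.
  private final RateLimiter limiter = RateLimiter.create(50);
  // No new full check within 30 minutes of the previous one finishing.
  private static final long MIN_INTERVAL_MS = 30L * 60 * 1000;
  private volatile long lastFinishedMs = 0;

  void maybeCheckDirs(File dir) throws DiskErrorException {
    if (System.currentTimeMillis() - lastFinishedMs < MIN_INTERVAL_MS) {
      return; // a full check completed recently; skip this round
    }
    limiter.acquire(); // blocks until a permit is available
    DiskChecker.checkDirs(dir);
    lastFinishedMs = System.currentTimeMillis();
  }
}
{code}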

 Throttle DiskChecker#checkDirs() speed.
 ---

 Key: HDFS-8617
 URL: https://issues.apache.org/jira/browse/HDFS-8617
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-8617.000.patch


 As described in HDFS-8564,  {{DiskChecker.checkDirs(finalizedDir)}} is 
 causing excessive I/Os because {{finalizedDirs}} might have up to 64K 
 sub-directories (HDFS-6482).
 This patch proposes to limit the rate of IO operations in 
 {{DiskChecker.checkDirs()}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8617) Throttle DiskChecker#checkDirs() speed.

2015-06-17 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8617:

Status: Patch Available  (was: Open)

 Throttle DiskChecker#checkDirs() speed.
 ---

 Key: HDFS-8617
 URL: https://issues.apache.org/jira/browse/HDFS-8617
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-8617.000.patch


 As described in HDFS-8564,  {{DiskChecker.checkDirs(finalizedDir)}} is 
 causing excessive I/Os because {{finalizedDirs}} might have up to 64K 
 sub-directories (HDFS-6482).
 This patch proposes to limit the rate of IO operations in 
 {{DiskChecker.checkDirs()}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7192) DN should ignore lazyPersist hint if the writer is not local

2015-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590758#comment-14590758
 ] 

Hadoop QA commented on HDFS-7192:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  4 
new checkstyle issues (total was 576, now 577). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 17s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 14s | Tests failed in hadoop-hdfs. |
| | | 206m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740029/HDFS-7192.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6e3fcff |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11392/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11392/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11392/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11392/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11392/console |


This message was automatically generated.

 DN should ignore lazyPersist hint if the writer is not local
 

 Key: HDFS-7192
 URL: https://issues.apache.org/jira/browse/HDFS-7192
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-7192.01.patch


 The DN should ignore the {{allowLazyPersist}} hint to 
 {{DataTransferProtocol#writeBlock}} if the writer is not local.
 Currently we don't restrict memory writes to local clients. For in-cluster 
 clients this is not an issue as single replica writes default to the local 
 DataNode. But clients outside the cluster can still send this hint.
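 A sketch of the guard the DN could apply; the parameters and LOG are 
 assumptions, not the actual {{DataXceiver#writeBlock}} members:
 {code}
// Sketch only: honor the hint only for co-located writers.
boolean effectiveLazyPersist(boolean allowLazyPersist, String remoteAddress,
    String localAddress) {
  boolean writerIsLocal =
      remoteAddress != null && remoteAddress.equals(localAddress);
  if (allowLazyPersist && !writerIsLocal) {
    LOG.info("Ignoring lazyPersist hint from non-local writer " + remoteAddress);
    return false;
  }
  return allowLazyPersist;
}
 {code}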



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-17 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590825#comment-14590825
 ] 

Haohui Mai commented on HDFS-8238:
--

+1. I'll commit it shortly.

[~tasanuma0829], the checkstyle warnings come from the original file. Can you 
please file a follow-up jira to clean up the warnings? Thanks.

 Move ClientProtocol to the hdfs-client
 --

 Key: HDFS-8238
 URL: https://issues.apache.org/jira/browse/HDFS-8238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Takanobu Asanuma
 Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
 HDFS-8238.002.patch, HDFS-8238.003.patch


 The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
 client. This jira proposes to move it into the hdfs-client module.
 The jira needs to move:
 * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
 {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
 package
 * Remove the reference of {{DistributedFileSystem}} in the javadoc
 * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
 {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590910#comment-14590910
 ] 

Colin Patrick McCabe edited comment on HDFS-8578 at 6/18/15 12:29 AM:
--

bq. Any concerns about overloading the controller?

In general, the upgrade workload is creating a bunch of hardlinks, often one 
per block file.  This is not a large amount of I/O in terms of bandwidth.  
Normally it completes in a second or two.  The only cases we have seen problems 
are where write caching is turned off on the hard disks, forcing a lot of 
non-sequential I/O to update the inode entries.  I would also argue that it is 
Linux's responsibility to manage sending commands to the disk controller and 
backing off (putting the user mode process to sleep) if there are too many in 
flight.  So I don't see any concerns here about overloading the disk controller.


was (Author: cmccabe):
bq. Any concerns about overloading the controller?

In general, the upgrade workload is creating a bunch of hardlinks, often one 
per block file.  This is not a large amount of I/O in terms of bandwidth.  
Normally it completes in a second or two.  The only cases we have seen problems 
are where write caching is turned off on the hard disks, forcing a lot of 
non-sequential I/O to update the inode entries.  I would also argue that it is 
Linux's responsibility to manage sending commands to the disk controller and 
backing off (putting the user mode process to sleep) if there are too many in 
the pipe.  So I don't see any concerns here about overloading the disk 
controller controller.

 On upgrade, Datanode should process all storage/data dirs in parallel
 -

 Key: HDFS-8578
 URL: https://issues.apache.org/jira/browse/HDFS-8578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Raju Bairishetti
Priority: Critical
 Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch


 Right now, during upgrades the datanode processes all the storage dirs 
 sequentially. Assume it takes ~20 mins to process a single storage dir; then a 
 datanode which has ~10 disks will take around 3 hours to come up.
 *BlockPoolSliceStorage.java*
 {code}
for (int idx = 0; idx < getNumStorageDirs(); idx++) {
  doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
  assert getCTime() == nsInfo.getCTime()
      : "Data-node and name-node CTimes must be the same.";
}
 {code}
 It would save lots of time during major upgrades if the datanode processed all 
 storage dirs/disks in parallel.
 Can we make the datanode process all storage dirs in parallel?
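 A sketch of parallelizing the loop above with an executor; the method shape is 
 illustrative and exception handling is simplified:
 {code}
// Sketch only: assumes java.util and java.util.concurrent imports, and the
// doTransition/getStorageDir methods shown above.
void doTransitionsInParallel(final DataNode datanode, final NamespaceInfo nsInfo,
    final StartupOption startOpt) throws Exception {
  ExecutorService pool = Executors.newFixedThreadPool(getNumStorageDirs());
  List<Future<Void>> futures = new ArrayList<>();
  for (int idx = 0; idx < getNumStorageDirs(); idx++) {
    final StorageDirectory sd = getStorageDir(idx);
    futures.add(pool.submit(() -> {
      doTransition(datanode, sd, nsInfo, startOpt);
      return null; // Callable<Void>, so checked exceptions surface via get()
    }));
  }
  for (Future<Void> f : futures) {
    f.get(); // propagate the first failure from any storage dir
  }
  pool.shutdown();
}
 {code}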



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8446:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks Jing for the reviews.

 Separate safemode related operations in GetBlockLocations()
 ---

 Key: HDFS-8446
 URL: https://issues.apache.org/jira/browse/HDFS-8446
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
 HDFS-8446.002.patch, HDFS-8446.003.patch


 Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
 the NN is in SafeMode. This jira proposes to refactor the code to improve 
 readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8602) Erasure Coding: Client can't read(decode) the EC files which have corrupt blocks.

2015-06-17 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590867#comment-14590867
 ] 

Kai Sasaki commented on HDFS-8602:
--

[~jingzhao] Thanks a lot Jing! I'll try this patch in our cluster and update it.

 Erasure Coding: Client can't read(decode) the EC files which have corrupt 
 blocks.
 -

 Key: HDFS-8602
 URL: https://issues.apache.org/jira/browse/HDFS-8602
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Kai Sasaki
 Fix For: HDFS-7285

 Attachments: HDFS-8602.000.patch


 Before the DataNode(s) report the bad block(s), when the client reads an EC 
 file which has bad blocks, the client gets hung up, and there are no error 
 messages.
 (When the client reads a replicated file which has bad blocks, the bad blocks 
 are reconstructed at the same time, and the client can read it.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8589:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

committed, thanks

 Fix unused imports in BPServiceActor and BlockReportLeaseManager
 

 Key: HDFS-8589
 URL: https://issues.apache.org/jira/browse/HDFS-8589
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8589.001.patch


 Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7192) DN should ignore lazyPersist hint if the writer is not local

2015-06-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7192:

Attachment: HDFS-7192.02.patch

Updated patch with unit tests.

Some edits to {{DataXceiver#writeBlock}} to stub out a couple of calls for 
testing.

 DN should ignore lazyPersist hint if the writer is not local
 

 Key: HDFS-7192
 URL: https://issues.apache.org/jira/browse/HDFS-7192
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-7192.01.patch, HDFS-7192.02.patch


 The DN should ignore {{allowLazyPersist}} hint to 
 {{DataTransferProtocol#writeBlock}} if the writer is not local.
 Currently we don't restrict memory writes to local clients. For in-cluster 
 clients this is not an issue as single replica writes default to the local 
 DataNode. But clients outside the cluster can still send this hint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590910#comment-14590910
 ] 

Colin Patrick McCabe commented on HDFS-8578:


bq. Any concerns about overloading the controller?

In general, the upgrade workload is creating a bunch of hardlinks, often one 
per block file.  This is not a large amount of I/O in terms of bandwidth.  
Normally it completes in a second or two.  The only cases we have seen problems 
are where write caching is turned off on the hard disks, forcing a lot of 
non-sequential I/O to update the inode entries.  I would also argue that it is 
Linux's responsibility to manage sending commands to the disk controller and 
backing off (putting the user mode process to sleep) if there are too many in 
the pipe.  So I don't see any concerns here about overloading the disk 
controller.

 On upgrade, Datanode should process all storage/data dirs in parallel
 -

 Key: HDFS-8578
 URL: https://issues.apache.org/jira/browse/HDFS-8578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Raju Bairishetti
Priority: Critical
 Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch


 Right now, during upgrades the datanode processes all the storage dirs 
 sequentially. Assume it takes ~20 mins to process a single storage dir; then a 
 datanode which has ~10 disks will take around 3 hours to come up.
 *BlockPoolSliceStorage.java*
 {code}
for (int idx = 0; idx < getNumStorageDirs(); idx++) {
  doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
  assert getCTime() == nsInfo.getCTime()
      : "Data-node and name-node CTimes must be the same.";
}
 {code}
 It would save lots of time during major upgrades if the datanode processed all 
 storage dirs/disks in parallel.
 Can we make the datanode process all storage dirs in parallel?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590774#comment-14590774
 ] 

Hadoop QA commented on HDFS-8462:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 49s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 59s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 12s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 14s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 32s | Tests failed in hadoop-hdfs. |
| | | 212m 31s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740137/HDFS-8462-02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 6e3fcff |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11393/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11393/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11393/console |


This message was automatically generated.

 Implement GETXATTRS and LISTXATTRS operation for WebImageViewer
 ---

 Key: HDFS-8462
 URL: https://issues.apache.org/jira/browse/HDFS-8462
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Akira AJISAKA
Assignee: Jagadesh Kiran N
 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, 
 HDFS-8462-02.patch


 In Hadoop 2.7.0, WebImageViewer supports the following operations:
 * {{GETFILESTATUS}}
 * {{LISTSTATUS}}
 * {{GETACLSTATUS}}
 I'm thinking it would be better for administrators if {{GETXATTRS}} and 
 {{LISTXATTRS}} were supported as well.
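 Once supported, administrators could query an offline image the same way as 
 WebHDFS. Assuming a WebImageViewer on its default address (localhost:5978), 
 the requests would look like:
 {code}
curl -i "http://localhost:5978/webhdfs/v1/<PATH>?op=LISTXATTRS"
curl -i "http://localhost:5978/webhdfs/v1/<PATH>?op=GETXATTRS&xattr.name=user.key"
 {code}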



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-17 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590855#comment-14590855
 ] 

Haohui Mai commented on HDFS-6564:
--

{code}
-  public static final Log LOG = LogFactory.getLog(CachePoolInfo.class);
+  public static final Logger LOG = LoggerFactory
+  .getLogger(CachePoolInfo.class);
{code}

The LOG variable is never used. And in terms of compatibility this should be 
fine from a practical point of view: there are no changes to the members that 
actually hold the data.

 Use slf4j instead of common-logging in hdfs-client
 --

 Key: HDFS-6564
 URL: https://issues.apache.org/jira/browse/HDFS-6564
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Rakesh R
 Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch


 hdfs-client should depend on slf4j instead of common-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8619) Erasure Coding: revisit replica counting for striped blocks

2015-06-17 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-8619:
---

 Summary: Erasure Coding: revisit replica counting for striped 
blocks
 Key: HDFS-8619
 URL: https://issues.apache.org/jira/browse/HDFS-8619
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


Currently we use the same {{BlockManager#countNodes}} method for striped 
blocks, which simply treats each internal block as a replica. However, for a 
striped block we may have more complicated scenarios, e.g., multiple replicas 
of the first internal block while some other internal blocks are missing. 
Using the current {{countNodes}} method can lead to wrong decisions in these 
scenarios.
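
A minimal sketch of the distinction, with illustrative names (not the actual BlockManager code): for a striped block, what matters is how many *distinct* internal block indices are live, not the raw replica count.

{code}
import java.util.BitSet;

class StripedLiveCounter {
  // totalBlocks = dataBlocks + parityBlocks of the EC schema, e.g. 6 + 3
  static int countLiveInternalBlocks(int[] replicaIndices, int totalBlocks) {
    BitSet seen = new BitSet(totalBlocks);
    for (int idx : replicaIndices) {
      seen.set(idx);           // duplicates of one internal block collapse
    }
    return seen.cardinality(); // distinct internal blocks actually present
  }
}
{code}

Three replicas of internal block 0 with all other internal blocks missing would count as 3 under the current method, but only 1 live internal block here.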



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8618) Cherry-pick HDFS-7546 to branch-2

2015-06-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HDFS-8618.
-
Resolution: Invalid

 Cherry-pick HDFS-7546 to branch-2
 -

 Key: HDFS-8618
 URL: https://issues.apache.org/jira/browse/HDFS-8618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8238:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~tasanuma0829] for the 
contribution.

 Move ClientProtocol to the hdfs-client
 --

 Key: HDFS-8238
 URL: https://issues.apache.org/jira/browse/HDFS-8238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
 HDFS-8238.002.patch, HDFS-8238.003.patch


 The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
 client. This jira proposes to move it into the hdfs-client module.
 The jira needs to move:
 * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
 {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
 package
 * Remove the reference of {{DistributedFileSystem}} in the javadoc
 * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
 {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-17 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590853#comment-14590853
 ] 

Aaron T. Myers commented on HDFS-6440:
--

All these changes look good to me, thanks a lot for making them, Jesse. I'll 
fix the {{TestPipelinesFailover}} whitespace issue on commit.

+1 from me. I'm going to commit this tomorrow morning, unless someone speaks up 
in the meantime.

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 3.0.0

 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
 hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
 hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590879#comment-14590879
 ] 

Colin Patrick McCabe commented on HDFS-8246:


I agree that fsck solves the "admin wants to know what file includes a block" 
use-case.  What are the other use cases?

Also, if this API is available to normal users, how do we deal with this case:

1. snapshot S1 happens: block B is in file F
2. permissions of F get changed so that only superuser can access it
3. non-superuser asks for what files contain B

Should the non-superuser be able to know that F still contains B in step 3?  
Even though he doesn't have permission to access F?  It certainly seems like he 
should know that it contained B in snapshot S1.

 Get HDFS file name based on block pool id and block id
 --

 Key: HDFS-8246
 URL: https://issues.apache.org/jira/browse/HDFS-8246
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: HDFS, hdfs-client, namenode
Reporter: feng xu
Assignee: feng xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-8246.0.patch


 This feature provides an HDFS shell command and a C/Java API to retrieve the 
 HDFS file name based on block pool id and block id.
 1. The Java API in class DistributedFileSystem
 public String getFileName(String poolId, long blockId) throws IOException
 2. The C API in hdfs.c
 char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
 3. The HDFS shell command 
  hdfs dfs [generic options] -fn poolId blockId
 This feature is useful if you have an HDFS block file name in the local file 
 system and want to find out the related HDFS file name in the HDFS name space 
 (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
 Each HDFS block file name in the local file system contains both the block 
 pool id and the block id; for example, in the HDFS block file name 
 /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
 the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id is 
 1073741825. The block pool id is uniquely related to an HDFS name node/name 
 space, and the block id is uniquely related to an HDFS file within a name 
 node/name space, so the combination of a block pool id and a block id is 
 uniquely related to an HDFS file name.
 The shell command and C/Java API do not map the block pool id to a name node, 
 so it is the user's responsibility to talk to the correct name node in a 
 federated environment that has multiple name nodes. The block pool id is used 
 by the name node to check that the user is talking to the correct name node.
 The implementation is straightforward. The client request to get the HDFS 
 file name reaches the new method String getFileName(String poolId, long 
 blockId) in FSNamesystem in the name node through RPC, and the new method 
 does the following:
 (1) Validate the block pool id.
 (2) Create a Block based on the block id.
 (3) Get BlockInfoContiguous from the Block.
 (4) Get BlockCollection from the BlockInfoContiguous.
 (5) Get the file name from the BlockCollection.
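
A hedged usage sketch: the pool id / block id extraction below is plain path parsing, while the {{getFileName}} call at the end is the *proposed* API from this jira, hypothetical until the patch lands.

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

class BlockToFileName {
  public static void main(String[] args) {
    Path blockFile = Paths.get("/hdfs/1/hadoop/hdfs/data/current"
        + "/BP-97622798-10.3.11.84-1428081035160"
        + "/current/finalized/subdir0/subdir0/blk_1073741825");
    String poolId = null;
    for (Path part : blockFile) {        // scan path elements for the pool id
      if (part.toString().startsWith("BP-")) {
        poolId = part.toString();        // BP-97622798-10.3.11.84-1428081035160
      }
    }
    long blockId = Long.parseLong(       // blk_1073741825 -> 1073741825
        blockFile.getFileName().toString().substring("blk_".length()));
    System.out.println(poolId + " / " + blockId);
    // With the proposed API (hypothetical until committed):
    //   String hdfsFile = dfs.getFileName(poolId, blockId);
  }
}
{code}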



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6192) WebHdfs call for setting quotas

2015-06-17 Thread Romain Rigaux (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Rigaux updated HDFS-6192:

Description: The WebHdfs and HttpFs API calls for setting quotas are 
missing: 
http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
  (was: The WebHdfs and HttpFs API calls for setting quotas are missing: 
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html)

 WebHdfs call for setting quotas
 ---

 Key: HDFS-6192
 URL: https://issues.apache.org/jira/browse/HDFS-6192
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.3.0
Reporter: Romain Rigaux

 The WebHdfs and HttpFs API calls for setting quotas are missing: 
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7192) DN should ignore lazyPersist hint if the writer is not local

2015-06-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7192:

Attachment: HDFS-7192.03.patch

 DN should ignore lazyPersist hint if the writer is not local
 

 Key: HDFS-7192
 URL: https://issues.apache.org/jira/browse/HDFS-7192
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-7192.01.patch, HDFS-7192.02.patch, 
 HDFS-7192.03.patch


 The DN should ignore {{allowLazyPersist}} hint to 
 {{DataTransferProtocol#writeBlock}} if the writer is not local.
 Currently we don't restrict memory writes to local clients. For in-cluster 
 clients this is not an issue as single replica writes default to the local 
 DataNode. But clients outside the cluster can still send this hint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-17 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590907#comment-14590907
 ] 

Takanobu Asanuma commented on HDFS-8238:


OK, I'll create a jira about it. Thank you for your help and commitment!

 Move ClientProtocol to the hdfs-client
 --

 Key: HDFS-8238
 URL: https://issues.apache.org/jira/browse/HDFS-8238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
 HDFS-8238.002.patch, HDFS-8238.003.patch


 The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
 client. This jira proposes to move it into the hdfs-client module.
 The jira needs to move:
 * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
 {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
 package
 * Remove the reference of {{DistributedFileSystem}} in the javadoc
 * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
 {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590931#comment-14590931
 ] 

Akira AJISAKA commented on HDFS-8615:
-

Sorry, I forgot to see the result of Jenkins build. For testing the patch, I 
built the document and uploaded to 
http://aajisaka.github.io/hadoop-project/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_ACL_Status.
 The document looks fine.

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit a HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Other than this example, there are several commands which {{-X PUT}} should 
 be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8618) Cherry-pick HDFS-7546 to branch-2

2015-06-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590763#comment-14590763
 ] 

Yongjun Zhang commented on HDFS-8618:
-

On second thought, I will create an addendum to HDFS-7546 instead of creating 
this new jira. Closing it as invalid.


 Cherry-pick HDFS-7546 to branch-2
 -

 Key: HDFS-8618
 URL: https://issues.apache.org/jira/browse/HDFS-8618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590891#comment-14590891
 ] 

Colin Patrick McCabe commented on HDFS-8480:


Hi [~zhz],

We have to distinguish between two cases where the edit log is read:
1. when the edit log is read by the NN during startup to catch up with edits 
that didn't make it into the fsimage yet
2. when the NN reads the edit log to handle an inotify request

Case #1 should be validating that the version is the newest version, since the 
upgrade process ensures this.

Case #2 should not, since otherwise we will fail when reading old edit logs.
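
A minimal sketch of that distinction, with illustrative names (not the actual patch):

{code}
class EditLogVersionCheck {
  static void validate(int logVersion, int currentVersion, boolean forInotify) {
    if (forInotify) {
      return;  // case 2: inotify may legitimately read old edit logs
    }
    // case 1: startup catch-up should only see the current layout version,
    // because the upgrade process rewrites (or hard-links) logs to it
    if (logVersion != currentVersion) {
      throw new IllegalStateException(
          "unexpected edit log version " + logVersion);
    }
  }
}
{code}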

 Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
 copying edit logs
 

 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Critical
 Attachments: HDFS-8480.00.patch, HDFS-8480.01.patch, 
 HDFS-8480.02.patch


 HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
 {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
 hard-linking instead of per-op copying to achieve the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8615:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this patch to trunk and branch-2. Thanks [~brahmareddy] for the 
contribution.

 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit a HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Other than this example, there are several commands which {{-X PUT}} should 
 be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590925#comment-14590925
 ] 

Colin Patrick McCabe commented on HDFS-8578:


bq. Just realized that, parallelism should be added at the datanode level in 
DataStorage#addStorageLocations(). Not in the BlockPoolSliceStorage.

OK, so your idea is that that will allow us to parallelize upgrades between 
different block pools.  Fair enough.

bq. And during startup, If any one of the blockpool storage directories failed 
to load/upgrade also, datanode continues to start with other directories 
available. Only if all directories are failed to load then only it will fail.

I don't think this is true.  The DN will not start up if more than 
{{dfs.datanode.failed.volumes.tolerated}} volumes have failed, as per this code:

{code}
  /**
   * An FSDataset has a directory where it loads its data files.
   */
  FsDatasetImpl(DataNode datanode, DataStorage storage, Configuration conf
  ) throws IOException {
...
    if (volsFailed > volFailuresTolerated) {
      throw new DiskErrorException("Too many failed volumes - "
          + "current valid volumes: " + storage.getNumStorageDirs()
          + ", volumes configured: " + volsConfigured
          + ", volumes failed: " + volsFailed
          + ", volume failures tolerated: " + volFailuresTolerated);
    }
{code}

bq. Updated the patch, please review.

ok
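
A hedged sketch of the parallelism under discussion: one task per storage directory submitted to a pool, with failures surfaced the way the sequential loop would surface them. Names are illustrative, not from the patch.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelDirUpgrade {
  static void upgradeAll(List<Runnable> perDirTasks) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(
        Math.max(1, Math.min(perDirTasks.size(), 8)));
    List<Future<?>> futures = new ArrayList<>();
    for (Runnable task : perDirTasks) {
      futures.add(pool.submit(task));  // e.g. one doTransition(...) per dir
    }
    pool.shutdown();
    for (Future<?> f : futures) {
      f.get();  // rethrows the first per-directory failure
    }
  }
}
{code}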

 On upgrade, Datanode should process all storage/data dirs in parallel
 -

 Key: HDFS-8578
 URL: https://issues.apache.org/jira/browse/HDFS-8578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Raju Bairishetti
Priority: Critical
 Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch


 Right now, during upgrades the datanode processes all the storage dirs 
 sequentially. Assume it takes ~20 minutes to process a single storage dir; 
 then a datanode with ~10 disks will take around 3 hours to come up.
 *BlockPoolSliceStorage.java*
 {code}
    for (int idx = 0; idx < getNumStorageDirs(); idx++) {
       doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
       assert getCTime() == nsInfo.getCTime()
           : "Data-node and name-node CTimes must be the same.";
     }
 {code}
 It would save a lot of time during major upgrades if the datanode processed 
 all storage dirs/disks in parallel.
 Can we make the datanode process all storage dirs in parallel?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590920#comment-14590920
 ] 

Hadoop QA commented on HDFS-6564:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 42s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 21s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 16s | Tests passed in 
hadoop-hdfs-client. |
| | |  40m 14s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740186/HDFS-6564-02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 015535d |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11394/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11394/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11394/console |


This message was automatically generated.

 Use slf4j instead of common-logging in hdfs-client
 --

 Key: HDFS-6564
 URL: https://issues.apache.org/jira/browse/HDFS-6564
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Rakesh R
 Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch


 hdfs-client should depend on slf4j instead of common-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8620) Clean up the checkstyle warnings about ClientProtocol

2015-06-17 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-8620:
--

 Summary: Clean up the checkstyle warnings about ClientProtocol
 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8616) Cherry pick HDFS-6495 for excess block leak

2015-06-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590578#comment-14590578
 ] 

Akira AJISAKA commented on HDFS-8616:
-

Is it HDFS-6945?

 Cherry pick HDFS-6495 for excess block leak
 ---

 Key: HDFS-8616
 URL: https://issues.apache.org/jira/browse/HDFS-8616
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp

 Busy clusters quickly leak tens or hundreds of thousands of excess blocks 
 which slow BR processing.  HDFS-6495 should be cherry picked into 2.7.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590705#comment-14590705
 ] 

Andrew Wang commented on HDFS-8608:
---

We should probably reopen HDFS-4336 and put the patch up there. I just tried 
triggering the build job though manually.

 Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
 UnderReplicatedBlocks and PendingReplicationBlocks)
 --

 Key: HDFS-8608
 URL: https://issues.apache.org/jira/browse/HDFS-8608
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 3.0.0

 Attachments: HDFS-4366-branch-2.00.patch, HDFS-8608.00.patch, 
 HDFS-8608.01.patch, HDFS-8608.02.patch


 This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
 merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590705#comment-14590705
 ] 

Andrew Wang edited comment on HDFS-8608 at 6/17/15 10:02 PM:
-

We should probably reopen HDFS-4366 and put the patch up there. I just tried 
triggering the build job though manually.


was (Author: andrew.wang):
We should probably reopen HDFS-4336 and put the patch up there. I just tried 
triggering the build job though manually.

 Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
 UnderReplicatedBlocks and PendingReplicationBlocks)
 --

 Key: HDFS-8608
 URL: https://issues.apache.org/jira/browse/HDFS-8608
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 3.0.0

 Attachments: HDFS-4366-branch-2.00.patch, HDFS-8608.00.patch, 
 HDFS-8608.01.patch, HDFS-8608.02.patch


 This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
 merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8548) Minicluster throws NPE on shutdown

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590242#comment-14590242
 ] 

Hudson commented on HDFS-8548:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #229 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/229/])
HDFS-8548. Minicluster throws NPE on shutdown. Contributed by surendra singh 
lilhore. (xyao: rev 6a76250b39f33466bdc8dabab33070c90aa1a389)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Minicluster throws NPE on shutdown
 --

 Key: HDFS-8548
 URL: https://issues.apache.org/jira/browse/HDFS-8548
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mike Drob
Assignee: surendra singh lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8548.patch


 After running Solr tests, when we attempt to shut down the mini cluster 
 that we use for our unit tests, we get an NPE in the cleanup thread. The 
 test still completes normally, but this generates a lot of extra noise.
 {noformat}
[junit4]   2 java.lang.reflect.InvocationTargetException
[junit4]   2  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
 Method)
[junit4]   2  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit4]   2  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2  at java.lang.reflect.Method.invoke(Method.java:497)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
[junit4]   2  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
[junit4]   2  at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
[junit4]   2  at 
 org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
[junit4]   2  at 
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
[junit4]   2  at 
 org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
[junit4]   2  at 
 org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
[junit4]   2  at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
[junit4]   2  at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
[junit4]   2  at 
 org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
[junit4]   2  at 
 org.apache.solr.cloud.hdfs.HdfsTestUtil.teardownClass(HdfsTestUtil.java:197)
[junit4]   2  at 
 org.apache.solr.core.HdfsDirectoryFactoryTest.teardownClass(HdfsDirectoryFactoryTest.java:67)
[junit4]   2  at 

[jira] [Commented] (HDFS-7546) Document, and set an accepting default for dfs.namenode.kerberos.principal.pattern

2015-06-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590264#comment-14590264
 ] 

Allen Wittenauer commented on HDFS-7546:


I committed this to trunk only because my time is more valuable than branch-2.

 Document, and set an accepting default for 
 dfs.namenode.kerberos.principal.pattern
 --

 Key: HDFS-7546
 URL: https://issues.apache.org/jira/browse/HDFS-7546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.1.1-beta
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: supportability
 Fix For: 3.0.0

 Attachments: HDFS-7546.patch


 This config is used in the SaslRpcClient, and the lack of a default breaks 
 cross-realm trust principals being used at clients.
 Current location: 
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L309
 The config should be documented and the default should be set to "*" to 
 preserve the prior-to-introduction behaviour.
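
A hedged sketch of what the accepting default means at the client, assuming the key name from this jira (the class is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

class PrincipalPatternDefault {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With the proposed default, an unset key accepts any server principal,
    // which restores the prior-to-introduction behaviour.
    String pattern =
        conf.get("dfs.namenode.kerberos.principal.pattern", "*");
    System.out.println("accepting pattern: " + pattern);
  }
}
{code}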



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8515) Abstract a DTP/2 HTTP/2 server

2015-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590479#comment-14590479
 ] 

Hadoop QA commented on HDFS-8515:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 47s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 52s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  
15 new checkstyle issues (total was 21, now 28). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 18s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 24s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m 31s | Tests passed in hadoop-hdfs. 
|
| | | 210m 52s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740108/HDFS-8515-v3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6e3fcff |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11390/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11390/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11390/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11390/console |


This message was automatically generated.

 Abstract a DTP/2 HTTP/2 server
 --

 Key: HDFS-8515
 URL: https://issues.apache.org/jira/browse/HDFS-8515
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Attachments: HDFS-8515-v1.patch, HDFS-8515-v2.patch, 
 HDFS-8515-v3.patch, HDFS-8515.patch


 Discussed in HDFS-8471.
 https://issues.apache.org/jira/browse/HDFS-8471?focusedCommentId=14568196page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14568196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7546) Document, and set an accepting default for dfs.namenode.kerberos.principal.pattern

2015-06-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590743#comment-14590743
 ] 

Yongjun Zhang commented on HDFS-7546:
-

Hi [~aw],

Thanks for the clarification. I created HDFS-8618 to cherry-pick to branch-2. 

What I will do is modify the corresponding CHANGES.txt to reflect that it 
will be fixed in branch-2 (I'm targeting 2.7.1), and cherry-pick both 
HDFS-7546 and HDFS-8618 to branch-2 and branch-2.7. 
 
Thanks.



 Document, and set an accepting default for 
 dfs.namenode.kerberos.principal.pattern
 --

 Key: HDFS-7546
 URL: https://issues.apache.org/jira/browse/HDFS-7546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.1.1-beta
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: supportability
 Fix For: 3.0.0

 Attachments: HDFS-7546.patch


 This config is used in the SaslRpcClient, and the lack of a default breaks 
 cross-realm trust principals being used at clients.
 Current location: 
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L309
 The config should be documented and the default should be set to "*" to 
 preserve the prior-to-introduction behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8602) Erasure Coding: Client can't read(decode) the EC files which have corrupt blocks.

2015-06-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8602:

Attachment: HDFS-8602.000.patch

Thanks very much for reporting the issue and working on this, [~kaisasak]!

I also did some debugging on the issue. Looks like the cause is a deadlock: 
after hitting the exception while reading the corrupted block, {{readToBuffer}} 
tries to print a warning message, during which {{getCurrentBlock}} is called. 
{{getCurrentBlock}} needs to acquire the input stream's lock, which is held by 
the main thread, and the main thread is in turn waiting for a response from 
the reading threads.

The patch includes a simple fix and also a unit test that can reproduce the 
issue ({{testReadCorruptedData2}}).
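
A hedged sketch of the deadlock shape described above; class and method names are illustrative, not from the actual input stream code:

{code}
import java.util.concurrent.SynchronousQueue;

class DeadlockShape {
  private final SynchronousQueue<String> results = new SynchronousQueue<>();

  synchronized String getCurrentBlock() {  // needs the stream's lock
    return "blk_...";
  }

  void mainThread() throws InterruptedException {
    synchronized (this) {  // main thread holds the stream lock...
      results.take();      // ...while blocked waiting on a reader thread
    }
  }

  void readerThread() {
    // On a read error the reader logs a warning that calls the synchronized
    // getter; it blocks on the lock the main thread still holds, so the
    // result is never offered: deadlock.
    System.err.println("failed to read " + getCurrentBlock());
    // results.put(...) is never reached
  }
}
{code}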

 Erasure Coding: Client can't read(decode) the EC files which have corrupt 
 blocks.
 -

 Key: HDFS-8602
 URL: https://issues.apache.org/jira/browse/HDFS-8602
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Kai Sasaki
 Fix For: HDFS-7285

 Attachments: HDFS-8602.000.patch


 Before the DataNode(s) report the bad block(s), when the Client reads an EC 
 file which has bad blocks, the Client gets hung up, and there are no error 
 messages.
 (When the Client reads a replicated file which has bad blocks, the bad blocks 
 are reconstructed at the same time, and the Client can read it.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8618) Cherry-pick HDFS-7546 to branch-2

2015-06-17 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-8618:
---

 Summary: Cherry-pick HDFS-7546 to branch-2
 Key: HDFS-8618
 URL: https://issues.apache.org/jira/browse/HDFS-8618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8618) Cherry-pick HDFS-7546 to branch-2

2015-06-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-8618:

Target Version/s: 2.7.1

 Cherry-pick HDFS-7546 to branch-2
 -

 Key: HDFS-8618
 URL: https://issues.apache.org/jira/browse/HDFS-8618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6249:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this patch to trunk and branch-2. Thanks [~surendrasingh] for 
the contribution!

 Output AclEntry in PBImageXmlWriter
 ---

 Key: HDFS-6249
 URL: https://issues.apache.org/jira/browse/HDFS-6249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: surendra singh lilhore
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-6249.patch, HDFS-6249_1.patch


 It would be useful if {{PBImageXmlWriter}} outputs {{AclEntry}} also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5863) Improve OfflineImageViewer

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-5863.
-
Resolution: Fixed

 Improve OfflineImageViewer
 --

 Key: HDFS-5863
 URL: https://issues.apache.org/jira/browse/HDFS-5863
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Akira AJISAKA

 This is an umbrella jira for improving Offline Image Viewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-06-17 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8622:
--

 Summary: Implement GETCONTENTSUMMARY operation for WebImageViewer
 Key: HDFS-8622
 URL: https://issues.apache.org/jira/browse/HDFS-8622
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jagadesh Kiran N
Assignee: Jagadesh Kiran N


 it would be better for administrators if {code} GETCONTENTSUMMARY {code} is 
supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2015-06-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6310:

Target Version/s: 2.8.0  (was: 2.6.0)

 PBImageXmlWriter should output information about Delegation Tokens
 --

 Key: HDFS-6310
 URL: https://issues.apache.org/jira/browse/HDFS-6310
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: BB2015-05-TBR
 Attachments: HDFS-6310.patch


 Separated from HDFS-6293.
 The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
 option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8598) Add and optimize for get LocatedFileStatus in DFSClient

2015-06-17 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8598:
-
Status: Patch Available  (was: Open)

 Add and optimize for get LocatedFileStatus  in DFSClient
 

 Key: HDFS-8598
 URL: https://issues.apache.org/jira/browse/HDFS-8598
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yong Zhang
Assignee: Yong Zhang
 Attachments: HDFS-8598.001.patch


 If we want to get the block locations of all files in one directory, we have 
 to call getFileBlockLocations for each file, which takes a long time because 
 of the many requests. 
 LocatedFileStatus carries the block locations, but DFSClient still calls 
 getFileBlockLocations for each file to fill it in. This jira tries to 
 optimize that down to a single RPC. 
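
For reference, a hedged usage sketch of the existing {{FileSystem#listLocatedStatus}} API, which yields LocatedFileStatus (with block locations) during a directory listing; the path below is illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

class ListWithLocations {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    RemoteIterator<LocatedFileStatus> it =
        fs.listLocatedStatus(new Path("/user/data"));
    while (it.hasNext()) {
      LocatedFileStatus st = it.next();
      for (BlockLocation loc : st.getBlockLocations()) {
        System.out.println(st.getPath() + " -> "
            + String.join(",", loc.getHosts()));
      }
    }
  }
}
{code}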



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8598) Add and optimize for get LocatedFileStatus in DFSClient

2015-06-17 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8598:
-
Attachment: HDFS-8598.001.patch

initial patch

 Add and optimize for get LocatedFileStatus  in DFSClient
 

 Key: HDFS-8598
 URL: https://issues.apache.org/jira/browse/HDFS-8598
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yong Zhang
Assignee: Yong Zhang
 Attachments: HDFS-8598.001.patch


 If we want to get the block locations of all files in one directory, we have 
 to call getFileBlockLocations for each file, which takes a long time because 
 of the many requests. 
 LocatedFileStatus carries the block locations, but DFSClient still calls 
 getFileBlockLocations for each file to fill it in. This jira tries to 
 optimize that down to a single RPC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7265) Use a throttler for replica write in datanode

2015-06-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7265:
-
Labels:   (was: BB2015-05-TBR)

 Use a throttler for replica write in datanode
 -

 Key: HDFS-7265
 URL: https://issues.apache.org/jira/browse/HDFS-7265
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7265_20141018.patch


 BlockReceiver processes packets in BlockReceiver.receivePacket() as follows:
 # read from socket
 # enqueue the ack
 # write to downstream
 # write to disk
 The above steps are repeated for each packet in a single thread.  When there 
 are a lot of concurrent writes in a datanode, the write time in #4 becomes 
 very long.  As a result, it leads to SocketTimeoutException since the thread 
 cannot read from the socket for a long time.
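
A hedged sketch of the idea: throttle the disk write (step 4) so a slow disk cannot starve the socket read in step 1. DataTransferThrottler is an existing HDFS utility; the wiring below is illustrative, not the actual patch.

{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

class ThrottledReplicaWrite {
  // cap replica writes at ~64 MB/s per receiver (value is illustrative)
  private final DataTransferThrottler throttler =
      new DataTransferThrottler(64L * 1024 * 1024);

  void writePacketToDisk(OutputStream out, byte[] buf, int len)
      throws IOException {
    out.write(buf, 0, len);
    throttler.throttle(len);  // sleeps as needed to honour the bandwidth cap
  }
}
{code}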



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8621) Implement GETDELEGATIONTOKEN and GETDELEGATIONTOKENS operation for WebImageViewer

2015-06-17 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591251#comment-14591251
 ] 

Haohui Mai commented on HDFS-8621:
--

DT-related calls are only for security purposes, thus they should not be 
supported by the WebImageViewer.

 Implement GETDELEGATIONTOKEN and GETDELEGATIONTOKENS operation for 
 WebImageViewer
 -

 Key: HDFS-8621
 URL: https://issues.apache.org/jira/browse/HDFS-8621
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jagadesh Kiran N
Assignee: Jagadesh Kiran N

 In Hadoop 2.7.0, WebImageViewer supports the following operations:
 {code}
.GETFILESTATUS
.LISTSTATUS
.GETACLSTATUS
 {code}
 I'm thinking it would be better for administrators if  {code} 
 .GETDELEGATIONTOKEN  {code}  and  {code} .GETDELEGATIONTOKENS  {code}  are 
 supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8078) HDFS client gets errors trying to connect to IPv6 DataNode

2015-06-17 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HDFS-8078:

Attachment: HDFS-8078.10.patch

No code change or JIRA needed for Yarn; turned out to just be a bad default for 
mapreduce.admin.reduce.child.java.opts and mapreduce.admin.map.child.java.opts 
which included -Djava.net.preferIPv4Stack=true.  It might be worth adding a 
global flag on this (perhaps related to HADOOP-11630 ? ) but it is not within 
the scope of this patch.

With that disabled in config, I've run IntegrationTestBigLinkedList 
successfully on a small (18 node) IPv6-only cluster. Resubmitting the patch 
rebased (no changes, though) to queue tests again.

 HDFS client gets errors trying to connect to IPv6 DataNode
 -

 Key: HDFS-8078
 URL: https://issues.apache.org/jira/browse/HDFS-8078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.6.0
Reporter: Nate Edel
Assignee: Nate Edel
  Labels: BB2015-05-TBR, ipv6
 Attachments: HDFS-8078.10.patch, HDFS-8078.9.patch


 1st exception, on put:
 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 2401:db00:1010:70ba:face:0:8:0:50010
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
 Appears to actually stem from code in DataNodeID which assumes it's safe to 
 append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for 
 IPv6.  NetUtils.createSocketAddr() assembles a Java URI object, which 
 requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
 Currently using InetAddress.getByName() to validate IPv6 (guava 
 InetAddresses.forString has been flaky) but could also use our own parsing. 
 (From logging this, it seems like a low-enough frequency call that the extra 
 object creation shouldn't be problematic, and for me the slight risk of 
 passing in bad input that is not actually an IPv4 or IPv6 address and thus 
 calling an external DNS lookup is outweighed by getting the address 
 normalized and avoiding rewriting parsing.)
 Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
 ---
 2nd exception (on datanode)
 15/04/13 13:18:07 ERROR datanode.DataNode: 
 dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
 operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
 /2401:db00:11:d010:face:0:2f:0:50010
 java.io.EOFException
 at java.io.DataInputStream.readShort(DataInputStream.java:315)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
 at java.lang.Thread.run(Thread.java:745)
 Which also comes as a client error: "-get: 2401 is not an IP string literal."
 This one has existing parsing logic which needs to shift to the last colon 
 rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
 rather than split.  Could alternatively use the techniques above.
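
A hedged sketch of the parsing fix the description suggests: split host:port on the *last* colon, and bracket an IPv6 literal before building a URI. The class is illustrative, not the patch itself.

{code}
class HostPort {
  static String toUriAuthority(String hostPort) {
    int i = hostPort.lastIndexOf(':');      // last colon separates the port
    String host = hostPort.substring(0, i);
    String port = hostPort.substring(i + 1);
    if (host.indexOf(':') >= 0 && !host.startsWith("[")) {
      host = "[" + host + "]";              // IPv6 literal needs brackets
    }
    return host + ":" + port;
  }

  public static void main(String[] args) {
    // prints [2401:db00:1010:70ba:face:0:8:0]:50010
    System.out.println(toUriAuthority("2401:db00:1010:70ba:face:0:8:0:50010"));
  }
}
{code}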



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8078) HDFS client gets errors trying to connect to IPv6 DataNode

2015-06-17 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HDFS-8078:

Status: Patch Available  (was: Open)

 HDFS client gets errors trying to connect to IPv6 DataNode
 -

 Key: HDFS-8078
 URL: https://issues.apache.org/jira/browse/HDFS-8078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.6.0
Reporter: Nate Edel
Assignee: Nate Edel
  Labels: BB2015-05-TBR, ipv6
 Attachments: HDFS-8078.10.patch, HDFS-8078.9.patch


 1st exception, on put:
 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 2401:db00:1010:70ba:face:0:8:0:50010
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
 Appears to actually stem from code in DataNodeID which assumes it's safe to 
 append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for 
 IPv6.  NetUtils.createSocketAddr() assembles a Java URI object, which 
 requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
 Currently using InetAddress.getByName() to validate IPv6 (guava 
 InetAddresses.forString has been flaky) but could also use our own parsing. 
 (From logging this, it seems like a low-enough frequency call that the extra 
 object creation shouldn't be problematic, and for me the slight risk of 
 passing in bad input that is not actually an IPv4 or IPv6 address and thus 
 calling an external DNS lookup is outweighed by getting the address 
 normalized and avoiding rewriting parsing.)
 Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
 ---
 2nd exception (on datanode)
 15/04/13 13:18:07 ERROR datanode.DataNode: 
 dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
 operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
 /2401:db00:11:d010:face:0:2f:0:50010
 java.io.EOFException
 at java.io.DataInputStream.readShort(DataInputStream.java:315)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
 at java.lang.Thread.run(Thread.java:745)
 Which also comes as a client error: "-get: 2401 is not an IP string literal."
 This one has existing parsing logic which needs to shift to the last colon 
 rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
 rather than split.  Could alternatively use the techniques above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-06-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591088#comment-14591088
 ] 

Akira AJISAKA commented on HDFS-8462:
-

Thanks [~jagadesh.kiran] for updating the patch. Mostly looks good to me.
I found that if the provided xattr.name is not found in the specified path, 
WebImageViewer returns 500 (Internal Server Error). In contrast, WebHDFS 
returns 403 (Forbidden) in the same condition. The reason is, if 
{{IOException}} occurs, {{FSImageHandler#exceptionCaught}} sets 500, and 
{{o.a.h.hdfs.web.ExceptionHandler#toResponse}} sets 403.
Would you fix the return code to 403 in {{FSImageHandler}} when IOException 
happens? We can safely fix it in this jira because IOException should not 
happen in the other operations (GETFILESTATUS, LISTSTATUS, GETACLSTATUS).
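
A hedged sketch of the suggested mapping: treat IOException as 403 the way WebHDFS does, and keep 500 for everything else. Netty types are shown for shape only; this is not the committed patch.

{code}
import java.io.IOException;
import io.netty.handler.codec.http.HttpResponseStatus;

class StatusMapping {
  static HttpResponseStatus statusFor(Throwable cause) {
    if (cause instanceof IOException) {
      return HttpResponseStatus.FORBIDDEN;            // 403, matches WebHDFS
    }
    return HttpResponseStatus.INTERNAL_SERVER_ERROR;  // 500 for the rest
  }
}
{code}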

 Implement GETXATTRS and LISTXATTRS operation for WebImageViewer
 ---

 Key: HDFS-8462
 URL: https://issues.apache.org/jira/browse/HDFS-8462
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Akira AJISAKA
Assignee: Jagadesh Kiran N
 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, 
 HDFS-8462-02.patch


 In Hadoop 2.7.0, WebImageViewer supports the following operations:
 * {{GETFILESTATUS}}
 * {{LISTSTATUS}}
 * {{GETACLSTATUS}}
 I'm thinking it would be better for administrators if {{GETXATTRS}} and 
 {{LISTXATTRS}} are supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8621) Implement GETDELEGATIONTOKEN and GETDELEGATIONTOKENS operation for WebImageViewer

2015-06-17 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8621:
--

 Summary: Implement GETDELEGATIONTOKEN and GETDELEGATIONTOKENS 
operation for WebImageViewer
 Key: HDFS-8621
 URL: https://issues.apache.org/jira/browse/HDFS-8621
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jagadesh Kiran N
Assignee: Jagadesh Kiran N


In Hadoop 2.7.0, WebImageViewer supports the following operations:
{code}
   .GETFILESTATUS
   .LISTSTATUS
   .GETACLSTATUS
{code}

I'm thinking it would be better for administrators if  {code} 
.GETDELEGATIONTOKEN  {code}  and  {code} .GETDELEGATIONTOKENS  {code}  are 
supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591144#comment-14591144
 ] 

Hudson commented on HDFS-8589:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8035/])
HDFS-8589. Fix unused imports in BPServiceActor and BlockReportLeaseManager 
(cmccabe) (cmccabe: rev 45ced38f10fcb9f831218b890786aaeb7987fed4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java


 Fix unused imports in BPServiceActor and BlockReportLeaseManager
 

 Key: HDFS-8589
 URL: https://issues.apache.org/jira/browse/HDFS-8589
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8589.001.patch


 Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591146#comment-14591146
 ] 

Hudson commented on HDFS-6249:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8035/])
HDFS-6249. Output AclEntry in PBImageXmlWriter. Contributed by surendra singh 
lilhore. (aajisaka: rev cc432885adb0182c2c5b3bf92edde12231fd567c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Output AclEntry in PBImageXmlWriter
 ---

 Key: HDFS-6249
 URL: https://issues.apache.org/jira/browse/HDFS-6249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: surendra singh lilhore
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-6249.patch, HDFS-6249_1.patch


 It would be useful if {{PBImageXmlWriter}} output {{AclEntry}} as well.
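 
 For reference, a sketch of producing the XML dump where the new ACL output 
 would appear (the fsimage file name is illustrative; the exact ACL element 
 names come from the patch):
 {code}
 # Dump an fsimage to XML with the offline image viewer; with this change the
 # per-inode output also carries the ACL entries
 hdfs oiv -p XML -i fsimage_0000000000000000042 -o fsimage.xml
 {code}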



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591148#comment-14591148
 ] 

Hudson commented on HDFS-8446:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8035/])
HDFS-8446. Separate safemode related operations in GetBlockLocations(). 
Contributed by Haohui Mai. (wheat9: rev 
015535dc0ad00c2ba357afb3d1e283e56ddda0d6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Separate safemode related operations in GetBlockLocations()
 ---

 Key: HDFS-8446
 URL: https://issues.apache.org/jira/browse/HDFS-8446
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
 HDFS-8446.002.patch, HDFS-8446.003.patch


 Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
 the NN is in SafeMode. This jira proposes to refactor the code to improve 
 readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591147#comment-14591147
 ] 

Hudson commented on HDFS-8615:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8035/])
HDFS-8615. Correct HTTP method in WebHDFS document. Contributed by Brahma Reddy 
Battula. (aajisaka: rev 1a169a26bcc4e4bab7697965906cb9ca7ef8329e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md


 Correct HTTP method in WebHDFS document
 ---

 Key: HDFS-8615
 URL: https://issues.apache.org/jira/browse/HDFS-8615
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-8615.patch


 For example, {{-X PUT}} should be removed from the following curl command.
 {code:title=WebHDFS.md}
 ### Get ACL Status
 * Submit an HTTP GET request.
 curl -i -X PUT 
 http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS
 {code}
 Beyond this example, there are several other commands from which {{-X PUT}} 
 should be removed. We should fix them all.
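 
 For comparison, a corrected form (HOST, PORT, and PATH are placeholders, as 
 elsewhere in the document):
 {code}
 # GETACLSTATUS is an HTTP GET; curl defaults to GET, so no -X override is needed
 curl -i "http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS"
 {code}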



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591145#comment-14591145
 ] 

Hudson commented on HDFS-8238:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8035/])
HDFS-8238. Move ClientProtocol to the hdfs-client. Contributed by Takanobu 
Asanuma. (wheat9: rev b8327744884bf86b01b8998849e2a42fb9e5c249)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
Update CHANGES.txt for HDFS-8238. (wheat9: rev 
2de586f60ded874b2c962d0ca8ef2ca7cad25518)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Move ClientProtocol to the hdfs-client
 --

 Key: HDFS-8238
 URL: https://issues.apache.org/jira/browse/HDFS-8238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
 HDFS-8238.002.patch, HDFS-8238.003.patch


 The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
 client. This jira proposes to move it into the hdfs-client module.
 The jira needs to:
 * Move {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
 {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
 package
 * Remove the reference to {{DistributedFileSystem}} in the javadoc
 * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
 {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

