[jira] [Updated] (HDFS-6945) BlockManager should remove a block from excessReplicateMap and decrement ExcessBlocks metric when the block is removed

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:

Status: Patch Available  (was: Open)

 BlockManager should remove a block from excessReplicateMap and decrement 
 ExcessBlocks metric when the block is removed
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945-003.patch, HDFS-6945-004.patch, 
 HDFS-6945.2.patch, HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some clusters, 
 even though there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when a block is deleted, BlockManager does 
 not remove it from excessReplicateMap or decrement excessBlocksCount.
 Usually the metric is decremented while processing a block report; however, if 
 the block has already been deleted, BlockManager neither removes the block from 
 excessReplicateMap nor decrements the metric.
 As a result, the metric and excessReplicateMap can grow without bound (i.e. a 
 memory leak can occur).
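
 A minimal sketch of the idea (not the actual patch; the field and helper names 
 below are assumptions modeled on BlockManager in branch-2): when a block is 
 removed, also purge it from excessReplicateMap and decrement the counter so 
 the metric cannot drift.
 {code}
 // Hypothetical helper, called from BlockManager#removeBlock(Block):
 // walk the datanodes still listed for the block and drop any excess entries.
 private void removeFromExcessReplicateMap(Block block) {
   for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
     String uuid = storage.getDatanodeDescriptor().getDatanodeUuid();
     LightWeightLinkedSet<Block> excessBlocks = excessReplicateMap.get(uuid);
     if (excessBlocks != null && excessBlocks.remove(block)) {
       excessBlocksCount.decrementAndGet();
       if (excessBlocks.isEmpty()) {
         excessReplicateMap.remove(uuid);
       }
     }
   }
 }
 {code}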



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8007) Lower RollingWindowManager log to debug

2015-03-29 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386082#comment-14386082
 ] 

Yi Liu commented on HDFS-8007:
--

+1, thanks Andrew.

 Lower RollingWindowManager log to debug
 ---

 Key: HDFS-8007
 URL: https://issues.apache.org/jira/browse/HDFS-8007
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Attachments: hdfs-8007.001.patch


 Noticed this while looking in a NN log; it's kind of spammy:
 {noformat}
 2015-03-25 00:04:12,052 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,134 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,139 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 0
 2015-03-25 00:04:12,139 INFO 
 

[jira] [Commented] (HDFS-7717) Erasure Coding: provide a tool to convert files between replication and erasure coding

2015-03-29 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386097#comment-14386097
 ] 

Kai Zheng commented on HDFS-7717:
-

bq. In order to reduce the complexity of error handling, it may be better to do 
the convert tasks as an MR job.
I do believe this needs a good discussion before we can decide. In your view, 
which way would be the most lightweight and easiest to implement? We can have 
the easy one first, then a better one later, I guess?
bq. If this information can be obtained from the NN, we can implement the 
converter tool like Mover
Sure, we can get the EC schema and policy from the NN for target files. The 
required interface and API are still under discussion. Until then, perhaps you 
can assume you already have them for your work? We will update this when we 
really get those parts ready. 
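
To make the discussion concrete, here is a hypothetical sketch of the 
copy-and-rename style of offline conversion (the Mover-like path). All names 
around EC policies are assumptions, since the EC API was still under 
discussion at this point.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.io.IOUtils;

public class EcConvertSketch {
  // Rewrite one replicated file into an EC-enabled staging dir, then
  // swap it into place. Error handling and cleanup elided.
  static void convert(DistributedFileSystem dfs, Configuration conf,
      Path src, Path stagingDir) throws Exception {
    dfs.mkdirs(stagingDir);
    // Assumed call; the real API to attach an EC schema/policy to a
    // directory was not finalized at the time of this discussion:
    // dfs.setErasureCodingPolicy(stagingDir, ecPolicyName);
    Path tmp = new Path(stagingDir, src.getName());
    try (FSDataInputStream in = dfs.open(src);
         FSDataOutputStream out = dfs.create(tmp)) {
      IOUtils.copyBytes(in, out, conf, false);
    }
    dfs.rename(tmp, src, Options.Rename.OVERWRITE);
  }
}
{code}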

 Erasure Coding: provide a tool to convert files between replication and 
 erasure coding
 ---

 Key: HDFS-7717
 URL: https://issues.apache.org/jira/browse/HDFS-7717
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Kai Sasaki

 We need a tool to do offline conversion between replication and erasure 
 coding. The tool itself can either utilize MR just like the current distcp, 
 or act like the balancer/mover. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386118#comment-14386118
 ] 

Brahma Reddy Battula commented on HDFS-7060:


Yes, 002.patch uploaded by me :)

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under a heavy write load:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7397) The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading

2015-03-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386135#comment-14386135
 ] 

Brahma Reddy Battula commented on HDFS-7397:


[~szetszwo] and [~cmccabe], can we conclude something to bring this issue to 
closure?

 The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading
 

 Key: HDFS-7397
 URL: https://issues.apache.org/jira/browse/HDFS-7397
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Minor

 For dfs.client.read.shortcircuit.streams.cache.size, is it in MB or KB?  
 Interestingly, it is neither in MB nor KB.  It is the number of shortcircuit 
 streams.
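
 For illustration, a hedged snippet of how a client would read the key (the 
 256 default matches branch-2 as far as I can tell, but treat it as an 
 assumption): the value is a count of cached streams, not a byte size.
 {code}
 import org.apache.hadoop.conf.Configuration;

 public class ShortCircuitCacheSize {
   public static void main(String[] args) {
     // The value is the number of cached short-circuit streams, not bytes.
     Configuration conf = new Configuration();
     int cachedStreams = conf.getInt(
         "dfs.client.read.shortcircuit.streams.cache.size", 256);
     System.out.println("short-circuit cache keeps " + cachedStreams
         + " streams");
   }
 }
 {code}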



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8007) Lower RollingWindowManager log to debug

2015-03-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386153#comment-14386153
 ] 

Vinayakumar B commented on HDFS-8007:
-

Seems like we have a duplicate of the exact same issue: 
HDFS-7890

 Lower RollingWindowManager log to debug
 ---

 Key: HDFS-8007
 URL: https://issues.apache.org/jira/browse/HDFS-8007
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Attachments: hdfs-8007.001.patch


 Noticed this while looking in a NN log; it's kind of spammy:
 {noformat}
 2015-03-25 00:04:12,052 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,134 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,139 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command 

[jira] [Commented] (HDFS-7997) The first non-existing xattr should also throw IOException

2015-03-29 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386177#comment-14386177
 ] 

Yi Liu commented on HDFS-7997:
--

[~sinago], the example you gave is not correct: the xattr name should be prefixed 
with user/trusted/security/system/raw. But the patch looks good.

 The first non-existing xattr should also throw IOException
 --

 Key: HDFS-7997
 URL: https://issues.apache.org/jira/browse/HDFS-7997
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Attachments: HDFS-7997-001.patch


 We use the following code snippet to get/set xattrs. However, if no xattrs 
 have ever been set, the first getXAttr returns null while the second one 
 throws an exception with a message like "At least one of the attributes 
 provided was not found.". This is not expected; we believe both calls should 
 behave in the same way - i.e. either both getXAttr calls return null or both 
 throw an exception with the "... not found" message. We will provide a 
 patch to make them both throw the exception.
 {code}
 attrValueNM = fs.getXAttr(path, "nm");
 if (attrValueNM == null) {
   fs.setXAttr(path, "nm", DEFAULT_VALUE);
 }
 attrValueNN = fs.getXAttr(path, "nn");
 if (attrValueNN == null) {
   fs.setXAttr(path, "nn", DEFAULT_VALUE);
 }
 {code}
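 A small hedged sketch of the intended behavior after the patch (hypothetical 
 xattr name): a lookup of any xattr that was never set throws, so callers 
 handle one consistent code path instead of branching on null.
 {code}
 try {
   byte[] value = fs.getXAttr(path, "user.nm");  // never set on this path
 } catch (IOException e) {
   // Expected after the patch, for the first lookup as well:
   // "At least one of the attributes provided was not found"
   fs.setXAttr(path, "user.nm", DEFAULT_VALUE);
 }
 {code}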



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386087#comment-14386087
 ] 

Hudson commented on HDFS-6263:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7455 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7455/])
HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties. Contributed 
by Abhiraj Butala. (aajisaka: rev 257c77f895e8e4c3d8748909ebbd3ba7e7f880fc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HDFS-6263
 URL: https://issues.apache.org/jira/browse/HDFS-6263
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Abhiraj Butala
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-6263.patch


 HDFS-side of HADOOP-10525.
 {code}
 # uncomment the next line to limit number of backup files
 # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 {code}
 In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
 above lines should be removed because the appender (DRFA) doesn't support 
 MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386110#comment-14386110
 ] 

Xinwei Qin  commented on HDFS-7060:
---

[~brahmareddy] Thanks for your review.
bq. some minor nits: indents are missed (2 spaces + 2 tabs)
The indent is just 2 spaces in the Hadoop code format, so I think the 002.patch 
formatting has no problem.
If not, please correct me.

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under a heavy write load:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7997) The first non-existing xattr should also throw IOException

2015-03-29 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7997:
-
Priority: Minor  (was: Major)
Target Version/s: 2.8.0

 The first non-existing xattr should also throw IOException
 --

 Key: HDFS-7997
 URL: https://issues.apache.org/jira/browse/HDFS-7997
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Attachments: HDFS-7997-001.patch


 We use the following code snippet to get/set xattrs. However, if no xattrs 
 have ever been set, the first getXAttr returns null while the second one 
 throws an exception with a message like "At least one of the attributes 
 provided was not found.". This is not expected; we believe both calls should 
 behave in the same way - i.e. either both getXAttr calls return null or both 
 throw an exception with the "... not found" message. We will provide a 
 patch to make them both throw the exception.
 {code}
 attrValueNM = fs.getXAttr(path, "nm");
 if (attrValueNM == null) {
   fs.setXAttr(path, "nm", DEFAULT_VALUE);
 }
 attrValueNN = fs.getXAttr(path, "nn");
 if (attrValueNN == null) {
   fs.setXAttr(path, "nn", DEFAULT_VALUE);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6408) Remove redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6408:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed the v2 patch to trunk and branch-2. Thanks [~abutala] for the 
contribution!

 Remove redundant definitions in log4j.properties
 

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager.

2015-03-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386158#comment-14386158
 ] 

Vinayakumar B commented on HDFS-7890:
-

Hi [~andreina],
I think a better message could be like topN *users* size for command 
{} is: {}.
IMO it will be in sync with the javadoc description:
{code}
/**
 * Calculates the top N users over a time interval.
 */
{code}
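
A one-line sketch of what that would look like once the level is lowered 
(assuming an slf4j-style logger with {} placeholders, as in the suggested 
wording; {{command}} and {{topNUsers}} are hypothetical local names):
{code}
LOG.debug("topN users size for command {} is: {}", command, topNUsers.size());
{code}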

 Improve information on Top users for metrics in RollingWindowsManager.
 ---

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-7890.1.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager and lower log level

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386188#comment-14386188
 ] 

Hudson commented on HDFS-7890:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7457 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7457/])
HDFS-7890. Improve information on Top users for metrics in 
RollingWindowsManager and lower log level (Contributed by J.Andreina) 
(vinayakumarb: rev 1ed9fb76645ecd195afe0067497dca10a3fb997d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java


 Improve information on Top users for metrics in RollingWindowsManager and 
 lower log level
 -

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-7890.1.patch, HDFS-7890.2.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6408) Remove redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6408:

Summary: Remove redundant definitions in log4j.properties  (was: Redundant 
definitions in log4j.properties)

 Remove redundant definitions in log4j.properties
 

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager.

2015-03-29 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-7890:
-
Attachment: HDFS-7890.2.patch

Hi Vinayakumar, I uploaded the patch with the changes.
Please review.

 Improve information on Top users for metrics in RollingWindowsManager.
 ---

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-7890.1.patch, HDFS-7890.2.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6263:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~abutala] for the contribution.

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HDFS-6263
 URL: https://issues.apache.org/jira/browse/HDFS-6263
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Abhiraj Butala
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-6263.patch


 HDFS-side of HADOOP-10525.
 {code}
 # uncomment the next line to limit number of backup files
 # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 {code}
 In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
 above lines should be removed because the appender (DRFA) doesn't support 
 MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager.

2015-03-29 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386162#comment-14386162
 ] 

J.Andreina commented on HDFS-7890:
--

Thanks Vinayakumar for reviewing the patch. 
I agree with you. Will update the patch soon.

 Improve information on Top users for metrics in RollingWindowsManager.
 ---

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-7890.1.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager and lower log level

2015-03-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7890:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Improve information on Top users for metrics in RollingWindowsManager and 
 lower log level
 -

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-7890.1.patch, HDFS-7890.2.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8007) Lower RollingWindowManager log to debug

2015-03-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8007:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 Lower RollingWindowManager log to debug
 ---

 Key: HDFS-8007
 URL: https://issues.apache.org/jira/browse/HDFS-8007
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Attachments: hdfs-8007.001.patch


 Noticed this while looking in a NN log; it's kind of spammy:
 {noformat}
 2015-03-25 00:04:12,052 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,134 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,135 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,136 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command * is: 8
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command rename is: 2
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command mkdirs is: 1
 2015-03-25 00:04:12,137 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCachePools is: 2
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listCacheDirectives is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command listStatus is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command getfileinfo is: 5
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setTimes is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command delete is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command open is: 1
 2015-03-25 00:04:12,138 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command create is: 2
 2015-03-25 00:04:12,139 INFO 
 org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN 
 size for command setPermission is: 0
 2015-03-25 00:04:12,139 INFO 
 

[jira] [Commented] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager and lower log level

2015-03-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386180#comment-14386180
 ] 

Vinayakumar B commented on HDFS-7890:
-

+1 for the latest patch.
Committed to trunk and branch-2.
Thanks [~andreina].

 Improve information on Top users for metrics in RollingWindowsManager and 
 lower log level
 -

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-7890.1.patch, HDFS-7890.2.patch


 Information on Top users for metrics in RollingWindowsManager should be 
 improved and can be moved to DEBUG.
 Currently it is INFO logs at namenode side and does not provide much 
 information. 
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6945) BlockManager should remove a block from excessReplicateMap and decrement ExcessBlocks metric when the block is removed

2015-03-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386206#comment-14386206
 ] 

Hadoop QA commented on HDFS-6945:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707703/HDFS-6945-004.patch
  against trunk revision 3d9132d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10103//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10103//console

This message is automatically generated.

 BlockManager should remove a block from excessReplicateMap and decrement 
 ExcessBlocks metric when the block is removed
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945-003.patch, HDFS-6945-004.patch, 
 HDFS-6945.2.patch, HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some clusters, 
 even though there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when a block is deleted, BlockManager does 
 not remove it from excessReplicateMap or decrement excessBlocksCount.
 Usually the metric is decremented while processing a block report; however, if 
 the block has already been deleted, BlockManager neither removes the block from 
 excessReplicateMap nor decrements the metric.
 As a result, the metric and excessReplicateMap can grow without bound (i.e. a 
 memory leak can occur).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7717) Erasure Coding: provide a tool to convert files between replication and erasure coding

2015-03-29 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386063#comment-14386063
 ] 

Kai Sasaki commented on HDFS-7717:
--

[~drankye] Thank you for the comment. In order to reduce the complexity of error 
handling, it may be better to do the convert tasks as an MR job. So I am also 
considering how to get the metadata of each file, such as the EC schema and 
policy. If this information can be obtained from the NN, we can implement the 
converter tool like {{Mover}}. Does that make sense? If there are any good ideas 
about getting EC information from the NN, can you share them with me?
Thank you.

 Erasure Coding: provide a tool to convert files between replication and 
 erasure coding
 ---

 Key: HDFS-7717
 URL: https://issues.apache.org/jira/browse/HDFS-7717
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Kai Sasaki

 We need a tool to do offline conversion between replication and erasure 
 coding. The tool itself can either utilize MR just like the current distcp, 
 or act like the balancer/mover. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6408) Redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386081#comment-14386081
 ] 

Akira AJISAKA commented on HDFS-6408:
-

+1 (binding). The test failure looks unrelated.

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Srikanth Sundarrajan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386108#comment-14386108
 ] 

Srikanth Sundarrajan commented on HDFS-7060:


Avoiding locks in FsDatasetSpi::getStorageReports() seems like a reasonable 
approach and would unblock the DN's heartbeat. 
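
A hedged sketch of that direction (method and constructor signatures are 
approximations of the branch-2 code, not the actual patch): build the reports 
from per-volume counters without entering the FsDatasetImpl monitor.
{code}
// Assumes per-volume usage counters are already maintained atomically,
// so no dataset-wide lock is needed to read them.
public StorageReport[] getStorageReports(String bpid) throws IOException {
  List<StorageReport> reports = new ArrayList<StorageReport>();
  for (FsVolumeImpl volume : volumes.getVolumes()) { // snapshot of volumes
    reports.add(new StorageReport(volume.toDatanodeStorage(),
        false /* failed */, volume.getCapacity(), volume.getDfsUsed(),
        volume.getAvailable(), volume.getBlockPoolUsed(bpid)));
  }
  return reports.toArray(new StorageReport[reports.size()]);
}
{code}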

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under a heavy write load:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6408) Remove redundant definitions in log4j.properties

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386107#comment-14386107
 ] 

Hudson commented on HDFS-6408:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7456 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7456/])
HDFS-6408. Remove redundant definitions in log4j.properties. Contributed by 
Abhiraj Butala. (aajisaka: rev 232eca944a721c62f37e9012546a7fa814da6e01)
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove redundant definitions in log4j.properties
 

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7997) The first non-existing xattr should also throw IOException

2015-03-29 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386205#comment-14386205
 ] 

zhouyingchao commented on HDFS-7997:


Thank you for pointing it out. We are actually using names of the form 
user.xxx; the pseudo-code snippet here is only used to illustrate the issue.

 The first non-existing xattr should also throw IOException
 --

 Key: HDFS-7997
 URL: https://issues.apache.org/jira/browse/HDFS-7997
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
Priority: Minor
 Attachments: HDFS-7997-001.patch


 We use the following code snippet to get/set xattrs. However, if no xattrs 
 have ever been set, the first getXAttr returns null while the second one 
 throws an exception with a message like "At least one of the attributes 
 provided was not found."  This is not expected; we believe they should 
 behave in the same way - i.e. either both getXAttr calls return null, or 
 both throw an exception with the "... not found" message.  We will provide 
 a patch to make them both throw an exception.
 
 {code}
 attrValueNM = fs.getXAttr(path, nm);
 if (attrValueNM == null) {
   fs.setXAttr(path, nm, DEFAULT_VALUE);
 }
 attrValueNN = fs.getXAttr(path, nn);
 if (attrValueNN == null) {
   fs.setXAttr(path, nn, DEFAULT_VALUE);
 }
 {code}
 
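 For illustration, a minimal sketch (editor's example, not part of the patch; 
 the helper name and the message check are assumptions) of how callers could 
 handle the proposed uniform throw-on-missing behavior:
 {code}
 import java.io.IOException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Hypothetical helper: with consistent "throw when the xattr is missing"
 // semantics, callers can map that exception to a default value in one place.
 static byte[] getXAttrOrDefault(FileSystem fs, Path path, String name,
     byte[] defaultValue) throws IOException {
   try {
     byte[] value = fs.getXAttr(path, name);
     return value != null ? value : defaultValue;
   } catch (IOException e) {
     // Assumed check: the "not found" message distinguishes a missing
     // attribute from a genuine I/O failure.
     if (e.getMessage() != null && e.getMessage().contains("not found")) {
       return defaultValue;
     }
     throw e;
   }
 }
 {code}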



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6408) Redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386092#comment-14386092
 ] 

Akira AJISAKA commented on HDFS-6408:
-

The conflict is caused by HDFS-6263. I will rebase the patch and commit it.

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408.patch


 The following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice; the duplicates should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6408) Redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6408:

Attachment: HDFS-6408-002.patch

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 The following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice; the duplicates should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6408) Redundant definitions in log4j.properties

2015-03-29 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6408:

  Component/s: test
 Target Version/s: 2.8.0
Affects Version/s: 2.6.0
 Hadoop Flags: Reviewed

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 The following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice; the duplicates should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7890) Improve information on Top users for metrics in RollingWindowsManager and lower log level

2015-03-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7890:

Summary: Improve information on Top users for metrics in 
RollingWindowsManager and lower log level  (was: Improve information on Top 
users for metrics in RollingWindowsManager .)

 Improve information on Top users for metrics in RollingWindowsManager and 
 lower log level
 -

 Key: HDFS-7890
 URL: https://issues.apache.org/jira/browse/HDFS-7890
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-7890.1.patch, HDFS-7890.2.patch


 Information on top users for metrics in RollingWindowsManager should be 
 improved, and the messages can be moved to DEBUG.
 Currently they are logged at INFO on the NameNode side and do not provide 
 much information.
 {noformat}
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 finalizeRollingUpgrade is: 1
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command 
 startRollingUpgrade is: 0
 15/03/04 13:21:02 INFO window.RollingWindowManager: topN size for command * 
 is: 1
 {noformat}
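 For reference, the usual way to demote such chatty messages (a generic 
 sketch of the guarded-logging idiom, not the actual patch) is:
 {code}
 // Build and emit the message only when DEBUG is enabled, so the INFO log
 // stays quiet and no string concatenation happens at higher levels.
 if (LOG.isDebugEnabled()) {
   LOG.debug("topN size for command " + command + " is: " + size);
 }
 {code}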



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385790#comment-14385790
 ] 

Hudson commented on HDFS-7501:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2079 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2079/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.
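 A minimal sketch of the clamping idea (editor's illustration; the accessor 
 names are assumptions, not the committed patch):
 {code}
 // Report 0 instead of a negative delta while checkpoints advance on the SBN.
 public long getTransactionsSinceLastCheckpoint() {
   long delta = getEditLog().getLastWrittenTxId()
       - getFSImage().getStorage().getMostRecentCheckpointTxId();
   return Math.max(0, delta);
 }
 {code}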



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6408) Redundant definitions in log4j.properties

2015-03-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386084#comment-14386084
 ] 

Hadoop QA commented on HDFS-6408:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645399/HDFS-6408.patch
  against trunk revision 257c77f.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10104//console

This message is automatically generated.

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408.patch


 The following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice; the duplicates should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386122#comment-14386122
 ] 

Xinwei Qin  commented on HDFS-7060:
---

2nd patch

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6408) Remove redundant definitions in log4j.properties

2015-03-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386120#comment-14386120
 ] 

Hadoop QA commented on HDFS-6408:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12708070/HDFS-6408-002.patch
  against trunk revision 257c77f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10105//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10105//console

This message is automatically generated.

 Remove redundant definitions in log4j.properties
 

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-6408-002.patch, HDFS-6408.patch


 The following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice; the duplicates should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386121#comment-14386121
 ] 

Xinwei Qin  commented on HDFS-7060:
---

Sorry, my mistake. I meant the 2nd patch, which should be 001.patch.

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7717) Erasure Coding: provide a tool for convert files between replication and erasure coding

2015-03-29 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386218#comment-14386218
 ] 

Kai Sasaki commented on HDFS-7717:
--

If the NN provides an API for getting meta information such as the EC scheme 
and policy, a mover-like tool implemented as an MR job might be the easier 
option. This approach depends on the implementation of the NN API.

The second option is a converter that itself performs the conversion, error 
handling, and reconstruction. This approach requires much more implementation 
work than the first, and I am not sure it is the better one. Given the 
complexity, I think it might not be a good idea anyway.

 Erasure Coding: provide a tool for convert files between replication and 
 erasure coding
 ---

 Key: HDFS-7717
 URL: https://issues.apache.org/jira/browse/HDFS-7717
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Kai Sasaki

 We need a tool to do offline conversion between replication and erasure 
 coding. The tool itself can either utilize MR just like the current distcp, 
 or act like the balancer/mover. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8011) standby nn can't started

2015-03-29 Thread fujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fujie updated HDFS-8011:

Attachment: QQ图片20150329201426.png

 standby nn can't started
 

 Key: HDFS-8011
 URL: https://issues.apache.org/jira/browse/HDFS-8011
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.3.0
 Environment: CentOS 6.2, 64-bit
Reporter: fujie
 Attachments: QQ图片20150329201426.png


 1. After the active NN died, the standby NN became active (via ZKFC).
 2. Then we started the new standby NN and a FATAL error occurred.
 The standby NN can't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8011) standby nn can't started

2015-03-29 Thread fujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fujie updated HDFS-8011:

Attachment: (was: QQ图片20150329201426.png)

 standby nn can't started
 

 Key: HDFS-8011
 URL: https://issues.apache.org/jira/browse/HDFS-8011
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.3.0
 Environment: CentOS 6.2, 64-bit
Reporter: fujie

 1. After the active NN died, the standby NN became active (via ZKFC).
 2. Then we started the new standby NN and a FATAL error occurred.
 The standby NN can't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385762#comment-14385762
 ] 

Hudson commented on HDFS-7501:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #147 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/147/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385771#comment-14385771
 ] 

Hudson commented on HDFS-7501:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2097 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2097/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8011) standby nn can't started

2015-03-29 Thread fujie (JIRA)
fujie created HDFS-8011:
---

 Summary: standby nn can't started
 Key: HDFS-8011
 URL: https://issues.apache.org/jira/browse/HDFS-8011
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.3.0
 Environment: CentOS 6.2, 64-bit
Reporter: fujie


1. After the active NN died, the standby NN became active (via ZKFC).

2. Then we started the new standby NN and a FATAL error occurred.

The standby NN can't work.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8011) standby nn can't started

2015-03-29 Thread fujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fujie updated HDFS-8011:

Attachment: namenode log.jpg

Uploaded the error log.

 standby nn can't started
 

 Key: HDFS-8011
 URL: https://issues.apache.org/jira/browse/HDFS-8011
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.3.0
 Environment: CentOS 6.2, 64-bit
Reporter: fujie
 Attachments: namenode log.jpg


 1. After the active NN died, the standby NN became active (via ZKFC).
 2. Then we started the new standby NN and a FATAL error occurred.
 The standby NN can't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385749#comment-14385749
 ] 

Hudson commented on HDFS-7501:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/138/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8000) Potential deadlock #HeartbeatManager

2015-03-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385843#comment-14385843
 ] 

Brahma Reddy Battula commented on HDFS-8000:


Thanks for taking a look into this issue...
{quote}
It appears that you have debug logging enabled for block state changes. Is that 
right?
{quote}

Yes. As the DN was not getting registered with the NN, I had started the NN in 
debug mode... Since I configured 48 data dirs and have 9000+ blocks, the DN is 
taking more time for block scanning and the cluster is non-functional...
{quote}
Perhaps the logging could be moved outside of the lock, similar to what we've 
done in other recent patches.
{quote}
Yes, moving it out is always good.

One more question here: the NN is taking 50 minutes to load a 7 GB fsimage, 
which I need to look into more...
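
For reference, the pattern being discussed (an illustrative sketch, not the 
actual BlockManager code) is to build the message while holding the lock but 
emit it only after the monitor is released:
{code}
String msg;
synchronized (namesystemLock) {           // hypothetical lock object
  msg = "BLOCK* state change: " + block;  // capture state while it is consistent
}
LOG.info(msg);                            // log after releasing the monitor
{code}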




 Potential deadlock #HeartbeatManager
 

 Key: HDFS-8000
 URL: https://issues.apache.org/jira/browse/HDFS-8000
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: NNtd.out


 Cluster loaded with 9000+ blocks.
 Restart a DN, then access the NN UI: the NN will hang.
 Will attach the thread dump (td).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7060:
---
Priority: Critical  (was: Major)

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385856#comment-14385856
 ] 

Brahma Reddy Battula commented on HDFS-7060:


[~xinwei] thanks for working on this issue... The patch overall looks good to 
me. +1 (non-binding).

One minor nit: the indents are off (2 spaces + 2 tabs). Attaching a patch for 
the same.

[~wheat9], can you please give comments? I have seen problems (DNs going into 
the Dead state) in a big cluster where a DN has 9000+ blocks and 48 data dirs 
configured...

Please correct me if I am wrong. I am marking this as critical; change it back 
if you feel that is not required.

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7060:
---
Attachment: HDFS-7060-002.patch

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-03-29 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385703#comment-14385703
 ] 

Rakesh R commented on HDFS-7949:


I'd like to take this forward. Below is my initial thought; could you please 
have a look at it when you get a chance. Thanks!

For the file size computation we should consider both data blocks and parity 
blocks (for an m + k striped block, i.e., m data blocks and k parity blocks). 
Since our current stripe {{chunkSize}} is fixed (64KB), we should be able to 
compute the file size from m, k, and numBytes. I think we can do the 
computation similar to the {{spaceConsumed}} logic. Please correct me if I'm 
missing anything.

{code}
for (StripedBlockProto p : f.getStripedBlocks().getBlocksList()) {
  // parity bytes = ceil(numBytes / (dataBlockNum * chunkSize)) full stripes,
  // each adding parityBlockNum cells of chunkSize bytes; then add the data bytes
  size += ((p.getBlock().getNumBytes() - 1) / (p.getDataBlockNum() * chunkSize) + 1)
      * chunkSize * p.getParityBlockNum()
      + p.getBlock().getNumBytes();
}
{code}
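
As a quick sanity check of the formula (editor's arithmetic with illustrative 
numbers, assuming a 6+3 schema and the fixed 64KB chunk):
{code}
long chunkSize = 64 * 1024;         // fixed 64KB stripe cell
int dataBlockNum = 6, parityBlockNum = 3;
long numBytes = 1024 * 1024;        // 1MB of user data in the block group
// full stripes = ceil(numBytes / (dataBlockNum * chunkSize)) = 3
long stripes = (numBytes - 1) / (dataBlockNum * chunkSize) + 1;
long size = stripes * chunkSize * parityBlockNum + numBytes;
// size = 3 * 65536 * 3 + 1048576 = 1638400 bytes (parity + data)
{code}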

 WebImageViewer need support file size calculation with striped blocks
 -

 Key: HDFS-7949
 URL: https://issues.apache.org/jira/browse/HDFS-7949
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng
Assignee: Rakesh R
Priority: Minor

 The file size calculation should be changed when the blocks of the file are 
 striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385698#comment-14385698
 ] 

Hudson commented on HDFS-7501:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #147 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/147/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385696#comment-14385696
 ] 

Hudson commented on HDFS-7501:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #881 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/881/])
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan. (harsh: rev 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
 Fix For: 2.8.0

 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7060) Contentions of the monitor of FsDatasetImpl block DN's heartbeat

2015-03-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385939#comment-14385939
 ] 

Hadoop QA commented on HDFS-7060:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12708051/HDFS-7060-002.patch
  against trunk revision 3d9132d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10102//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10102//console

This message is automatically generated.

 Contentions of the monitor of FsDatasetImpl block DN's heartbeat
 

 Key: HDFS-7060
 URL: https://issues.apache.org/jira/browse/HDFS-7060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Xinwei Qin 
Priority: Critical
 Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
 HDFS-7060.001.patch


 We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
 when the DN is under heavy load of writes:
 {noformat}
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
 - locked 0x000780612fd8 (a java.lang.Object)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
 - waiting to lock 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:744)
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1006)
 at 
 org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
 - locked 0x000780304fb8 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:169)
 at 
 

[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-03-29 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386023#comment-14386023
 ] 

Kai Zheng commented on HDFS-7285:
-

bq.At this stage, I think extended storage policy is a good term to use in our 
APIs (maybe we can abbreviate it as XStoragePolicy).
Sounds good to me. {{XStoragePolicy}} is nice. 

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
 data reliability, compared to the existing HDFS 3-replica approach. For 
 example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
 blocks with a storage overhead of only 40%. This makes EC a quite attractive 
 alternative for big data storage, particularly for cold data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contrib packages in HDFS but was removed after Hadoop 2.0 for 
 maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
 on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
 cold files that are not intended to be appended anymore; 3) the pure-Java EC 
 coding implementation is extremely slow in practical use. Due to these, it 
 might not be a good idea to just bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of any external dependencies and makes the feature self-contained 
 and independently maintained. This design layers the EC feature on the 
 storage type support and is intended to be compatible with existing HDFS 
 features like caching, snapshots, encryption, and high availability. The 
 design will also support different EC coding schemes, implementations, and 
 policies for different deployment scenarios. By utilizing advanced libraries 
 (e.g. the Intel ISA-L library), an implementation can greatly improve the 
 performance of EC encoding/decoding and make the EC solution even more 
 attractive. We will post the design document soon. 
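 (Editor's arithmetic, for concreteness: a 10+4 layout stores 14 cells per 10 
 cells of data, i.e. 4/10 = 40% overhead while tolerating any 4 lost blocks; 
 3-way replication stores 3 full copies, a 200% overhead, and tolerates only 
 2 losses.)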



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7722) DataNode#checkDiskError should also remove Storage when error is found.

2015-03-29 Thread Joe Pallas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386029#comment-14386029
 ] 

Joe Pallas commented on HDFS-7722:
--

Sure, [~eddyxu].  HDFS-5194 is about improving support for alternative storage 
implementations.  Assuming that volumes always correspond to directories in the 
local file system limits the ability to implement other storage architectures, 
such as a directly attached object store or perhaps a block-level device with a 
lightweight user-level file system layer optimized for storing block replicas.

The {{FsDatasetSpi}} interface tries to abstract out the essentials of storing 
replicas, and {{FsVolumeSpi}} is an abstract unit of storage used by the 
dataset to represent some subset of all the available storage (typically a 
single drive in the default implementation).  Advertising that volumes are 
directories doesn't just limit alternative implementations, it also makes it 
harder to evolve the default implementation, because the scope of changes is 
harder to determine once implementation details leak through the abstraction.

That's my perspective.  Maintaining these abstractions takes some work, but it 
has benefits for readability/maintainability of the default implementation as 
well as for alternative implementations.
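
To make the argument concrete, a minimal, hypothetical sketch (deliberately 
simplified, not the actual {{FsVolumeSpi}} contract) of a volume abstraction 
that says nothing about directories:
{code}
import java.io.IOException;
import java.io.InputStream;

// A volume exposes identity, capacity, and replica I/O; nothing here assumes
// the storage is backed by a local-filesystem directory.
interface ReplicaVolume {
  String getStorageID();                 // stable identifier, not a path
  long getCapacity() throws IOException; // bytes this volume can hold
  InputStream openReplica(long blockId) throws IOException; // read a replica
}
{code}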

 DataNode#checkDiskError should also remove Storage when error is found.
 ---

 Key: HDFS-7722
 URL: https://issues.apache.org/jira/browse/HDFS-7722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7722.000.patch, HDFS-7722.001.patch, 
 HDFS-7722.002.patch, HDFS-7722.003.patch, HDFS-7722.004.patch


 When {{DataNode#checkDiskError}} finds disk errors, it removes all block 
 metadata from {{FsDatasetImpl}}. However, it does not remove the 
 corresponding {{DataStorage}} and {{BlockPoolSliceStorage}}. 
 The result is that we cannot directly run {{reconfig}} to hot-swap the 
 failed disks without changing the configuration file.
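 A hypothetical outline of the fix (editor's sketch; the method names are 
 assumptions, not the committed patch):
 {code}
 // After the dataset drops the replicas on a failed volume, also remove the
 // volume at the storage layer so a later "reconfig" can re-add the same
 // directory without editing the configuration file.
 void onDiskError(Set<File> failedDirs) {
   fsDataset.removeVolumes(failedDirs);        // existing: drop block metadata
   for (File dir : failedDirs) {
     dataStorage.removeVolume(dir);            // proposed: forget the StorageDirectory
     blockPoolSliceStorage.removeVolume(dir);  // ...and its block-pool slice
   }
 }
 {code}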



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)