[jira] [Created] (HDFS-9035) Remove the max size conf of xattr

2015-09-08 Thread Yi Liu (JIRA)
Yi Liu created HDFS-9035:


 Summary: Remove the max size conf of xattr
 Key: HDFS-9035
 URL: https://issues.apache.org/jira/browse/HDFS-9035
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


As discussed in HDFS-8900, we now have a hard limit on the max size, so we can 
remove the max size config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9033:
---
Attachment: HDFS-9033.patch

> In a metasave file, "NaN" is getting printed for cacheused%
> ---
>
> Key: HDFS-9033
> URL: https://issues.apache.org/jira/browse/HDFS-9033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9033.patch
>
>
> In a metasave file, "NaN" is getting printed for cacheused% --
> For metasave file --
> hdfs dfsadmin -metasave fnew
> vi fnew
> Metasave: Number of datanodes: 3
> DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 
> 185318637568(172.59 GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 
> 07 17:22:42
> In DN report, Cache is  -
> hdfs dfsadmin -report
> Decommission Status : Normal
> Configured Capacity: 211378954240 (196.86 GB)
> DFS Used: 3121152 (2.98 MB)
> Non DFS Used: 16376107008 (15.25 GB)
> DFS Remaining: 194999726080 (181.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 92.25%
> {color:red}
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8953) DataNode Metrics logging

2015-09-08 Thread Kanaka Kumar Avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanaka Kumar Avvaru updated HDFS-8953:
--
Attachment: HDFS-8953-02.patch

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to add 
> a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8704) Erasure Coding: client fails to write large file when one datanode fails

2015-09-08 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8704:

Attachment: HDFS-8704-HDFS-7285-008.patch

> Erasure Coding: client fails to write large file when one datanode fails
> 
>
> Key: HDFS-8704
> URL: https://issues.apache.org/jira/browse/HDFS-8704
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8704-000.patch, HDFS-8704-HDFS-7285-002.patch, 
> HDFS-8704-HDFS-7285-003.patch, HDFS-8704-HDFS-7285-004.patch, 
> HDFS-8704-HDFS-7285-005.patch, HDFS-8704-HDFS-7285-006.patch, 
> HDFS-8704-HDFS-7285-007.patch, HDFS-8704-HDFS-7285-008.patch
>
>
> I tested the current code on a 5-node cluster using RS(3,2).  When a datanode is 
> corrupt, the client succeeds in writing a file smaller than a block group but 
> fails to write a larger one. {{TestDFSStripedOutputStreamWithFailure}} only tests 
> files smaller than a block group; this jira will add more test situations.
> A streamer may encounter some bad datanodes when writing the blocks allocated to 
> it. When it fails to connect to a datanode or to send a packet, the streamer 
> needs to prepare for the next block. First it removes the packets of the current 
> block from its data queue. If the first packet of the next block is already in 
> the data queue, the streamer resets its state and starts to wait for the 
> next block allocated to it; otherwise it just waits for the first packet 
> of the next block. While waiting, the streamer periodically checks whether it 
> has been asked to terminate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734442#comment-14734442
 ] 

Hadoop QA commented on HDFS-9033:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 41s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m  2s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  86m 32s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 135m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.datanode.TestDataNodeFSDataSetSink |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSetTimes |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.TestIsMethodSupported |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.fs.TestHDFSFileContextMainOperations |
|   | hadoop.hdfs.TestDFSStartupVersions |
|   | hadoop.hdfs.TestParallelShortCircuitLegacyRead |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.datanode.TestSimulatedFSDataset |
|   | hadoop.hdfs.server.datanode.TestTransferRbw |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.datanode.TestDataStorage |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestSmallBlock |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.tracing.TestTracing |
|   | 

[jira] [Commented] (HDFS-8833) Erasure coding: store EC schema and cell size in INodeFile and eliminate notion of EC zones

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734360#comment-14734360
 ] 

Hadoop QA commented on HDFS-8833:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 11s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 25 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  0s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 32s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 35s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  92m 27s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 21s | Tests failed in 
hadoop-hdfs-client. |
| | | 138m 20s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSetTimes |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestIsMethodSupported |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.fs.TestHDFSFileContextMainOperations |
|   | hadoop.hdfs.TestDFSStartupVersions |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters |
|   | hadoop.hdfs.TestParallelShortCircuitLegacyRead |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.datanode.TestTransferRbw |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.namenode.TestFSImageWithAcl |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | 

[jira] [Commented] (HDFS-8704) Erasure Coding: client fails to write large file when one datanode fails

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734472#comment-14734472
 ] 

Hadoop QA commented on HDFS-8704:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 54s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 14s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 33s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 39s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  18m 16s | Tests failed in hadoop-hdfs. |
| | |  61m 16s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestFileCreationEmpty |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.server.namenode.TestStorageRestore |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSetTimes |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.namenode.TestFSPermissionChecker |
|   | hadoop.hdfs.TestDFSFinalize |
|   | hadoop.hdfs.server.namenode.TestSecureNameNode |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestFsShellPermission |
|   | hadoop.hdfs.server.namenode.TestMalformedURLs |
|   | 

[jira] [Updated] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9033:
---
Status: Patch Available  (was: Open)

> In a metasave file, "NaN" is getting printed for cacheused%
> ---
>
> Key: HDFS-9033
> URL: https://issues.apache.org/jira/browse/HDFS-9033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9033.patch
>
>
> In a metasave file, "NaN" is getting printed for cacheused% --
> For metasave file --
> hdfs dfsadmin -metasave fnew
> vi fnew
> Metasave: Number of datanodes: 3
> DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 
> 185318637568(172.59 GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 
> 07 17:22:42
> In DN report, Cache is  -
> hdfs dfsadmin -report
> Decommission Status : Normal
> Configured Capacity: 211378954240 (196.86 GB)
> DFS Used: 3121152 (2.98 MB)
> Non DFS Used: 16376107008 (15.25 GB)
> DFS Remaining: 194999726080 (181.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 92.25%
> {color:red}
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734304#comment-14734304
 ] 

Brahma Reddy Battula commented on HDFS-9033:


{{DFSusedPercent}} is also having the same issue. Hence corrected both 
{{DFSusedPercent}} and {{cacheUsedPercent}}, and modified the 
test cases. Uploaded the patch. Kindly review, thanks.
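For illustration, the kind of guard needed is a division-by-zero check before 
computing the percentage: with zero capacity, {{used * 100.0f / capacity}} 
evaluates to NaN and is printed verbatim into the metasave file. A minimal sketch 
(the method name and the value returned for zero capacity are assumptions, not 
necessarily what the patch does):

{code}
static float getPercentUsed(long used, long capacity) {
  // 0/0 in float arithmetic yields NaN, which is what metasave printed.
  return capacity <= 0 ? 0.0f : (used * 100.0f) / capacity;
}
{code}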

> In a metasave file, "NaN" is getting printed for cacheused%
> ---
>
> Key: HDFS-9033
> URL: https://issues.apache.org/jira/browse/HDFS-9033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9033.patch
>
>
> In a metasave file, "NaN" is getting printed for cacheused% --
> For metasave file --
> hdfs dfsadmin -metasave fnew
> vi fnew
> Metasave: Number of datanodes: 3
> DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 
> 185318637568(172.59 GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 
> 07 17:22:42
> In DN report, Cache is  -
> hdfs dfsadmin -report
> Decommission Status : Normal
> Configured Capacity: 211378954240 (196.86 GB)
> DFS Used: 3121152 (2.98 MB)
> Non DFS Used: 16376107008 (15.25 GB)
> DFS Remaining: 194999726080 (181.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 92.25%
> {color:red}
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-08 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734318#comment-14734318
 ] 

Kanaka Kumar Avvaru commented on HDFS-8953:
---

Setting NullAppender in the test log4j.properties caused the test case failure, 
so the patch was changed to use an actual appender.

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to add 
> a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-09-08 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734521#comment-14734521
 ] 

Rui Li commented on HDFS-8968:
--

Failed tests don't seem related. The patch here doesn't modify any existing 
code.

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk.
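For illustration, the core of such a write-throughput measurement could be as 
small as the following sketch (hypothetical, assuming the usual 
org.apache.hadoop.fs imports; the path and sizes are placeholders, and this 
ignores the striping/coder dimensions listed above):

{code}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path path = new Path("/bench/out");      // hypothetical target path
byte[] buf = new byte[1 << 20];          // 1 MB write buffer
long bytes = 0, target = 1L << 30;       // write 1 GB in total
long start = System.nanoTime();
try (FSDataOutputStream out = fs.create(path, true)) {
  while (bytes < target) {
    out.write(buf);
    bytes += buf.length;
  }
}
double secs = (System.nanoTime() - start) / 1e9;
System.out.printf("write throughput: %.1f MB/s%n", bytes / 1e6 / secs);
{code}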



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734715#comment-14734715
 ] 

Vinayakumar B commented on HDFS-9033:
-

Changes look good.
+1, pending Jenkins.
The previous Jenkins report looks flaky; triggered one more run.

> In a metasave file, "NaN" is getting printed for cacheused%
> ---
>
> Key: HDFS-9033
> URL: https://issues.apache.org/jira/browse/HDFS-9033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9033.patch
>
>
> In a metasave file, "NaN" is getting printed for cacheused% --
> For metasave file --
> hdfs dfsadmin -metasave fnew
> vi fnew
> Metasave: Number of datanodes: 3
> DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 
> 185318637568(172.59 GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 
> 07 17:22:42
> In DN report, Cache is  -
> hdfs dfsadmin -report
> Decommission Status : Normal
> Configured Capacity: 211378954240 (196.86 GB)
> DFS Used: 3121152 (2.98 MB)
> Non DFS Used: 16376107008 (15.25 GB)
> DFS Remaining: 194999726080 (181.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 92.25%
> {color:red}
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-08 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734714#comment-14734714
 ] 

Kanaka Kumar Avvaru commented on HDFS-8953:
---

Test failures don't seem to be related to this patch's changes. The checkstyle 
issue ({{MetricsLoggerTask.java:41: First sentence should end with a period}}) 
and the whitespace issue I will fix along with review comments on the patch 
from reviewers.

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to add 
> a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-09-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734671#comment-14734671
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8287:
---

{code}
  if (bufReady == null) {
try {
  wait();
} catch(InterruptedException ie) {
  throw DFSUtil.toInterruptedIOException("flip interrupted.", ie);
}
  }
{code}
It is better to replace "if (bufReady == null)" above with "while (bufReady == 
null)" although using "if" is correct in this case.


> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue to write data until it finishes.
> We should instead allow the user client to continue writing, rather than 
> blocking it while parity is written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734639#comment-14734639
 ] 

Hadoop QA commented on HDFS-8953:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  22m 56s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   9m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 52s | The applied patch generated  3 
new checkstyle issues (total was 719, now 717). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 54s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 188m 45s | Tests failed in hadoop-hdfs. |
| | | 264m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754584/HDFS-8953-02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6f72f1e |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12336/console |


This message was automatically generated.

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to add 
> a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8976) Create HTML5 cluster webconsole for federated cluster

2015-09-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8976:

Attachment: HDFS-8976-02.patch

Attached the patch with the client-side implementation of fetching cluster 
stats.

This is still a WIP patch, not ready for final commit.

Feedback would be really appreciated.

> Create HTML5 cluster webconsole for federated cluster
> -
>
> Key: HDFS-8976
> URL: https://issues.apache.org/jira/browse/HDFS-8976
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8976-01.patch, HDFS-8976-02.patch, 
> cluster-health.JPG
>
>
> Since the old jsp variant of the cluster web console is no longer present from 
> 2.7 onwards, there is a need for an HTML5 web console giving an overview of the 
> overall cluster.
> The 2.7.1 docs say to check the web console as below: {noformat}Similar to the 
> Namenode status web page, when using federation a Cluster Web Console is 
> available to monitor the federated cluster at 
> http:///dfsclusterhealth.jsp. Any Namenode in the cluster 
> can be used to access this web page.{noformat}
> But this is no longer present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-09-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734591#comment-14734591
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8287:
---

> ... we needed to copy current cell buffer for ParityGenerateTask. If there 
> are any better way to avoid double using of cell buffer rather than copying, 
> please let me know that. ...

(I somehow overlooked your comment earlier.)  We could avoid copying buffers 
using wait-notify; see below:

{code}
synchronized CellBuffers flip() throws InterruptedIOException { // renamed from setReadyToCurrent
  if (bufReady == null) {
try {
  wait();
} catch(InterruptedException ie) {
  throw DFSUtil.toInterruptedIOException("flip interrupted.", ie);
}
  }
  CellBuffers tmp = bufCurrent;
  bufCurrent = bufReady;
  bufCurrent.clear();
  bufReady = null;
  return tmp;
}

synchronized void releaseBuf(CellBuffers buf) {
  bufReady = buf;
  notifyAll();
}

  // we also need to synchronize, check for null, and wait-notify in the other methods
{code}

{code}
  void writeParityCells() throws IOException {
final CellBuffers cb = doubleCellBuffer.flip();
//encode the data cells
final ByteBuffer[] buffers = cb.getBuffers();
final byte[][] checkSumArrays = 
doubleCellBuffer.getCheckSumArrays().clone();
// Create parity packets asynchronously.
completionService.submit(new Callable<Void>() {
  @Override
  public Void call() throws Exception {
encode(encoder, numDataBlocks, buffers);
for (int i = numDataBlocks; i < numAllBlocks; i++) {
  try {
writeParity(i, buffers[i], checkSumArrays[i]);
  } catch (IOException e) {
LOG.warn("Caught exception ", e);
  }
}
return null;
  }
});
doubleCellBuffer.releaseBuf(cb);
  }
{code}


> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue to write data until it finishes.
> We should instead allow the user client to continue writing, rather than 
> blocking it while parity is written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8976) Create HTML5 cluster webconsole for federated cluster

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734614#comment-14734614
 ] 

Hadoop QA commented on HDFS-8976:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   2m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 20s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754614/HDFS-8976-02.patch |
| Optional Tests | site |
| git revision | trunk / 435f935 |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12338/console |


This message was automatically generated.

> Create HTML5 cluster webconsole for federated cluster
> -
>
> Key: HDFS-8976
> URL: https://issues.apache.org/jira/browse/HDFS-8976
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8976-01.patch, HDFS-8976-02.patch, 
> cluster-health.JPG
>
>
> Since the old jsp variant of the cluster web console is no longer present from 
> 2.7 onwards, there is a need for an HTML5 web console giving an overview of the 
> overall cluster.
> The 2.7.1 docs say to check the web console as below: {noformat}Similar to the 
> Namenode status web page, when using federation a Cluster Web Console is 
> available to monitor the federated cluster at 
> http:///dfsclusterhealth.jsp. Any Namenode in the cluster 
> can be used to access this web page.{noformat}
> But this is no longer present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8833) Erasure coding: store EC schema and cell size in INodeFile and eliminate notion of EC zones

2015-09-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734538#comment-14734538
 ] 

Kai Zheng commented on HDFS-8833:
-

Hi [~zhz],

Thanks for your update. We experimentally tried the large patch on a cluster 
and it works fine. 

One observation: it looks like we support setting an EC policy on the file system 
root '/'. In that case, how can the policy be unset, or how can some files in the 
same cluster still be stored with replication? If that is not possible, the 
setting may affect too much.

A quick check of the code turned up some minor leftovers that had better be 
fixed. Thanks.
{noformat}
grep -i zone HDFS-8833-HDFS-7285.07.patch |grep '^+'
+  fail("Erasure coding zone on non-empty dir");
+  assertExceptionContains("erasure coding zone for a non-empty directory", 
e);
+.setErasureCodingPolicy("/eczone", null);
+  dfs.setErasureCodingPolicy(zone, null);
{noformat}

> Erasure coding: store EC schema and cell size in INodeFile and eliminate 
> notion of EC zones
> ---
>
> Key: HDFS-8833
> URL: https://issues.apache.org/jira/browse/HDFS-8833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8833-HDFS-7285-merge.00.patch, 
> HDFS-8833-HDFS-7285-merge.01.patch, HDFS-8833-HDFS-7285.02.patch, 
> HDFS-8833-HDFS-7285.03.patch, HDFS-8833-HDFS-7285.04.patch, 
> HDFS-8833-HDFS-7285.05.patch, HDFS-8833-HDFS-7285.06.patch, 
> HDFS-8833-HDFS-7285.07.patch
>
>
> We have [discussed | 
> https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14357754=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14357754]
>  storing EC schema with files instead of EC zones and recently revisited the 
> discussion under HDFS-8059.
> As a recap, the _zone_ concept has severe limitations, including around renaming 
> and nested configuration. Those limitations are justified in encryption for 
> security reasons, but it doesn't make sense to carry them over to EC.
> This JIRA aims to store EC schema and cell size on {{INodeFile}} level. For 
> simplicity, we should first implement it as an xattr and consider memory 
> optimizations (such as moving it to file header) as a follow-on. We should 
> also disable changing EC policy on a non-empty file / dir in the first phase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8929) Add a metric to expose the timestamp of the last journal

2015-09-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734747#comment-14734747
 ] 

Vinayakumar B commented on HDFS-8929:
-

Changes look great.

Just one more improvement in the test:

The timestamp update can be verified after sending each edit in 
TestJournalNode#testJournal().

+1 once addressed.
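As background, the metric under discussion boils down to recording a wall-clock 
timestamp on every successful journal write and exposing it read-only; a 
monitoring system can then alert on any JN whose value lags by a fixed period. A 
minimal sketch, independent of Hadoop's metrics wiring (all names here are 
hypothetical, not the actual patch):

{code}
// Hypothetical sketch: track the time of the last successful journal write.
private volatile long lastJournalTimestamp = 0;

void onJournalSuccess() {
  // called after each successful journal() operation
  lastJournalTimestamp = System.currentTimeMillis();
}

long getLastJournalTimestamp() {
  return lastJournalTimestamp;
}
{code}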

> Add a metric to expose the timestamp of the last journal
> 
>
> Key: HDFS-8929
> URL: https://issues.apache.org/jira/browse/HDFS-8929
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: journal-node
>Reporter: Akira AJISAKA
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8929-001.patch, HDFS-8929-002.patch, 
> HDFS-8929-003.patch
>
>
> If there are three JNs and only one JN is failing to journal, we can detect 
> it by monitoring the difference of the last written transaction id among JNs 
> from NN WebUI or JN metrics. However, it's difficult to define the threshold 
> to alert because the increase rate of the number of transaction depends on 
> how busy the cluster is. Therefore I'd like to propose a metric to expose the 
> timestamp of the last journal. That way we can easily alert if a JN is 
> failing to journal for some fixed period.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2015-09-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735356#comment-14735356
 ] 

Colin Patrick McCabe commented on HDFS-9005:


Hi [~mingma],

Why not use an XML file to set the upgrade domain for each DataNode instead of 
a script?  Spawning scripts has been awkward in the past since it's slow and 
they can fail.

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is to 
> allow admins to specify an upgrade domain script that takes a DN ip or hostname 
> as input and returns the upgrade domain. Then the namenode will use it at run 
> time to set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like topology script, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Status: Patch Available  (was: Open)

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
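For illustration, the interim deprecation step described above could look like 
the following sketch (assumed shape, not necessarily the actual patch):

{code}
// Keep the old constant temporarily, deprecated and delegating to the
// config key's default, until all callers are migrated and it is removed.
@Deprecated
public static final int DEFAULT_PORT =
    HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;
{code}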



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Status: Patch Available  (was: Open)

> Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
> ---
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate the pre-built block storage policy array in order to 
> find the _same_ policy every time, although only its id matters (we need to 
> compare the file status' policy id with the lazy persist policy id).
> 2. The {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to the {{hadoop-hdfs-client}} module, while the latter can stay 
> in the {{hadoop-hdfs}} module.
> Actually, we already have the block storage policy IDs, which can be used to 
> compare with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
>   return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstants}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8860:

Attachment: HDFS-8860.0.patch

This patch removes all 'unlinkBlock()' code, since it is essentially disabled 
by {{ReplicaInfo#unlinked}} always being set to true.

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9019) Adding informative message to sticky bit permission denied exception

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735430#comment-14735430
 ] 

Hudson commented on HDFS-9019:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1094 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1094/])
HDFS-9019. Adding informative message to sticky bit permission denied 
exception. Contributed by Xiaoyu Yao. (xyao: rev 
970daaa5e44d3c09afd46d1c8e923a5096708c44)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Adding informative message to sticky bit permission denied exception
> 
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2015-09-08 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735500#comment-14735500
 ] 

Ming Ma commented on HDFS-9005:
---

Thanks [~cmccabe]. We could use XML or another file format. The script approach 
allows admins to regularly add machines to the cluster without any additional 
operation to update the NN with the upgrade domain of these new machines. If we 
use the XML approach, we will need additional steps to keep the NN up to date, 
such as: a) use the script to generate an updated XML file for the new machines; 
b) have the NN reload the XML file before it reloads the dfs.hosts file. Sure, 
these steps can be automated; it is just somewhat easier for admins to use the 
script.

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is to 
> allow admins to specify an upgrade domain script that takes a DN ip or hostname 
> as input and returns the upgrade domain. Then the namenode will use it at run 
> time to set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like topology script, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9029) libwebhdfs is not in the mvn package and likely missing from all distributions

2015-09-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9029.

Resolution: Duplicate

> libwebhdfs is not in the mvn package and likely missing from all distributions
> --
>
> Key: HDFS-9029
> URL: https://issues.apache.org/jira/browse/HDFS-9029
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> libwebhdfs is not in the tar.gz generated by maven.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8860:

Affects Version/s: 2.8.0
   3.0.0
 Target Version/s: 3.0.0, 2.8.0  (was: 3.0.0)
   Status: Patch Available  (was: Open)

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2015-09-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735519#comment-14735519
 ] 

Colin Patrick McCabe commented on HDFS-9005:


With the script approach, how does the NameNode know when the upgrade domain of 
a DataNode has changed?  Does it poll the script periodically?

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is to 
> allow admins to specify an upgrade domain script that takes a DN ip or hostname 
> as input and returns the upgrade domain. Then the namenode will use it at run 
> time to set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like topology script, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Summary: Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method  (was: 
Refactor o.a.h.hdfs.DataStreamer::isLazyPersist() method)

> Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method
> ---
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate the pre-built block storage policy array in order to 
> find the _same_ policy every time, although only its id matters (we need to 
> compare the file status' policy id with the lazy persist policy id).
> 2. The {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to the {{hadoop-hdfs-client}} module, while the latter can stay 
> in the {{hadoop-hdfs}} module.
> Actually, we already have the block storage policy IDs, which can be used to 
> compare with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
>   return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstants}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer::isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Summary: Refactor o.a.h.hdfs.DataStreamer::isLazyPersist() method  (was: 
Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method)

> Refactor o.a.h.hdfs.DataStreamer::isLazyPersist() method
> 
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate the pre-built block storage policy array in order to 
> find the _same_ policy every time, even though only its id matters (as we need to 
> compare the file status policy id with the lazy persist policy id)
> 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to {{hadoop-hdfs-client}} module, while the latter can stay 
> in {{hadoop-hdfs}} module.
> Actually, we have the block storage policy IDs, which can be used to compare 
> with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
> return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Status: Open  (was: Patch Available)

> Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
> ---
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate the pre-built block storage policy array in order to 
> find the _same_ policy every time, even though only its id matters (as we need to 
> compare the file status policy id with the lazy persist policy id)
> 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to {{hadoop-hdfs-client}} module, while the latter can stay 
> in {{hadoop-hdfs}} module.
> Actually, we have the block storage policy IDs, which can be used to compare 
> with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
> return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Status: Open  (was: Patch Available)

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} module. 
> For example, they're used by the {{DFSClient}} and {{NameNodeProxies}} classes, 
> which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of  {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these 
> methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Status: Patch Available  (was: Open)

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} module. 
> For example, they're used by the {{DFSClient}} and {{NameNodeProxies}} classes, 
> which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of  {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these 
> methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Status: Open  (was: Patch Available)

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8860:

Attachment: HDFS-8860.0.patch

Re-attaching the patch with a small typo fix.

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8860:

Attachment: (was: HDFS-8860.0.patch)

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9019) Adding informative message to sticky bit permission denied exception

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735507#comment-14735507
 ] 

Hudson commented on HDFS-9019:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2306 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2306/])
HDFS-9019. Adding informative message to sticky bit permission denied 
exception. Contributed by Xiaoyu Yao. (xyao: rev 
970daaa5e44d3c09afd46d1c8e923a5096708c44)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java


> Adding informative message to sticky bit permission denied exception
> 
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.
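
For illustration only (this is not the committed patch), a denied-permission message that also names the parent directory and the owners involved could look like the sketch below; the variable names are assumptions about what the checker has in scope:
{code}
// Hypothetical sketch of a more informative sticky-bit error message.
// 'user', 'inode', and 'parent' stand in for whatever the checker holds.
throw new AccessControlException(String.format(
    "Permission denied by sticky bit: user=%s, path=\"%s\" (owner=%s), " +
    "parent=\"%s\" (owner=%s)",
    user, inode.getFullPathName(), inode.getUserName(),
    parent.getFullPathName(), parent.getUserName()));
{code}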



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-09-08 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HDFS-8716:
---
Attachment: HDFS-8716.8.patch

> introduce a new config specifically for safe mode block count
> -
>
> Key: HDFS-8716
> URL: https://issues.apache.org/jira/browse/HDFS-8716
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
> HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch, HDFS-8716.7.patch, 
> HDFS-8716.7.patch, HDFS-8716.8.patch
>
>
> During startup, the namenode waits for n replicas of each block to be 
> reported by datanodes before exiting safe mode. Currently n is tied to 
> the min replicas config. We could set min replicas to more than one, but we 
> might want to exit safe mode as soon as each block has one replica reported. 
> This can be worked out by introducing a new config variable for the safe mode 
> block count.
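
For illustration, such a config could look like the following; the key name here is a placeholder, not necessarily what the patch introduces:
{noformat}
<property>
  <name>dfs.namenode.safemode.replication.min</name>
  <value>1</value>
  <description>Minimum number of reported replicas per block required for
  the block to count towards the safe mode threshold.</description>
</property>
{noformat}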



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9030) libwebhdfs lacks headers, documentation; not part of mvn package

2015-09-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735528#comment-14735528
 ] 

Allen Wittenauer commented on HDFS-9030:


Merging HDFS-9029 into this.

Digging into this today, I realize what an INCREDIBLY short-sighted thing we've 
done.

Basically, libhdfs and libwebhdfs are interchangeable at link time since they 
expose the exact same APIs.  On the surface, this appears like a great, 
convenient feature:

* hdfs.h is the same header for both libraries
* API calls are the same so documentation is the same.

However:
* Users CANNOT (effectively) link both libraries in the same program

So your program either does HDFS RPC or WebHDFS REST.  It cannot do both.  
Worse, neither library actually exposes all of the APIs that have been declared 
public.  (I'm looking at you, snapshots).

It might be a blessing in disguise that this hasn't been exposed very widely 
due to libwebhdfs not actually showing up in the tarballs.

> libwebhdfs lacks headers, documentation; not part of mvn package
> 
>
> Key: HDFS-9030
> URL: https://issues.apache.org/jira/browse/HDFS-9030
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> This library is useless without header files to include and documentation on 
> how to use it.  Both appear to be missing from the mvn package and site 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-09-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735487#comment-14735487
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8287:
---

Some bugs in my previously posted code:
- releaseBuf should be inside call().
- checkSumArrays should be assigned from cb.checksumArrays.
{code}
  void writeParityCells() throws IOException {
final CellBuffers cb = doubleCellBuffer.flip();
// Create parity packets asynchronously.
completionService.submit(new Callable<Void>() {
  @Override
  public Void call() throws Exception {
try {
  final ByteBuffer[] buffers = cb.getBuffers();
  final byte[][] checkSumArrays = cb.checksumArrays;
  encode(encoder, numDataBlocks, buffers);
  for (int i = numDataBlocks; i < numAllBlocks; i++) {
writeParity(i, buffers[i], checkSumArrays[i]);
  }
} finally {
  doubleCellBuffer.releaseBuf(cb);
}
return null;
  }
});
  }
{code}


> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket so that the user client cannot 
> continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while the parity is being written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9030) libwebhdfs lacks headers, documentation; not part of mvn package

2015-09-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-9030:
---
Summary: libwebhdfs lacks headers, documentation; not part of mvn package  
(was: libwebhdfs lacks headers and documentation)

> libwebhdfs lacks headers, documentation; not part of mvn package
> 
>
> Key: HDFS-9030
> URL: https://issues.apache.org/jira/browse/HDFS-9030
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> This library is useless without header files to include and documentation on 
> how to use it.  Both appear to be missing from the mvn package and site 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-09-08 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735514#comment-14735514
 ] 

Chang Li commented on HDFS-8716:


Thanks [~kihwal] for the review! Updated and submitted the .8 patch. It should 
solve the patch apply issue.

> introduce a new config specifically for safe mode block count
> -
>
> Key: HDFS-8716
> URL: https://issues.apache.org/jira/browse/HDFS-8716
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
> HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch, HDFS-8716.7.patch, 
> HDFS-8716.7.patch, HDFS-8716.8.patch
>
>
> During startup, the namenode waits for n replicas of each block to be 
> reported by datanodes before exiting safe mode. Currently n is tied to 
> the min replicas config. We could set min replicas to more than one, but we 
> might want to exit safe mode as soon as each block has one replica reported. 
> This can be worked out by introducing a new config variable for the safe mode 
> block count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9008) Balancer#Parameters class could use a builder pattern

2015-09-08 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated HDFS-9008:
---
Attachment: HDFS-9008-trunk-v3.patch

Thanks [~szetszwo]. Attached is a V3 that keeps everything final. It also fixes 
the checkstyle warning.

> Balancer#Parameters class could use a builder pattern
> -
>
> Key: HDFS-9008
> URL: https://issues.apache.org/jira/browse/HDFS-9008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: HDFS-9008-trunk-v1.patch, HDFS-9008-trunk-v2.patch, 
> HDFS-9008-trunk-v3.patch
>
>
> The Balancer#Parameters class is violating a few checkstyle rules.
> # Instance variables are not privately scoped and do not have accessor 
> methods.
> # The Balancer#Parameter constructor has too many arguments (according to 
> checkstyle).
> Changing this class to use the builder pattern could fix both of these style 
> issues.
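
A minimal sketch of what the builder could look like; the field names are illustrative and not taken from the actual patch:
{code}
// Hypothetical builder sketch for Balancer.Parameters.
static class Parameters {
  private final double threshold;        // kept final, set only via the builder
  private final int maxIdleIteration;

  private Parameters(Builder b) {
    this.threshold = b.threshold;
    this.maxIdleIteration = b.maxIdleIteration;
  }

  double getThreshold() { return threshold; }
  int getMaxIdleIteration() { return maxIdleIteration; }

  static class Builder {
    private double threshold = 10.0;     // illustrative defaults
    private int maxIdleIteration = 5;

    Builder setThreshold(double t) { threshold = t; return this; }
    Builder setMaxIdleIteration(int n) { maxIdleIteration = n; return this; }
    Parameters build() { return new Parameters(this); }
  }
}
{code}
This keeps every field final and privately scoped with accessors, while callers set only the parameters they care about instead of threading many positional constructor arguments.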



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-09-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735458#comment-14735458
 ] 

Kihwal Lee commented on HDFS-8716:
--

It looks okay. Kicking the precommit build to make sure the patch didn't go 
stale.

> introduce a new config specifically for safe mode block count
> -
>
> Key: HDFS-8716
> URL: https://issues.apache.org/jira/browse/HDFS-8716
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
> HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch, HDFS-8716.7.patch, 
> HDFS-8716.7.patch
>
>
> During startup, the namenode waits for n replicas of each block to be 
> reported by datanodes before exiting safe mode. Currently n is tied to 
> the min replicas config. We could set min replicas to more than one, but we 
> might want to exit safe mode as soon as each block has one replica reported. 
> This can be worked out by introducing a new config variable for the safe mode 
> block count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735460#comment-14735460
 ] 

Hadoop QA commented on HDFS-8716:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745376/HDFS-8716.7.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 970daaa |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12344/console |


This message was automatically generated.

> introduce a new config specifically for safe mode block count
> -
>
> Key: HDFS-8716
> URL: https://issues.apache.org/jira/browse/HDFS-8716
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
> HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch, HDFS-8716.7.patch, 
> HDFS-8716.7.patch
>
>
> During startup, the namenode waits for n replicas of each block to be 
> reported by datanodes before exiting safe mode. Currently n is tied to 
> the min replicas config. We could set min replicas to more than one, but we 
> might want to exit safe mode as soon as each block has one replica reported. 
> This can be worked out by introducing a new config variable for the safe mode 
> block count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9019) Adding informative message to sticky bit permission denied exception

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735459#comment-14735459
 ] 

Hudson commented on HDFS-9019:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #356 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/356/])
HDFS-9019. Adding informative message to sticky bit permission denied 
exception. Contributed by Xiaoyu Yao. (xyao: rev 
970daaa5e44d3c09afd46d1c8e923a5096708c44)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java


> Adding informative message to sticky bit permission denied exception
> 
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9030) libwebhdfs lacks headers, documentation; not part of mvn package

2015-09-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735575#comment-14735575
 ] 

Allen Wittenauer commented on HDFS-9030:


I'm leaning towards:

* Making the hdfsX calls into webhdfsX calls
* Adding the missing routines into webhdfs
* Adding a libwebhdfs2hdfs library that does the interposing for those that 
still need it.

(Although for my current project, I'll just go use something else, because this 
is just a mess.)

> libwebhdfs lacks headers, documentation; not part of mvn package
> 
>
> Key: HDFS-9030
> URL: https://issues.apache.org/jira/browse/HDFS-9030
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> This library is useless without header files to include and documentation on 
> how to use it.  Both appear to be missing from the mvn package and site 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8929) Add a metric to expose the timestamp of the last journal

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735680#comment-14735680
 ] 

Hadoop QA commented on HDFS-8929:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  22m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 19s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 58s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 162m 36s | Tests failed in hadoop-hdfs. |
| | | 237m 49s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754679/HDFS-8929-004.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / 970daaa |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12342/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12342/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12342/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12342/console |


This message was automatically generated.

> Add a metric to expose the timestamp of the last journal
> 
>
> Key: HDFS-8929
> URL: https://issues.apache.org/jira/browse/HDFS-8929
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: journal-node
>Reporter: Akira AJISAKA
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8929-001.patch, HDFS-8929-002.patch, 
> HDFS-8929-003.patch, HDFS-8929-004.patch
>
>
> If there are three JNs and only one JN is failing to journal, we can detect 
> it by monitoring the difference in the last written transaction id among JNs 
> from the NN WebUI or JN metrics. However, it's difficult to define the threshold 
> to alert on, because the rate of increase in the number of transactions depends on 
> how busy the cluster is. Therefore I'd like to propose a metric to expose the 
> timestamp of the last journal. That way we can easily alert if a JN has been 
> failing to journal for some fixed period.
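
A minimal sketch of such a metric using the metrics2 annotations; the field and method names are assumptions, not the actual patch:
{code}
// Hypothetical sketch: expose the wall-clock time of the last successful journal write.
@Metric("Timestamp of the last successfully written journal")
public long getLastJournalTimestamp() {
  // assumed to be updated with Time.now() after each successful journal() call
  return lastJournalTimestamp;
}
{code}
A monitoring system can then alert whenever now() minus this value exceeds a fixed period, independent of how busy the cluster is.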



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9037) Trash messages should be handled by Logger instead of being delivered on System.out

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9037:

Attachment: HDFS-9037.000.patch

> Trash messages should be handled by Logger instead of being delivered on 
> System.out 
> 
>
> Key: HDFS-9037
> URL: https://issues.apache.org/jira/browse/HDFS-9037
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.6.0
>Reporter: Ashutosh Chauhan
>Assignee: Mingliang Liu
> Attachments: HDFS-9037.000.patch
>
>
> Specifically,
> {code}
>   if (success) {
>   System.out.println("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}
> should be:
> {code}
>   if (success) {
>   LOG.info("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9019) Adding informative message to sticky bit permission denied exception

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735568#comment-14735568
 ] 

Hudson commented on HDFS-9019:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #344 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/344/])
HDFS-9019. Adding informative message to sticky bit permission denied 
exception. Contributed by Xiaoyu Yao. (xyao: rev 
970daaa5e44d3c09afd46d1c8e923a5096708c44)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Adding informative message to sticky bit permission denied exception
> 
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9037) Trash messages should be handled by Logger instead of being delivered on System.out

2015-09-08 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HDFS-9037:
--

 Summary: Trash messages should be handled by Logger instead of 
being delivered on System.out 
 Key: HDFS-9037
 URL: https://issues.apache.org/jira/browse/HDFS-9037
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: logging
Affects Versions: 2.6.0
Reporter: Ashutosh Chauhan


Specifically,
{code}
if (success) {
  System.out.println("Moved: '" + p + "' to trash at: " +
      trash.getCurrentTrashDir());
}
{code}

should be:
{code}
if (success) {
  LOG.info("Moved: '" + p + "' to trash at: " +
      trash.getCurrentTrashDir());
}
{code}
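
If the logger in question is SLF4J-based, a parameterized message is the more idiomatic form, since the string is only assembled when INFO is enabled — a minimal sketch, assuming an SLF4J {{Logger}}:
{code}
if (success) {
  // Parameterized form: no concatenation cost when INFO logging is disabled.
  LOG.info("Moved: '{}' to trash at: {}", p, trash.getCurrentTrashDir());
}
{code}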



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9037) Trash messages should be handled by Logger instead of being delivered on System.out

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-9037:
---

Assignee: Mingliang Liu

> Trash messages should be handled by Logger instead of being delivered on 
> System.out 
> 
>
> Key: HDFS-9037
> URL: https://issues.apache.org/jira/browse/HDFS-9037
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.6.0
>Reporter: Ashutosh Chauhan
>Assignee: Mingliang Liu
>
> Specifically,
> {code}
>   if (success) {
>   System.out.println("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}
> should be:
> {code}
>   if (success) {
>   LOG.info("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9038) Reserved space is erroneously counted towards non-DFS used.

2015-09-08 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-9038:
---

 Summary: Reserved space is erroneously counted towards non-DFS 
used.
 Key: HDFS-9038
 URL: https://issues.apache.org/jira/browse/HDFS-9038
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Chris Nauroth


HDFS-5215 changed the DataNode volume available space calculation to consider 
the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
property.  As a side effect, reserved space is now counted towards non-DFS 
used.  I don't believe it was intentional to change the definition of non-DFS 
used.  This issue proposes restoring the prior behavior: do not count reserved 
space towards non-DFS used.
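
For intuition, non-DFS used is derived as capacity minus DFS used minus remaining, so once remaining is reduced by the reserve, the reserved bytes surface in that difference. A sketch of the arithmetic with made-up numbers (illustrative only, not the DataNode's actual code):
{code}
// Illustrative arithmetic only.
long capacity = 100L << 30;  // 100 GB volume
long dfsUsed  =  10L << 30;  //  10 GB of HDFS blocks
long diskFree =  60L << 30;  //  60 GB actually free on disk
long reserved =  20L << 30;  //  dfs.datanode.du.reserved

long remaining  = Math.min(capacity - dfsUsed, diskFree - reserved); // 40 GB after HDFS-5215
long nonDfsUsed = capacity - dfsUsed - remaining;                    // reported as 50 GB,
// although only 30 GB (capacity - dfsUsed - diskFree) is really non-DFS data;
// the 20 GB reserve is erroneously counted in.
{code}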



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9019) Adding informative message to sticky bit permission denied exception

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735605#comment-14735605
 ] 

Hudson commented on HDFS-9019:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2283/])
HDFS-9019. Adding informative message to sticky bit permission denied 
exception. Contributed by Xiaoyu Yao. (xyao: rev 
970daaa5e44d3c09afd46d1c8e923a5096708c44)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Adding informative message to sticky bit permission denied exception
> 
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9037) Trash messages should be handled by Logger instead of being delivered on System.out

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9037:

Description: 
Specifically,
{code}
  if (success) {
  System.out.println("Moved: '" + p + "' to trash at: " +
  trash.getCurrentTrashDir() );
}
{code}

should be:
{code}
  if (success) {
  LOG.info("Moved: '" + p + "' to trash at: " + trash.getCurrentTrashDir());
}
{code}

  was:
Specifically,
{code}
  if (success) {
  System.out.println("Moved: '" + p + "' to trash at: " +
  trash.getCurrentTrashDir() );
}
{code}

should be:
{code}
  if (success) {
  LOG.info("Moved: '" + p + "' to trash at: " +
  trash.getCurrentTrashDir() );
}
{code}


> Trash messages should be handled by Logger instead of being delivered on 
> System.out 
> 
>
> Key: HDFS-9037
> URL: https://issues.apache.org/jira/browse/HDFS-9037
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.6.0
>Reporter: Ashutosh Chauhan
>Assignee: Mingliang Liu
> Attachments: HDFS-9037.000.patch
>
>
> Specifically,
> {code}
>   if (success) {
>   System.out.println("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}
> should be:
> {code}
>   if (success) {
>   LOG.info("Moved: '" + p + "' to trash at: " + 
> trash.getCurrentTrashDir());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6763) Initialize file system-wide quota once on transitioning to active

2015-09-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6763:
-
Attachment: HDFS-6763.v3.patch

Removed more imports to fix the checkstyle warnings. The previously failed unit tests all pass.

> Initialize file system-wide quota once on transitioning to active
> -
>
> Key: HDFS-6763
> URL: https://issues.apache.org/jira/browse/HDFS-6763
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6763.patch, HDFS-6763.v2.patch, HDFS-6763.v3.patch
>
>
> {{FSImage#loadEdits}} calls {{updateCountForQuota}} to recalculate & verify 
> quotas for the entire namespace.  A standby NN using shared edits calls this 
> method every minute.  The standby may appear to "hang" for many seconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-09-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735748#comment-14735748
 ] 

Chris Nauroth commented on HDFS-5215:
-

I filed HDFS-9038 to propose restoring the prior definition of non-DFS used.

> dfs.datanode.du.reserved is not considered while computing available space
> --
>
> Key: HDFS-5215
> URL: https://issues.apache.org/jira/browse/HDFS-5215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.7.1
>
> Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
> HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch
>
>
> {code}
> public long getAvailable() throws IOException {
>   long remaining = getCapacity() - getDfsUsed();
>   long available = usage.getAvailable();
>   if (remaining > available) {
>     remaining = available;
>   }
>   return (remaining > 0) ? remaining : 0;
> }
> {code}
> Here we are not considering the reserved space while computing the available 
> space.
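
For context, a reserved-aware calculation could subtract the configured reserve from the free space, roughly as sketched below; this is an illustration of the idea, not necessarily the exact committed patch (the {{reserved}} field stands in for the parsed {{dfs.datanode.du.reserved}} value):
{code}
public long getAvailable() throws IOException {
  long remaining = getCapacity() - getDfsUsed() - reserved; // honor du.reserved
  long available = usage.getAvailable() - reserved;         // free disk minus the reserve
  if (remaining > available) {
    remaining = available;
  }
  return (remaining > 0) ? remaining : 0;
}
{code}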



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9038) Reserved space is erroneously counted towards non-DFS used.

2015-09-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735746#comment-14735746
 ] 

Chris Nauroth commented on HDFS-9038:
-

I'd appreciate if some of the participants on HDFS-5215 could chime in with 
their opinions.  cc [~brahmareddy], [~umamaheswararao], [~yzhangal] and 
[~kihwal].  Thank you.

> Reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6763) Initialize file system-wide quota once on transitioning to active

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735634#comment-14735634
 ] 

Hadoop QA commented on HDFS-6763:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 16s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 15s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  6 
new checkstyle issues (total was 359, now 364). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 10s | Tests failed in hadoop-hdfs. |
| | | 209m 31s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754680/HDFS-6763.v2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 970daaa |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12341/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12341/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12341/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12341/console |


This message was automatically generated.

> Initialize file system-wide quota once on transitioning to active
> -
>
> Key: HDFS-6763
> URL: https://issues.apache.org/jira/browse/HDFS-6763
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6763.patch, HDFS-6763.v2.patch
>
>
> {{FSImage#loadEdits}} calls {{updateCountForQuota}} to recalculate & verify 
> quotas for the entire namespace.  A standby NN using shared edits calls this 
> method every minute.  The standby may appear to "hang" for many seconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735741#comment-14735741
 ] 

Hadoop QA commented on HDFS-8860:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  7 
new checkstyle issues (total was 159, now 162). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 28s | Tests failed in hadoop-hdfs. |
| | | 207m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754702/HDFS-8860.0.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 970daaa |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12343/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12343/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12343/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12343/console |


This message was automatically generated.

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9036) In BlockPlacementPolicyWithNodeGroup#chooseLocalStorage, a random node is selected even though fallbackToLocalRack is true.

2015-09-08 Thread J.Andreina (JIRA)
J.Andreina created HDFS-9036:


 Summary: In BlockPlacementPolicyWithNodeGroup#chooseLocalStorage, a 
random node is selected even though fallbackToLocalRack is true.
 Key: HDFS-9036
 URL: https://issues.apache.org/jira/browse/HDFS-9036
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina


For example, in the cluster map below:

Writer = "/d2/r4/n8", which does not have a local node.
Available nodes = {"/d2/r4/n7","/d1/r1/n1","/d1/r2/n3","/d2/r3/n6"}

The current hierarchy in choosing a node for the first local storage replica is:
1. Choose the local machine (not available)
2. Choose a local node group machine (not available)
3. Choose random

*But instead of choosing a random node, it should choose a local-rack node first (if 
fallbackToLocalRack is true; in this example it is "/d2/r4/n7"), and only fall back to a 
random node otherwise.*
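
A minimal sketch of the proposed fallback order; this is illustrative pseudocode with hypothetical helper names, not the actual BlockPlacementPolicyWithNodeGroup code:
{code}
// Illustrative only: desired order when picking the first replica's storage.
DatanodeStorageInfo chooseFirstStorage(Node writer, boolean fallbackToLocalRack) {
  DatanodeStorageInfo s = chooseLocalMachine(writer);       // 1. local machine
  if (s == null) s = chooseLocalNodeGroup(writer);          // 2. local node group
  if (s == null && fallbackToLocalRack) {
    s = chooseLocalRack(writer);                            // 3. local rack, e.g. /d2/r4/n7
  }
  if (s == null) s = chooseRandom();                        // 4. random node, last resort
  return s;
}
{code}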



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8698) Add "-direct" flag option for fs copy so that user can choose not to create "._COPYING_" file

2015-09-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734786#comment-14734786
 ] 

Vinayakumar B commented on HDFS-8698:
-

+1. Changes look good.

> Add "-direct" flag option for fs copy so that user can choose not to create 
> "._COPYING_" file
> -
>
> Key: HDFS-8698
> URL: https://issues.apache.org/jira/browse/HDFS-8698
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: J.Andreina
> Attachments: HDFS-8698.1.patch, HDFS-8698.2.patch, HDFS-8698.3.patch
>
>
> The CLI uses CommandWithDestination.java, which appends "._COPYING_" to 
> the file name while it does the copy. For blobstores like S3 and Swift, 
> creating the "._COPYING_" file and renaming it is expensive. A "-direct" flag 
> would allow users to avoid the "._COPYING_" file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8860:
---
Summary: Remove unused Replica copyOnWrite code  (was: Remove Replica 
hardlink / unlink code)

> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> behavior. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7446) HDFS inotify should have the ability to determine what txid it has read up to

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7446:
--
Fix Version/s: 2.6.1

Okay, I pulled this into 2.6.1, given that the class was marked public unstable (it 
is marked so even now on branch-2!) and given your comments above.

The patch didn't apply cleanly; there were import and merge conflicts. I fixed 
them, then ran compilation and TestDFSInotifyEventInputStream before the push.

> HDFS inotify should have the ability to determine what txid it has read up to
> -
>
> Key: HDFS-7446
> URL: https://issues.apache.org/jira/browse/HDFS-7446
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-7446.001.patch, HDFS-7446.002.patch, 
> HDFS-7446.003.patch
>
>
> HDFS inotify should have the ability to determine what txid it has read up 
> to.  This will allow users who want to avoid missing any events to record 
> this txid and use it to resume reading events at the spot they left off.
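
A minimal usage sketch of recording the txid and resuming from it (imports omitted; assuming the {{HdfsAdmin}}/{{DFSInotifyEventInputStream}} API, whose details may differ by release):
{code}
HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://nn:8020"), new Configuration());
DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
EventBatch batch = stream.take();        // blocks until a batch of events is available
long lastReadTxid = batch.getTxid();     // record this txid durably
// ... later, resume from where we left off without missing events:
DFSInotifyEventInputStream resumed = admin.getInotifyEventStream(lastReadTxid);
{code}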



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7446) HDFS inotify should have the ability to determine what txid it has read up to

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7446:
--
Attachment: HDFS-7446-branch-2.6.1.txt

Attaching the patch that I committed to 2.6.1.

> HDFS inotify should have the ability to determine what txid it has read up to
> -
>
> Key: HDFS-7446
> URL: https://issues.apache.org/jira/browse/HDFS-7446
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-7446-branch-2.6.1.txt, HDFS-7446.001.patch, 
> HDFS-7446.002.patch, HDFS-7446.003.patch
>
>
> HDFS inotify should have the ability to determine what txid it has read up 
> to.  This will allow users who want to avoid missing any events to record 
> this txid and use it to resume reading events at the spot they left off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7843) A truncated file is corrupted after rollback from a rolling upgrade

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7843:
--
Labels: 2.6.1-candidate  (was: 2.6.1)

> A truncated file is corrupted after rollback from a rolling upgrade
> ---
>
> Key: HDFS-7843
> URL: https://issues.apache.org/jira/browse/HDFS-7843
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0
>
> Attachments: h7843_20150226.patch
>
>
> Here is a rolling upgrade truncate test from [~brandonli].  The basic test 
> steps are (3-node cluster with HA):
> 1. upload a file to hdfs
> 2. start rolling upgrade; finish the rolling upgrade for the namenode and one datanode. 
> 3. truncate the file in hdfs to 1 byte
> 4. do rollback
> 5. download the file from hdfs, and check that the file size is the original size
> I see the file size in hdfs is correct, but the file can't be read because the block is 
> corrupted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735775#comment-14735775
 ] 

Jing Zhao commented on HDFS-9011:
-

Thanks for the review, Nicholas and Yi!

bq. for each partial report rpc, NN calls reportDiff(..) but reportDiff(..) 
assumes full block report. 

Yeah, this is a big issue here. The current reportDiff assumes the block report 
contains all the blocks in the storage, and thus removes all the blocks after 
the delimiter block. We can record the last block of the previous block report 
for the same storage as a cookie, but we cannot guarantee that no block change 
happens between the two block report RPCs. For example, the cookie block may be 
deleted between the two reports. Thus it looks very hard to continue the 
reportDiff process across two FBR RPCs, unless we link all the blocks of each 
storage in a specific order.
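
To make the constraint concrete, here is a toy illustration (plain Java sets, 
not the actual {{BlockManager#reportDiff}} code) of why the diff only works on 
a full report: anything the NN knows about but does not see in the report is 
treated as deleted, which is wrong for a partial report.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReportDiffSketch {
  // Blocks the NN has for this storage but which are absent from the report
  // are scheduled for removal; this is valid only if the report is complete.
  static Set<Long> toRemove(Set<Long> nnBlocks, Set<Long> reported) {
    Set<Long> remove = new HashSet<>(nnBlocks);
    remove.removeAll(reported);
    return remove;
  }

  public static void main(String[] args) {
    Set<Long> nn = new HashSet<>(Arrays.asList(1L, 2L, 3L, 4L));
    Set<Long> firstRpcOnly = new HashSet<>(Arrays.asList(1L, 2L));
    // Prints [3, 4]: live blocks wrongly marked for removal when the
    // full-report diff is applied to a partial report.
    System.out.println(toRemove(nn, firstRpcOnly));
  }
}
{code}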

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, where each RPC 
> contains the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large amount of blocks, and the 
> report can even exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9008) Balancer#Parameters class could use a builder pattern

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735783#comment-14735783
 ] 

Hadoop QA commented on HDFS-9008:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 52s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 21s | The applied patch generated  
16 new checkstyle issues (total was 48, now 54). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 16s | Tests failed in hadoop-hdfs. |
| | | 206m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754709/HDFS-9008-trunk-v3.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 16b9037 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12345/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12345/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12345/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12345/console |


This message was automatically generated.

> Balancer#Parameters class could use a builder pattern
> -
>
> Key: HDFS-9008
> URL: https://issues.apache.org/jira/browse/HDFS-9008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: HDFS-9008-trunk-v1.patch, HDFS-9008-trunk-v2.patch, 
> HDFS-9008-trunk-v3.patch
>
>
> The Balancer#Parameters class is violating a few checkstyle rules.
> # Instance variables are not privately scoped and do not have accessor 
> methods.
> # The Balancer#Parameter constructor has too many arguments (according to 
> checkstyle).
> Changing this class to use the builder pattern could fix both of these style 
> issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8846) Add a unit test for INotify functionality across a layout version upgrade

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8846:
--
   Labels: 2.6.1-candidate 2.7.2-candidate  (was: )
Fix Version/s: 2.6.1

Pulled this as a companion to HDFS-8480 into 2.6.1.

Fixed minor merge conflicts. Ran compilation and 
TestDFSInotifyEventInputStream, TestDFSUpgrade, TestDFSUpgradeFromImage before 
the push.

> Add a unit test for INotify functionality across a layout version upgrade
> -
>
> Key: HDFS-8846
> URL: https://issues.apache.org/jira/browse/HDFS-8846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.8.0
>
> Attachments: HDFS-8846-branch-2.6.1.txt, HDFS-8846.00.patch, 
> HDFS-8846.01.patch, HDFS-8846.02.patch, HDFS-8846.03.patch
>
>
> Per discussion under HDFS-8480, we should create some edit log files with old 
> layout version, to test whether they can be correctly handled in upgrades.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8846) Add a unit test for INotify functionality across a layout version upgrade

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8846:
--
Attachment: HDFS-8846-branch-2.6.1.txt

Attaching patch that I committed to 2.6.1.

> Add a unit test for INotify functionality across a layout version upgrade
> -
>
> Key: HDFS-8846
> URL: https://issues.apache.org/jira/browse/HDFS-8846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.8.0
>
> Attachments: HDFS-8846-branch-2.6.1.txt, HDFS-8846.00.patch, 
> HDFS-8846.01.patch, HDFS-8846.02.patch, HDFS-8846.03.patch
>
>
> Per discussion under HDFS-8480, we should create some edit log files with old 
> layout version, to test whether they can be correctly handled in upgrades.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7929) inotify unable fetch pre-upgrade edit log segments once upgrade starts

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735844#comment-14735844
 ] 

Vinod Kumar Vavilapalli commented on HDFS-7929:
---

[~sjlee0] / [~zhz], never mind, I pulled in HDFS-8846 too into 2.6.1.

> inotify unable fetch pre-upgrade edit log segments once upgrade starts
> --
>
> Key: HDFS-7929
> URL: https://issues.apache.org/jira/browse/HDFS-7929
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-7929-000.patch, HDFS-7929-001.patch, 
> HDFS-7929-002.patch, HDFS-7929-003.patch
>
>
> inotify is often used to periodically poll HDFS events. However, once an HDFS 
> upgrade has started, edit logs are moved to /previous on the NN, which is not 
> accessible. Moreover, once the upgrade is finalized, /previous is currently 
> lost forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735850#comment-14735850
 ] 

Hadoop QA commented on HDFS-8716:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 56s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  1 
new checkstyle issues (total was 642, now 642). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 13s | Tests failed in hadoop-hdfs. |
| | | 207m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754722/HDFS-8716.8.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d9c1fab |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12346/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12346/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12346/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12346/console |


This message was automatically generated.

> introduce a new config specifically for safe mode block count
> -
>
> Key: HDFS-8716
> URL: https://issues.apache.org/jira/browse/HDFS-8716
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
> HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch, HDFS-8716.7.patch, 
> HDFS-8716.7.patch, HDFS-8716.8.patch
>
>
> During startup, the namenode waits for n replicas of each block to be 
> reported by datanodes before exiting safe mode. Currently n is tied to 
> the min replicas config. We could set min replicas to more than one, but we 
> might want to exit safe mode as soon as each block has one replica reported. 
> This can be worked out by introducing a new config variable for the safe mode 
> block count.
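
A minimal sketch of the proposed decoupling, assuming a hypothetical new key 
name (the patch may choose a different one): read a dedicated safe-mode 
replication floor, falling back to the existing min replicas config when the 
new key is unset.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class SafeModeReplicationSketch {
  static int safeModeReplication(Configuration conf) {
    // "dfs.namenode.safemode.replication.min" is a hypothetical key here.
    return conf.getInt("dfs.namenode.safemode.replication.min",
        conf.getInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY,
            DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_DEFAULT));
  }
}
{code}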



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8384:
--
   Labels: 2.6.1-candidate  (was: )
Fix Version/s: 2.6.1

Right when I was about to start closing down the release!

Pulled this into 2.6.1. Ran compilation before the push.

> Allow NN to startup if there are files having a lease but are not under 
> construction
> 
>
> Key: HDFS-8384
> URL: https://issues.apache.org/jira/browse/HDFS-8384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>Priority: Minor
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.8.0, 2.7.2
>
> Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, 
> HDFS-8384.000.patch
>
>
> When there are files that have a lease but are not under construction, the NN 
> will fail to start up with
> {code}
> 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for 
> /hadoop/hdfs/namenode
> java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124)
> ...
> {code}
> The actual problem is that the image could be corrupted by bugs like 
> HDFS-7587.  We should have an option/conf to allow the NN to start up so that 
> the problematic files can be deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove Replica hardlink / unlink code

2015-09-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735768#comment-14735768
 ] 

Colin Patrick McCabe commented on HDFS-8860:


It looks like this code was originally added in HADOOP-2655.  Since the new 
append implementation, it has no purpose any more.

+1 pending Jenkins

> Remove Replica hardlink / unlink code
> -
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8996) Remove redundant scanEditLog code

2015-09-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reassigned HDFS-8996:
---

Assignee: Zhe Zhang

> Remove redundant scanEditLog code
> -
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditlogLoader}}. This 
> is a place holder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-08 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9039:
---

 Summary: Split o.a.h.hdfs.NameNodeProxies class into two classes 
in hadoop-hdfs-client and hadoop-hdfs modules respectively
 Key: HDFS-9039
 URL: https://issues.apache.org/jira/browse/HDFS-9039
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by both 
the {{org.apache.hadoop.hdfs.server}} package (for server-side protocols) and 
the {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class is 
being moved to the {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
https://issues.apache.org/jira/browse/HDFS-8053]). As the 
{{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server-side 
protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't simply 
move this class to the {{hadoop-hdfs-client}} module as well.

This jira tracks the effort of moving the {{ClientProtocol}}-related static 
methods in the {{org.apache.hadoop.hdfs.NameNodeProxies}} class to the 
{{hadoop-hdfs-client}} module. A good place to put these static methods is a 
new class named {{NameNodeProxiesClient}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929 by using hard-links to preserve old edit logs instead of copying them

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8480:
--
Fix Version/s: 2.6.1

Pulled this into 2.6.1. As Ming mentioned above, the test fails after this 
patch, so pulled in HDFS-8846 too.

Had to fix merge and import conflicts, and make some minor changes to reflect 
2.6.1. Ran compilation before the push.

> Fix performance and timeout issues in HDFS-7929 by using hard-links to 
> preserve old edit logs instead of copying them
> -
>
> Key: HDFS-8480
> URL: https://issues.apache.org/jira/browse/HDFS-8480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.1
>
> Attachments: HDFS-8480.00.patch, HDFS-8480.01.patch, 
> HDFS-8480.02.patch, HDFS-8480.03.patch
>
>
> HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
> {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
> hard-linking instead of per-op copying to achieve the same goal.
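
A minimal sketch of the hard-link approach (a hypothetical standalone helper, 
not the committed patch): link each old edit segment into the new storage 
directory instead of copying its bytes, which is O(1) per segment.

{code}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class EditLogLinkSketch {
  static void preserveSegments(Path oldCurrentDir, Path newCurrentDir)
      throws IOException {
    try (DirectoryStream<Path> segments =
        Files.newDirectoryStream(oldCurrentDir, "edits_*")) {
      for (Path segment : segments) {
        // createLink(link, existing): a hard link, so no data is copied.
        Files.createLink(newCurrentDir.resolve(segment.getFileName()), segment);
      }
    }
  }
}
{code}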



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929 by using hard-links to preserve old edit logs instead of copying them

2015-09-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8480:
--
Attachment: HDFS-8480-branch-2.6.1.txt

Attaching patch that I committed to 2.6.1.

> Fix performance and timeout issues in HDFS-7929 by using hard-links to 
> preserve old edit logs instead of copying them
> -
>
> Key: HDFS-8480
> URL: https://issues.apache.org/jira/browse/HDFS-8480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.1
>
> Attachments: HDFS-8480-branch-2.6.1.txt, HDFS-8480.00.patch, 
> HDFS-8480.01.patch, HDFS-8480.02.patch, HDFS-8480.03.patch
>
>
> HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
> {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
> hard-linking instead of per-op copying to achieve the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6763) Initialize file system-wide quota once on transitioning to active

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735955#comment-14735955
 ] 

Hadoop QA commented on HDFS-6763:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  1 
new checkstyle issues (total was 359, now 359). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  7s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 14s | Tests failed in hadoop-hdfs. |
| | | 207m 20s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754749/HDFS-6763.v3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d9c1fab |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12347/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12347/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12347/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12347/console |


This message was automatically generated.

> Initialize file system-wide quota once on transitioning to active
> -
>
> Key: HDFS-6763
> URL: https://issues.apache.org/jira/browse/HDFS-6763
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6763.patch, HDFS-6763.v2.patch, HDFS-6763.v3.patch
>
>
> {{FSImage#loadEdits}} calls {{updateCountForQuota}} to recalculate & verify 
> quotas for the entire namespace.  A standby NN using shared edits calls this 
> method every minute.  The standby may appear to "hang" for many seconds.
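
A hedged sketch of the direction in the title (hypothetical gating, not the 
attached patch): skip the namespace-wide recalculation while tailing edits on 
the standby, and run it once on the transition to active.

{code}
public class QuotaInitSketch {
  private final boolean haEnabled = true;

  // Called after FSImage#loadEdits-style edit tailing.
  void afterLoadEdits() {
    if (!haEnabled) {
      updateCountForQuota();  // non-HA: keep the existing behavior
    }
    // HA standby: skip the expensive walk on every tailing cycle
  }

  // Called when the NN transitions to active.
  void transitionToActive() {
    updateCountForQuota();    // recompute and verify quotas exactly once
  }

  private void updateCountForQuota() {
    // placeholder for the O(namespace) recalculation
  }
}
{code}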



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8860:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to 2.8.  Thanks, [~eddyxu].

> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0
>
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735980#comment-14735980
 ] 

Hudson commented on HDFS-8860:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8418 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8418/])
HDFS-8860. Remove unused Replica copyOnWrite code (Lei (Eddy) Xu via Colin P. 
McCabe) (cmccabe: rev a153b9601ad8628fdd608d8696310ca8c1f58ff0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaWaitingToBeRecovered.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java


> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0
>
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8936) Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message

2015-09-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735997#comment-14735997
 ] 

Walter Su commented on HDFS-8936:
-

bq. moving locateFollowingBlock to OutputStream level.
bq. We can also consider moving the logic of bumping genStamp to OutputStream 
level.
It's not a good idea. As I said before, OutputStream and streamer have 
different roles to play. I've created HDFS-9040 to illustrate the idea of 
{{BlockGroupDataStreamer}}.

> Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message
> -
>
> Key: HDFS-8936
> URL: https://issues.apache.org/jira/browse/HDFS-8936
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: EC space quota.log, ECSpcaeQuota-20150831.log, 
> ECSpcaeQuota-20150907.log, None EC(Replication) space quota.log
>
>
> When an EC directory exceeds its DiskSpace quota, the error message is mixed 
> with the DFSStripedOutputStream inner exception message. Error messages should 
> be as simple and clear as for a normal HDFS directory. 
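
One possible direction, sketched here under assumptions (this is not the 
eventual patch): unwrap the cause chain and surface only the quota exception's 
own message, which is the same concise text a replicated file would produce.

{code}
import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

public class QuotaMessageSketch {
  static String userMessage(Throwable t) {
    for (Throwable c = t; c != null; c = c.getCause()) {
      if (c instanceof DSQuotaExceededException) {
        return c.getMessage();  // drop the DFSStripedOutputStream wrapper text
      }
    }
    return t.getMessage();
  }
}
{code}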



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9040) Erasure coding: A BlockGroupDataStreamer to rule all internal blocks streamers

2015-09-08 Thread Walter Su (JIRA)
Walter Su created HDFS-9040:
---

 Summary: Erasure coding: A BlockGroupDataStreamer to rule all 
internal blocks streamers
 Key: HDFS-9040
 URL: https://issues.apache.org/jira/browse/HDFS-9040
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Add a {{BlockGroupDataStreamer}} that communicates with the NN to allocate and 
update blocks, so that the {{StripedDataStreamer}}s only have to stream blocks 
to DNs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-09-08 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735922#comment-14735922
 ] 

Kai Sasaki commented on HDFS-8287:
--

[~szetszwo] Thank you so much. I'll check that!

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while parity is being written.
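
A rough sketch of the non-blocking idea, with hypothetical names (this is not 
the attached patch): hand the full stripe to a single background worker so 
parity encoding and {{waitAndQueuePacket}} happen off the user's writing 
thread.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncParitySketch {
  private final ExecutorService parityPool = Executors.newSingleThreadExecutor();

  // Called when a striping cell is full; returns immediately to the writer.
  void onStripeFull(byte[][] dataCells) {
    final byte[][] stripe = dataCells.clone();  // snapshot for the worker
    parityPool.submit(() -> {
      byte[][] parity = encode(stripe);   // RS encoding, placeholder
      enqueueParityPackets(parity);       // the blocking queueing happens here
    });
  }

  private byte[][] encode(byte[][] stripe) { return new byte[2][0]; }
  private void enqueueParityPackets(byte[][] parity) { }
}
{code}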



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735993#comment-14735993
 ] 

Lei (Eddy) Xu commented on HDFS-8860:
-

Thanks a lot for the reviews, [~cmccabe].

> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0
>
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-09-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736004#comment-14736004
 ] 

Brahma Reddy Battula commented on HDFS-5215:


Sorry for coming in late. Yes, it was not intentional.
{quote}My opinion is that the definition of non-DFS used should not have 
changed.{quote}
agree with you.

Let's followup in HDFS-9038.

> dfs.datanode.du.reserved is not considered while computing available space
> --
>
> Key: HDFS-5215
> URL: https://issues.apache.org/jira/browse/HDFS-5215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.7.1
>
> Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
> HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch
>
>
> {code}public long getAvailable() throws IOException {
> long remaining = getCapacity()-getDfsUsed();
> long available = usage.getAvailable();
> if (remaining > available) {
>   remaining = available;
> }
> return (remaining > 0) ? remaining : 0;
>   } 
> {code}
> Here we are not considering the reserved space when computing the available 
> space.
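
A minimal sketch of the fix direction (hypothetical signature, not the 
committed patch): subtract the configured {{dfs.datanode.du.reserved}} value 
before clamping.

{code}
public class AvailableSpaceSketch {
  static long getAvailable(long capacity, long dfsUsed,
                           long diskAvailable, long reserved) {
    long remaining = capacity - dfsUsed - reserved;  // honor the reserve
    long available = diskAvailable - reserved;       // free space minus reserve
    if (remaining > available) {
      remaining = available;
    }
    return Math.max(remaining, 0);
  }
}
{code}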



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8936) Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message

2015-09-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736082#comment-14736082
 ] 

Zhe Zhang commented on HDFS-8936:
-

[~walter.k.su] What I'd like us to do is basically to move the NN-related 
logic ({{locateFollowingBlock}} and {{bumpGenStamp}}) *up* to a coordinated 
level (either OutputStream, Coordinator, or a new class). If creating a 
{{BlockGroupDataStreamer}} class is easier than refactoring {{DFSOutputStream}}, 
then I'm +1 on the idea.

> Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message
> -
>
> Key: HDFS-8936
> URL: https://issues.apache.org/jira/browse/HDFS-8936
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: EC space quota.log, ECSpcaeQuota-20150831.log, 
> ECSpcaeQuota-20150907.log, None EC(Replication) space quota.log
>
>
> When an EC directory exceeds its DiskSpace quota, the error message is mixed 
> with the DFSStripedOutputStream inner exception message. Error messages should 
> be as simple and clear as for a normal HDFS directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Status: Open  (was: Patch Available)

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we now use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
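
A small sketch of the deprecation step (shown standalone here; the real field 
lives in {{NameNode}}):

{code}
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public class NameNodePortSketch {
  /** @deprecated Use HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT instead. */
  @Deprecated
  public static final int DEFAULT_PORT =
      HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;  // 8020
}
{code}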



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Status: Open  (was: Patch Available)

> Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method
> ---
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate over the pre-built block storage policy array in 
> order to find the _same_ policy every time, even though only its id matters (as 
> we need to compare the file status policy id with the lazy persist policy id)
> 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to {{hadoop-hdfs-client}} module, while the latter can stay 
> in {{hadoop-hdfs}} module.
> Actually, we have the block storage policy IDs, which can be used to compare 
> with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
> return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Status: Open  (was: Patch Available)

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they're used by the {{DFSClient}} and {{NameNodeProxies}} 
> classes, which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of  {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these 
> methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9027:

Status: Patch Available  (was: Open)

> Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method
> ---
>
> Key: HDFS-9027
> URL: https://issues.apache.org/jira/browse/HDFS-9027
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9027.000.patch
>
>
> In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} 
> class checks whether the HDFS file is lazy persist. It does two things:
> 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which 
> builds an array of {{BlockStoragePolicy}} internally
> 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by 
> policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}}
> This has two side effects:
> 1. It takes time to iterate over the pre-built block storage policy array in 
> order to find the _same_ policy every time, even though only its id matters (as 
> we need to compare the file status policy id with the lazy persist policy id)
> 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former 
> should be moved to {{hadoop-hdfs-client}} module, while the latter can stay 
> in {{hadoop-hdfs}} module.
> Actually, we have the block storage policy IDs, which can be used to compare 
> with the HDFS file status' policy id, as follows:
> {code}
> static boolean isLazyPersist(HdfsFileStatus stat) {
> return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID;
> }
> {code}
> This way, we only need to move the block storage policies' IDs from 
> {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} 
> ({{hadoop-hdfs-client}} module).
> Another reason we should move those block storage policy IDs is that the 
> block storage policy names were moved to {{HdfsConstants}} already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Status: Patch Available  (was: Open)

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we now use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736090#comment-14736090
 ] 

Hudson commented on HDFS-8860:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1097 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1097/])
HDFS-8860. Remove unused Replica copyOnWrite code (Lei (Eddy) Xu via Colin P. 
McCabe) (cmccabe: rev a153b9601ad8628fdd608d8696310ca8c1f58ff0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaWaitingToBeRecovered.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0
>
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8688) replace shouldCheckForEnoughRacks with hasClusterEverBeenMultiRack

2015-09-08 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736097#comment-14736097
 ] 

Ming Ma commented on HDFS-8688:
---

[~walter.k.su] appreciate your effort!

> replace shouldCheckForEnoughRacks with hasClusterEverBeenMultiRack
> --
>
> Key: HDFS-8688
> URL: https://issues.apache.org/jira/browse/HDFS-8688
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8688.01.patch, HDFS-8688.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8860) Remove unused Replica copyOnWrite code

2015-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736128#comment-14736128
 ] 

Hudson commented on HDFS-8860:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #366 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/366/])
HDFS-8860. Remove unused Replica copyOnWrite code (Lei (Eddy) Xu via Colin P. 
McCabe) (cmccabe: rev a153b9601ad8628fdd608d8696310ca8c1f58ff0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaWaitingToBeRecovered.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> Remove unused Replica copyOnWrite code
> --
>
> Key: HDFS-8860
> URL: https://issues.apache.org/jira/browse/HDFS-8860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0
>
> Attachments: HDFS-8860.0.patch
>
>
> {{ReplicaInfo#unlinkBlock()}} is effectively disabled by the following code, 
> because {{isUnlinked()}} always returns true.
> {code}
> if (isUnlinked()) {
>   return false;
> }
> {code}
> Several test cases, e.g., {{TestFileAppend#testCopyOnWrite}} and 
> {{TestDatanodeRestart#testRecoverReplicas}}, are testing against the unlink 
> functionality. Let's remove the relevant code to eliminate the confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736017#comment-14736017
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9011:
---

Another possible solution is to accumulate the partial reports in the NN.  It 
seems fine since the DN is supposed to send all its partial reports at once.  
The NN can store the partial reports in the block report lease temporarily.

The lease expiry time for partial reports can be very short, say 3 minutes.  
When the NN receives a partial report, it stores it in the lease and renews the 
lease.  When the NN receives the last partial report, it processes the full 
report.  When the lease expires, the NN removes the accumulated partial reports 
and rejects future partial reports with the same ID.
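
A rough sketch of this accumulate-in-NN idea, with hypothetical names (not the 
actual block report lease code): partial reports are buffered under their lease 
id and processed only when the last one arrives; an expired lease drops the 
buffered pieces so the diff is never run on an incomplete set.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartialReportBufferSketch {
  private static final long EXPIRY_MS = 3 * 60 * 1000;  // short expiry, 3 minutes
  private final Map<Long, List<Object>> parts = new HashMap<>();
  private final Map<Long, Long> lastRenewedMs = new HashMap<>();

  synchronized boolean offer(long leaseId, Object part, boolean isLast, long nowMs) {
    Long renewed = lastRenewedMs.get(leaseId);
    if (renewed != null && nowMs - renewed > EXPIRY_MS) {
      parts.remove(leaseId);           // lease expired: drop accumulated pieces
      lastRenewedMs.remove(leaseId);
      return false;                    // in the real design, later parts with
    }                                  // this id would also be rejected
    parts.computeIfAbsent(leaseId, k -> new ArrayList<>()).add(part);
    lastRenewedMs.put(leaseId, nowMs); // receiving a part renews the lease
    if (isLast) {
      List<Object> full = parts.remove(leaseId);
      lastRenewedMs.remove(leaseId);
      processFullReport(full);         // run the normal full-report diff once
    }
    return true;
  }

  private void processFullReport(List<Object> full) {
    // placeholder for the existing full block report processing
  }
}
{code}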

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, where each RPC 
> contains the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large amount of blocks, and the 
> report can even exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9037) Trash messages should be handled by Logger instead of being delivered on System.out

2015-09-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736026#comment-14736026
 ] 

Akira AJISAKA commented on HDFS-9037:
-

I'm +1 for this issue. We faced the problem that trash messages are logged to 
the hive shell even in silent mode. I'll check that the patch fixes it.

> Trash messages should be handled by Logger instead of being delivered on 
> System.out 
> 
>
> Key: HDFS-9037
> URL: https://issues.apache.org/jira/browse/HDFS-9037
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.6.0
>Reporter: Ashutosh Chauhan
>Assignee: Mingliang Liu
> Attachments: HDFS-9037.000.patch
>
>
> Specifically,
> {code}
>   if (success) {
>   System.out.println("Moved: '" + p + "' to trash at: " +
>   trash.getCurrentTrashDir() );
> }
> {code}
> should be:
> {code}
>   if (success) {
>   LOG.info("Moved: '" + p + "' to trash at: " + 
> trash.getCurrentTrashDir());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-09-08 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8287:
-
Attachment: HDFS-8287-HDFS-7285.08.patch

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch, HDFS-8287-HDFS-7285.08.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while parity is being written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Status: Patch Available  (was: Open)

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they're used by the {{DFSClient}} and {{NameNodeProxies}} 
> classes, which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of  {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these 
> methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

