[jira] [Commented] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290955#comment-14290955
 ] 

Hudson commented on HDFS-7676:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6926 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6926/])
HDFS-7676. Fix TestFileTruncate to avoid bug of HDFS-7611. Contributed by 
Konstantin Shvachko. (shv: rev 370396509deb5c9319c5db880f3e4058b20a7514)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 2.7.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290954#comment-14290954
 ] 

Hudson commented on HDFS-7611:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6926 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6926/])
HDFS-7676. Fix TestFileTruncate to avoid bug of HDFS-7611. Contributed by 
Konstantin Shvachko. (shv: rev 370396509deb5c9319c5db880f3e4058b20a7514)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> deleteSnapshot and delete of a file can leave orphaned blocks in the 
> blocksMap on NameNode restart.
> ---
>
> Key: HDFS-7611
> URL: https://issues.apache.org/jira/browse/HDFS-7611
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Byron Wong
>Priority: Critical
> Attachments: blocksNotDeletedTest.patch, testTruncateEditLogLoad.log
>
>
> If quotas are enabled, a combination of the operations *deleteSnapshot* and 
> *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
> restart. They are counted as missing on the NameNode, can prevent the 
> NameNode from coming out of safeMode, and could cause a memory leak during 
> startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7676:
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this to trunk and branch-2.

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 2.7.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290952#comment-14290952
 ] 

Konstantin Shvachko commented on HDFS-7676:
---

Test failures are clearly unrelated.

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7659) We should check the new length of truncate can't be a negative value.

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7659:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> We should check the new length of truncate can't be a negative value.
> -
>
> Key: HDFS-7659
> URL: https://issues.apache.org/jira/browse/HDFS-7659
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7659-branch2.patch, HDFS-7659.001.patch, 
> HDFS-7659.002.patch, HDFS-7659.003.patch
>
>
> It's obvious that we should check that the new length for truncate can't be 
> a negative value.
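A minimal sketch of such a check, assuming it is done client-side in DFSClient#truncate before the RPC is issued (the committed patch may place or word it differently):

{code}
// Sketch only, not necessarily the HDFS-7659 patch: reject negative lengths early.
if (newLength < 0) {
  throw new HadoopIllegalArgumentException(
      "Cannot truncate to a negative file size: " + newLength + ".");
}
{code}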



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7643) Test case to ensure lazy persist files cannot be truncated

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7643:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> Test case to ensure lazy persist files cannot be truncated
> --
>
> Key: HDFS-7643
> URL: https://issues.apache.org/jira/browse/HDFS-7643
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7643.001.patch
>
>
> Task to add a test case for HDFS-7634. Ensure that an attempt to truncate a 
> file created with the LAZY_PERSIST policy is rejected by the NameNode. For 
> reference see {{TestLazyPersistFiles#testAppendIsDenied}}.
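A hedged sketch of what such a test could look like, modeled on {{testAppendIsDenied}}; the method name, setup helpers, and constants below are assumptions, not the committed HDFS-7643 test:

{code}
@Test
public void testTruncateIsDenied() throws IOException {
  // Assumed helpers from TestLazyPersistFiles: start a cluster with RAM_DISK
  // storage and create a file with the LAZY_PERSIST policy.
  startUpCluster(true, -1);
  final Path path = new Path("/testTruncateIsDenied.dat");
  makeTestFile(path, BLOCK_SIZE, true);

  try {
    fs.truncate(path, BLOCK_SIZE / 2);
    fail("Truncate of a LAZY_PERSIST file was expected to fail");
  } catch (IOException e) {
    // Expected: the NameNode rejects truncate for lazy persist files.
  }
}
{code}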



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7634) Disallow truncation of Lazy persist files

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7634:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> Disallow truncation of Lazy persist files
> -
>
> Key: HDFS-7634
> URL: https://issues.apache.org/jira/browse/HDFS-7634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7634.001.patch, HDFS-7634.002.patch
>
>
> Similar to {{append}}, lazy persist (in-memory) files should not currently 
> support truncate. Quoting the reason from the HDFS-6581 design doc:
> {quote}
> Appends to files created with the LAZY_PERSIST flag will not be allowed in the 
> initial implementation to avoid the complexity of keeping in-memory and 
> on-disk replicas in sync on a given DataNode.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7638:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> Small fix and few refinements for FSN#truncate
> --
>
> Key: HDFS-7638
> URL: https://issues.apache.org/jira/browse/HDFS-7638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7638.001.patch
>
>
> *1.* 
> {code}
> removeBlocks(collectedBlocks);
> {code}
> should come after {{logSync}}, as we do in other FSN places (rename, delete, 
> write with overwrite); the reason is discussed in HDFS-2815 and 
> https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
> *2.*
> {code}
> stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
> FSDirectory.isReservedRawName(src), true);
> {code}
> We'd better use {{dir.getAuditFileInfo}}, since this is only for the audit 
> log. If the audit log is not on, we don't need to get the file info.
> *3.*
> In {{truncateInternal}}, 
> {code}
> INodeFile file = iip.getLastINode().asFile();
> {code}
> is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7606:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7606-1.patch, HDFS-7606.patch, HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, a NullPointerException would 
> result from the call to diff.getSnapshotId().
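A minimal sketch of the kind of guard being asked for (illustrative only; the committed HDFS-7606 patch may handle this differently):

{code}
BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : diff.getBlocks();
if (snapshotBlocks != null)
  return snapshotBlocks;
// Blocks are not in the current snapshot. Avoid dereferencing a null diff
// before looking up a later snapshot; fall back to the current file blocks.
if (diff == null)
  return getBlocks();
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
{code}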



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7056:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

Merged to branch-2.

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Fix For: 2.7.0
>
> Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
> HDFS-3107-HDFS-7056-combined-15.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-7056-13.patch, HDFS-7056-15.patch, HDFS-7056.15_branch2.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFSSnapshotWithTruncateDesign.docx, editsStored, editsStored.xml
>
>
> The implementation of truncate in HDFS-3107 does not allow truncating files 
> which are in a snapshot. It is desirable to be able to truncate a file and 
> still keep its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3107) HDFS truncate

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-3107:
--
Fix Version/s: (was: 3.0.0)
   2.7.0

I just merged the following jiras to branch-2:
HDFS-3107, HDFS-7056, HDFS-7606, HDFS-7638, HDFS-7634, HDFS-7643, HADOOP-11490, 
HDFS-7659.

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Fix For: 2.7.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.15_branch2.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf, 
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
> editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate (a standard POSIX operation), which is the reverse of 
> append; this forces upper-layer applications to use ugly workarounds (such as 
> keeping track of the discarded byte range per file in a separate metadata 
> store, and periodically running a vacuum process to rewrite compacted files) 
> to overcome this limitation of HDFS.
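A brief usage sketch of the truncate API this feature adds (the path below is hypothetical): {{FileSystem#truncate(Path, long)}} returns true when the file is immediately truncated to the new length, and false when the last block first needs recovery.

{code}
FileSystem fs = FileSystem.get(conf);
Path f = new Path("/user/example/data.log");   // hypothetical file
boolean done = fs.truncate(f, 1024L);          // keep only the first 1024 bytes
if (!done) {
  // The last block is under recovery; the file reaches the new length shortly.
}
{code}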



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7659) We should check the new length of truncate can't be a negative value.

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7659:
--
Attachment: HDFS-7659-branch2.patch

Attaching patch for branch-2.

> We should check the new length of truncate can't be a negative value.
> -
>
> Key: HDFS-7659
> URL: https://issues.apache.org/jira/browse/HDFS-7659
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-7659-branch2.patch, HDFS-7659.001.patch, 
> HDFS-7659.002.patch, HDFS-7659.003.patch
>
>
> It's obvious that we should check that the new length for truncate can't be 
> a negative value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7584) Enable Quota Support for Storage Types (SSD)

2015-01-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290903#comment-14290903
 ] 

Zhe Zhang commented on HDFS-7584:
-

[~xyao] Will the legacy quota become deprecated after this change? If so we 
should mention it in the documentation. Otherwise we should also add some 
guidelines on how to set both legacy and type-aware quotas. The above example, 
where the overall quota is supposedly smaller than the aggregate of the type 
quotas, isn't easy to understand.

Otherwise the patch looks good to me. Thanks again for the good work!

> Enable Quota Support for Storage Types (SSD) 
> -
>
> Key: HDFS-7584
> URL: https://issues.apache.org/jira/browse/HDFS-7584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
> HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
> HDFS-7584.4.patch, editsStored
>
>
> Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
> This JIRA is opened to enable quota support for different storage types in 
> terms of storage space usage. This is more important for certain storage 
> types, such as SSD, as they are precious and more performant. 
> As described in the design doc of HDFS-5682, we plan to add a new 
> quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
> by storage type feature is applied at the HDFS directory level, similar to 
> the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7339) Allocating and persisting block groups in NameNode

2015-01-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290900#comment-14290900
 ] 

Zhe Zhang commented on HDFS-7339:
-

Thanks [~szetszwo]. I like the idea of using the first block to represent the 
block group. It could allow us to reuse block management code once we go over 
more details and make sure it's viable. It seems to me it should work for the 
striping layout: all block groups in a file share the same layout and schema, 
both of which can be obtained from the inode. 

When we implement EC with the contiguous layout we will need an explicit 
BlockGroup class, but it can be much simpler.

Regarding generation stamps: what if an EC block is lost and recovered? Should 
the NN give the recovered block a new stamp?

> Allocating and persisting block groups in NameNode
> --
>
> Key: HDFS-7339
> URL: https://issues.apache.org/jira/browse/HDFS-7339
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-7339-001.patch, HDFS-7339-002.patch, 
> HDFS-7339-003.patch, HDFS-7339-004.patch, HDFS-7339-005.patch, 
> HDFS-7339-006.patch, Meta-striping.jpg, NN-stripping.jpg
>
>
> All erasure codec operations center around the concept of _block group_; they 
> are formed in initial encoding and looked up in recoveries and conversions. A 
> lightweight class {{BlockGroup}} is created to record the original and parity 
> blocks in a coding group, as well as a pointer to the codec schema (pluggable 
> codec schemas will be supported in HDFS-7337). With the striping layout, the 
> HDFS client needs to operate on all blocks in a {{BlockGroup}} concurrently. 
> Therefore we propose to extend a file’s inode to switch between _contiguous_ 
> and _striping_ modes, with the current mode recorded in a binary flag. An 
> array of BlockGroups (or BlockGroup IDs) is added, which remains empty for 
> “traditional” HDFS files with contiguous block layout.
> The NameNode creates and maintains {{BlockGroup}} instances through the new 
> {{ECManager}} component; the attached figure has an illustration of the 
> architecture. As a simple example, when a {_Striping+EC_} file is created and 
> written to, it will serve requests from the client to allocate new 
> {{BlockGroups}} and store them under the {{INodeFile}}. In the current phase, 
> {{BlockGroups}} are allocated both in initial online encoding and in the 
> conversion from replication to EC. {{ECManager}} also facilitates the lookup 
> of {{BlockGroup}} information for block recovery work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290897#comment-14290897
 ] 

Hadoop QA commented on HDFS-7676:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694390/HDFS-7676.patch
  against trunk revision 8f26d5a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9324//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9324//console

This message is automatically generated.

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2015-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290892#comment-14290892
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6133:
---

Thanks for working on this.  The pinning idea is very interesting!

A replica is pinned only if it is stored on a favored node.  In a write 
pipeline, it could be that some datanodes are favored nodes while others are 
not.  So the new parameters in DataTransferProtocol.writeBlock should be 
similar to storageType and targetStorageTypes, i.e. a new pinning parameter for 
the next datanode and another new parameter, targetPinnings, for the downstream 
datanodes.
{code}
   public void writeBlock(final ExtendedBlock blk,
   final StorageType storageType, 
+  final boolean pinning,
   final Token blockToken,
   final String clientName,
   final DatanodeInfo[] targets,
   final StorageType[] targetStorageTypes, 
+  final boolean[] targetPinnings,
   final DatanodeInfo source,
   final BlockConstructionStage stage,
   final int pipelineSize,
{code}


> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133-1.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, 
> HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files which have a specific 
> path prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7659) We should check the new length of truncate can't be a negative value.

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290889#comment-14290889
 ] 

Hudson commented on HDFS-7659:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6924 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6924/])
HDFS-7659. truncate should check negative value of the new length. Contributed 
by Yi Liu. (shv: rev e9fd46ddbf46954cfda4bb9c33f1789775be9d18)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> We should check the new length of truncate can't be a negative value.
> -
>
> Key: HDFS-7659
> URL: https://issues.apache.org/jira/browse/HDFS-7659
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-7659.001.patch, HDFS-7659.002.patch, 
> HDFS-7659.003.patch
>
>
> It's obvious that we should check that the new length for truncate can't be 
> a negative value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7659) We should check the new length of truncate can't be a negative value.

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7659:
--
  Resolution: Fixed
Target Version/s: 2.7.0  (was: 3.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed this. Thank you Yi.

> We should check the new length of truncate can't be a negative value.
> -
>
> Key: HDFS-7659
> URL: https://issues.apache.org/jira/browse/HDFS-7659
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-7659.001.patch, HDFS-7659.002.patch, 
> HDFS-7659.003.patch
>
>
> It's obvious that we should check that the new length for truncate can't be 
> a negative value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290881#comment-14290881
 ] 

Konstantin Boudnik commented on HDFS-7676:
--

+1 - good catch!

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290864#comment-14290864
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3107:
---

Sure.  I think it is fine to merge.

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Fix For: 3.0.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.15_branch2.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf, 
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
> editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate (a standard POSIX operation), which is the reverse of 
> append; this forces upper-layer applications to use ugly workarounds (such as 
> keeping track of the discarded byte range per file in a separate metadata 
> store, and periodically running a vacuum process to rewrite compacted files) 
> to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290863#comment-14290863
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3107:
---

Sure.  I think it is fine to merge.

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Fix For: 3.0.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.15_branch2.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf, 
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
> editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate (a standard POSIX operation), which is the reverse of 
> append; this forces upper-layer applications to use ugly workarounds (such as 
> keeping track of the discarded byte range per file in a separate metadata 
> store, and periodically running a vacuum process to rewrite compacted files) 
> to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7672) Handle write failure for EC blocks

2015-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290859#comment-14290859
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7672:
---

Sure, please let me know when the new design is ready.

> Handle write failure for EC blocks
> --
>
> Key: HDFS-7672
> URL: https://issues.apache.org/jira/browse/HDFS-7672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> For (6, 3)-Reed-Solomon, a client writes to 6 data blocks and 3 parity blocks 
> concurrently.  We need to handle datanode or network failures when writing an 
> EC BlockGroup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7676:
--
Status: Patch Available  (was: Open)

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7676:
--
Attachment: HDFS-7676.patch

The fix is to shrink the edits so that they do not include transactions from 
previous runs. That way restarts in testTruncateEditLogLoad will not replay 
unrelated edits.
I ran TestFileTruncate with this patch many times with no failures.
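One way a test can keep earlier transactions out of the edits that a later restart will replay is to checkpoint before the test runs; this is only an illustration of the idea, not necessarily the committed HDFS-7676 change:

{code}
// Force a checkpoint so the edit log only contains transactions from this test.
dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
dfs.saveNamespace();
dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
{code}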

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
> Attachments: HDFS-7676.patch
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reassigned HDFS-7676:
-

Assignee: Konstantin Shvachko

> Fix TestFileTruncate to avoid bug of HDFS-7611
> --
>
> Key: HDFS-7676
> URL: https://issues.apache.org/jira/browse/HDFS-7676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 3.0.0
>
>
> This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
> described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7676) Fix TestFileTruncate to avoid bug of HDFS-7611

2015-01-24 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7676:
-

 Summary: Fix TestFileTruncate to avoid bug of HDFS-7611
 Key: HDFS-7676
 URL: https://issues.apache.org/jira/browse/HDFS-7676
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko


This is to fix testTruncateEditLogLoad(), which is failing due to the bug 
described in HDFS-7611.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7341) Add initial snapshot support based on pipeline recovery

2015-01-24 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290814#comment-14290814
 ] 

Konstantin Shvachko commented on HDFS-7341:
---

Colin, are you still working on this?
Should we close it? 

> Add initial snapshot support based on pipeline recovery
> ---
>
> Key: HDFS-7341
> URL: https://issues.apache.org/jira/browse/HDFS-7341
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
> Attachments: HDFS-3107_Nov3.patch, editsStored_Nov3, 
> editsStored_Nov3.xml
>
>
> Add initial snapshot support based on pipeline recovery.  This iteration does 
> not support snapshots or rollback.  This support will be added in the 
> HDFS-3107 branch by later subtasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7675) Unused member DFSClient.spanReceiverHost

2015-01-24 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7675:
-

 Summary: Unused member DFSClient.spanReceiverHost
 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko


DFSClient.spanReceiverHost is initialised but never used, so it could be 
redundant. It was introduced by HDFS-7055.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7584) Enable Quota Support for Storage Types (SSD)

2015-01-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290736#comment-14290736
 ] 

Hadoop QA commented on HDFS-7584:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694377/HDFS-7584.4.patch
  against trunk revision 8f26d5a.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9323//console

This message is automatically generated.

> Enable Quota Support for Storage Types (SSD) 
> -
>
> Key: HDFS-7584
> URL: https://issues.apache.org/jira/browse/HDFS-7584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
> HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
> HDFS-7584.4.patch, editsStored
>
>
> Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
> This JIRA is opened to enable quota support for different storage types in 
> terms of storage space usage. This is more important for certain storage 
> types, such as SSD, as they are precious and more performant. 
> As described in the design doc of HDFS-5682, we plan to add a new 
> quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
> by storage type feature is applied at the HDFS directory level, similar to 
> the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-01-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290735#comment-14290735
 ] 

Chris Nauroth commented on HDFS-7609:
-

Specifically, the retry cache was added in 2.1.0-beta, so the theory in my last 
comment would only be valid if you're running RPC clients older than that.

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
> Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness is caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the 
> restart process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.
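A minimal sketch of the workaround described above; the property name is the one referenced in this report, and in practice it would be set in hdfs-site.xml before restarting the NameNode:

{code}
// Illustration only: disable the NameNode retry cache for a recovery restart.
Configuration conf = new HdfsConfiguration();
conf.setBoolean("dfs.namenode.enable.retrycache", false);
{code}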



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7584) Enable Quota Support for Storage Types (SSD)

2015-01-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7584:
-
Status: Patch Available  (was: Open)

> Enable Quota Support for Storage Types (SSD) 
> -
>
> Key: HDFS-7584
> URL: https://issues.apache.org/jira/browse/HDFS-7584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
> HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
> HDFS-7584.4.patch, editsStored
>
>
> Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
> This JIRA is opened to enable quota support for different storage types in 
> terms of storage space usage. This is more important for certain storage 
> types, such as SSD, as they are precious and more performant. 
> As described in the design doc of HDFS-5682, we plan to add a new 
> quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
> by storage type feature is applied at the HDFS directory level, similar to 
> the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7584) Enable Quota Support for Storage Types (SSD)

2015-01-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7584:
-
Attachment: HDFS-7584.4.patch

Update patch with refactoring and setReplication handling.

> Enable Quota Support for Storage Types (SSD) 
> -
>
> Key: HDFS-7584
> URL: https://issues.apache.org/jira/browse/HDFS-7584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
> HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
> HDFS-7584.4.patch, editsStored
>
>
> Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
> This JIRA is opened to enable quota support for different storage types in 
> terms of storage space usage. This is more important for certain storage 
> types, such as SSD, as they are precious and more performant. 
> As described in the design doc of HDFS-5682, we plan to add a new 
> quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
> by storage type feature is applied at the HDFS directory level, similar to 
> the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-01-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290731#comment-14290731
 ] 

Chris Nauroth commented on HDFS-7609:
-

[~kihwal] and [~mingma], thank you for the additional details.  It looks like 
in your case, you noticed the slowdown in the standby NN tailing the edits.  I 
had focused on profiling NN process startup as described in the original 
problem report.  I'll take a look at the standby too.

{{PriorityQueue#remove}} is O\(n\), so that definitely could be problematic.  
It's odd that there would be so many collisions that this would become 
noticeable though.  Are any of you running a significant number of legacy 
applications linked to the RPC code before introduction of the retry cache 
support?  If that were the case, then perhaps a huge number of calls are not 
supplying a call ID, and then the NN is getting a default call ID value from 
protobuf decoding, thus causing a lot of collisions.

bq. If PriorityQueue.remove() took much time, can we utilize 
PriorityQueue.removeAll(Collection) so that multiple CacheEntry's are removed 
in one round ?

Unfortunately, I don't think our usage pattern is amenable to that change.  We 
apply transactions one by one.  Switching to {{removeAll}} implies a pretty big 
code restructuring to batch up retry cache entries before the calls into the 
retry cache.  Encountering a huge number of collisions is unexpected, so I'd 
prefer to investigate that.

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
> Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness is caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the 
> restart process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7311) TestLeaseRecovery2 sometimes fails in trunk

2015-01-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-7311.
--
Resolution: Cannot Reproduce

> TestLeaseRecovery2 sometimes fails in trunk
> ---
>
> Key: HDFS-7311
> URL: https://issues.apache.org/jira/browse/HDFS-7311
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/ :
> {code}
> REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery
> Error Message:
> Call From asf909.gq1.ygridcore.net/67.195.81.153 to localhost:55061 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> Stack Trace:
> java.net.ConnectException: Call From asf909.gq1.ygridcore.net/67.195.81.153 
> to localhost:55061 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
> at org.apache.hadoop.ipc.Client.call(Client.java:1438)
> at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy19.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
> at com.sun.proxy.$Proxy20.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1694)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1654)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1579)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:276)
> FAILED:  
> org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2
> Error Message:
> Test resulted in an unexpected exit
> Stack Trace:
> java.lang.AssertionError: Test resulted in an unexpected exit
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1709)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1696)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:105)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7464) TestDFSAdminWithHA#testRefreshSuperUserGroupsConfiguration fails against Java 8

2015-01-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-7464.
--
Resolution: Cannot Reproduce

> TestDFSAdminWithHA#testRefreshSuperUserGroupsConfiguration fails against Java 
> 8
> ---
>
> Key: HDFS-7464
> URL: https://issues.apache.org/jira/browse/HDFS-7464
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/23/ :
> {code}
> REGRESSION:  
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfiguration
> Error Message:
> refreshSuperUserGroupsConfiguration: End of File Exception between local host 
> is: "asf908.gq1.ygridcore.net/67.195.81.152"; destination host is: 
> "localhost":12700; : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException expected:<0> but was:<-1>
> Stack Trace:
> java.lang.AssertionError: refreshSuperUserGroupsConfiguration: End of File 
> Exception between local host is: "asf908.gq1.ygridcore.net/67.195.81.152"; 
> destination host is: "localhost":12700; : java.io.EOFException; For more 
> details see:  http://wiki.apache.org/hadoop/EOFException expected:<0> but 
> was:<-1>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfiguration(TestDFSAdminWithHA.java:228)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7422) TestEncryptionZonesWithKMS fails against Java 8

2015-01-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-7422.
--
Resolution: Cannot Reproduce

> TestEncryptionZonesWithKMS fails against Java 8
> ---
>
> Key: HDFS-7422
> URL: https://issues.apache.org/jira/browse/HDFS-7422
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/12/ :
> {code}
> REGRESSION:  
> org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWriteUsingWebHdfs
> Error Message:
> Stream closed.
> Stack Trace:
> java.io.IOException: Stream closed.
> at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:385)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:91)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:656)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:622)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:458)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:487)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:483)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$UnresolvedUrlOpener.connect(WebHdfsFileSystem.java:1204)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:120)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104)
> at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.(ByteRangeInputStream.java:89)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$OffsetUrlInputStream.(WebHdfsFileSystem.java:1261)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1175)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.verifyFilesEqual(DFSTestUtil.java:1399)
> at 
> org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:634)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException: Stream closed.
> at 
> org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:165)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:353)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:91)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:608)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:458)
> at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:487)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
> a

[jira] [Commented] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-01-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290686#comment-14290686
 ] 

Ted Yu commented on HDFS-7471:
--

This test passed in recent builds.

> TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
> -
>
> Key: HDFS-7471
> URL: https://issues.apache.org/jira/browse/HDFS-7471
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Binglin Chang
> Attachments: HDFS-7471.001.patch
>
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
> {code}
> FAILED:  
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Error Message:
> The map of version counts returned by DatanodeManager was not what it was 
> expected to be on iteration 237 expected:<0> but was:<1>
> Stack Trace:
> java.lang.AssertionError: The map of version counts returned by 
> DatanodeManager was not what it was expected to be on iteration 237 
> expected:<0> but was:<1>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7567) Potential null dereference in FSEditLogLoader#applyEditLogOp()

2015-01-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7567:
-
Resolution: Later
Status: Resolved  (was: Patch Available)

> Potential null dereference in FSEditLogLoader#applyEditLogOp()
> --
>
> Key: HDFS-7567
> URL: https://issues.apache.org/jira/browse/HDFS-7567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7567.patch
>
>
> {code}
>   INodeFile oldFile = INodeFile.valueOf(iip.getLastINode(), path, true);
>   if (oldFile != null && addCloseOp.overwrite) {
> ...
>   INodeFile newFile = oldFile;
> ...
>   // Update the salient file attributes.
>   newFile.setAccessTime(addCloseOp.atime, Snapshot.CURRENT_STATE_ID);
>   newFile.setModificationTime(addCloseOp.mtime, 
> Snapshot.CURRENT_STATE_ID);
> {code}
> The last two lines are not protected by a null check.
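
A minimal sketch of one way to guard those calls, assuming only the fragment 
quoted above (this is not the attached hdfs-7567.patch):

{code}
// Sketch only: keep the attribute updates under the same null check that
// already guards the overwrite handling, so a null oldFile is never dereferenced.
INodeFile oldFile = INodeFile.valueOf(iip.getLastINode(), path, true);
if (oldFile != null) {
  if (addCloseOp.overwrite) {
    // ... existing overwrite handling ...
  }
  INodeFile newFile = oldFile;
  // Update the salient file attributes only when the inode was resolved.
  newFile.setAccessTime(addCloseOp.atime, Snapshot.CURRENT_STATE_ID);
  newFile.setModificationTime(addCloseOp.mtime, Snapshot.CURRENT_STATE_ID);
}
{code}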



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290668#comment-14290668
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2034/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290666#comment-14290666
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2034/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290669#comment-14290669
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2034/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* pom.xml


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290667#comment-14290667
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2034/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290650#comment-14290650
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/84/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290647#comment-14290647
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/84/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290648#comment-14290648
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/84/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290649#comment-14290649
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/84/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290617#comment-14290617
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2015 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2015/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* pom.xml


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290614#comment-14290614
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2015 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2015/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290613#comment-14290613
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2015 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2015/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290615#comment-14290615
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2015 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2015/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290608#comment-14290608
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #80 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/80/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290607#comment-14290607
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #80 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/80/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290610#comment-14290610
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #80 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/80/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* pom.xml


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290606#comment-14290606
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #80 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/80/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290559#comment-14290559
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #817 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/817/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290562#comment-14290562
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #817 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/817/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* pom.xml


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290560#comment-14290560
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #817 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/817/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290558#comment-14290558
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #817 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/817/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290552#comment-14290552
 ] 

Kai Zheng commented on HDFS-7353:
-

Hi [~szetszwo],

Thanks for your thorough review.
bq. ec also can mean error correcting...
Could we discuss the naming in the overall master JIRA, HDFS-7285? Once we have 
a decision, we need to apply it consistently everywhere, not just in this part.
bq. Should the package be moved under hdfs?
Let's discuss this in HDFS-7337, the 'master' JIRA for this work. The raw 
erasure coder is part of the codec framework.
bq. By "The number of elements", do you mean "length in bytes"? Should it be 
long instead of int?
Sorry for the confusion; I need to update the obsolete comment. It is not a 
length in bytes but the number of striping units. A unit can be a byte, a 
chunk or buffer, or even a whole block.

I will address the other comments and provide a new patch. Good catches!
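
For illustration only (hypothetical names, not the API in the attached 
patches), the distinction might look like this in a coder interface:

{code}
import java.nio.ByteBuffer;

// Hypothetical sketch only -- the names here are invented for illustration and
// are not the HDFS-7353 API.
public interface RawCoderSketch {
  /**
   * Encode data units into parity units.
   *
   * @param inputs      data buffers, e.g. 6 of them for (6, 3) Reed-Solomon
   * @param outputs     parity buffers to fill, e.g. 3 of them
   * @param numElements how many striping units each buffer holds; a unit can
   *                    be a byte, a chunk/buffer, or a block, so this is a
   *                    unit count, not a byte length (hence int, not long)
   */
  void encode(ByteBuffer[] inputs, ByteBuffer[] outputs, int numElements);
}
{code}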

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HDFS-7353
> URL: https://issues.apache.org/jira/browse/HDFS-7353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch
>
>
> This is to abstract and define raw erasure coder API across different codes 
> algorithms like RS, XOR and etc. Such API can be implemented by utilizing 
> various library support, such as Intel ISA library and Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-01-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290550#comment-14290550
 ] 

Kai Zheng commented on HDFS-7285:
-

From HDFS-7353, [~szetszwo] posted a suggestion that we use an 'erasure' 
package name instead of 'ec':
bq. ec also can mean error correcting. How about renaming the package to 
io.erasure? Then, using EC inside the package won't be ambiguous.
I'm not sure about this, but we should discuss it overall and reach a 
conclusion. Once decided, we should use the name consistently across design, 
discussion, code, and so on. Currently we all use EC/ec to refer to erasure 
coding. Does it conflict with error correction? Is there any related work here 
on error correction? If not, I think we could keep EC, since we may not want 
to change it in all the places. Better naming is good, but consistency is what 
matters most for an effort this large.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding we can tolerate the loss of 4 
> blocks with a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and 
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be 
> used for cold files that will not be appended to anymore; 3) the pure Java 
> EC coding implementation is extremely slow in practical use. For these 
> reasons, it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design layers the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> such as caching, snapshots, encryption, and high availability. The design 
> will also support different EC coding schemes, implementations, and policies 
> for different deployment scenarios. By utilizing advanced libraries (e.g. 
> the Intel ISA-L library), an implementation can greatly improve the 
> performance of EC encoding/decoding and make the EC solution even more 
> attractive. We will post the design document soon. 
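
As a quick back-of-the-envelope check of the overhead figures quoted above 
(illustrative only, not part of the design document):

{code}
// Storage overhead = extra storage / original data.
// 3-replica keeps 2 extra copies of every block; RS(10, 4) keeps 4 parity
// blocks per 10 data blocks while tolerating the loss of any 4 blocks.
public class EcOverheadCheck {
  public static void main(String[] args) {
    int dataBlocks = 10, parityBlocks = 4;
    double rsOverhead = (double) parityBlocks / dataBlocks;   // 0.4 -> 40%
    double replicaOverhead = 3 - 1;                           // 2.0 -> 200%
    System.out.printf("RS(10,4): %.0f%%, 3-replica: %.0f%%%n",
        rsOverhead * 100, replicaOverhead * 100);
  }
}
{code}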



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2015-01-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290546#comment-14290546
 ] 

Kai Zheng commented on HDFS-7337:
-

Hi [~szetszwo],

It would be great if the erasure codec work could also be used in other 
contexts; in any case, it's better not to couple it tightly with HDFS. Keeping 
the code on the hadoop-common side would avoid a lot of basic bootstrap work 
when supporting and incorporating native libraries, as compression, 
encryption, and similar features do.



> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec.pdf
>
>
> According to HDFS-7285 and the design, this considers supporting multiple 
> Erasure Codecs via a pluggable approach. It allows defining and configuring 
> multiple codec schemas with different coding algorithms and parameters. The 
> resulting codec schemas can then be specified via a command tool for 
> different file folders. While designing and implementing such a pluggable 
> framework, we will also implement a concrete default codec (Reed-Solomon) to 
> prove the framework is useful and workable. A separate JIRA could be opened 
> for the RS codec implementation.
> Note HDFS-7353 will focus on the very low-level codec API and implementation 
> to make concrete vendor libraries transparent to the upper layer. This JIRA 
> focuses on the high-level pieces that interact with configuration, schemas, 
> and so on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2015-01-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290544#comment-14290544
 ] 

Kai Zheng commented on HDFS-7337:
-

Thanks [~andrew.wang] for the clarification. I agree.

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec.pdf
>
>
> According to HDFS-7285 and the design, this considers supporting multiple 
> Erasure Codecs via a pluggable approach. It allows defining and configuring 
> multiple codec schemas with different coding algorithms and parameters. The 
> resulting codec schemas can then be specified via a command tool for 
> different file folders. While designing and implementing such a pluggable 
> framework, we will also implement a concrete default codec (Reed-Solomon) to 
> prove the framework is useful and workable. A separate JIRA could be opened 
> for the RS codec implementation.
> Note HDFS-7353 will focus on the very low-level codec API and implementation 
> to make concrete vendor libraries transparent to the upper layer. This JIRA 
> focuses on the high-level pieces that interact with configuration, schemas, 
> and so on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7672) Handle write failure for EC blocks

2015-01-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290539#comment-14290539
 ] 

Kai Zheng commented on HDFS-7672:
-

Thanks for taking care of this. Since handling write failures is essential to 
the EC support in both the DataNode and the client (HDFS-7344 and HDFS-7545), 
we have already had some discussion about it. It looks like we need to update 
the designs to reflect that discussion. When we upload the initial code or 
design there, would you help review it and share your thoughts? I think that 
would help clarify how to collaborate and avoid duplicated effort in this area.

> Handle write failure for EC blocks
> --
>
> Key: HDFS-7672
> URL: https://issues.apache.org/jira/browse/HDFS-7672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> For (6, 3)-Reed-Solomon, a client writes to 6 data blocks and 3 parity blocks 
> concurrently. We need to handle datanode or network failures when writing an 
> EC BlockGroup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3750) API docs don't include HDFS

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290534#comment-14290534
 ] 

Hudson commented on HDFS-3750:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/83/])
HDFS-3750. API docs don't include HDFS (Jolly Chen via aw) (aw: rev 
6c3fec5ec25caabbd8c5ac795a5bc5229b5365de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* pom.xml


> API docs don't include HDFS
> ---
>
> Key: HDFS-3750
> URL: https://issues.apache.org/jira/browse/HDFS-3750
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Jolly Chen
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3750.patch
>
>
> [The javadocs|http://hadoop.apache.org/common/docs/current/api/index.html] 
> don't include HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290530#comment-14290530
 ] 

Hudson commented on HDFS-7320:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/83/])
HDFS-7320. The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
(Masatake Iwasaki via aw) (aw: rev 8f26d5a8a13539e8292c1cf7f141eff7e58984a5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml


> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290532#comment-14290532
 ] 

Hudson commented on HDFS-7667:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/83/])
HDFS-7667. Various typos and improvements to HDFS Federation doc  (Charles Lamb 
via aw) (aw: rev d411460e0d66b9b9d58924df295a957ba84b17d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Federation.apt.vm


> Various typos and improvements to HDFS Federation doc
> -
>
> Key: HDFS-7667
> URL: https://issues.apache.org/jira/browse/HDFS-7667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7667.000.patch, HDFS-7667.001.patch
>
>
> Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
> Federation doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7644) minor typo in HttpFS doc

2015-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290531#comment-14290531
 ] 

Hudson commented on HDFS-7644:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/83/])
HDFS-7644. minor typo in HttpFS doc (Charles Lamb via aw) (aw: rev 
5c93ca2f3cfd9ebcb98be89c3a238a36c03f4422)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm


> minor typo in HttpFS doc
> 
>
> Key: HDFS-7644
> URL: https://issues.apache.org/jira/browse/HDFS-7644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7644.000.patch
>
>
> In hadoop-httpfs/src/site/apt/index.apt.vm, s/seening/seen/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7674) Adding metrics for Erasure Coding

2015-01-24 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-7674:
---

 Summary: Adding metrics for Erasure Coding
 Key: HDFS-7674
 URL: https://issues.apache.org/jira/browse/HDFS-7674
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


As the design (in HDFS-7285) indicates, erasure coding involves non-trivial 
impact and workload for the NameNode, DataNode, and client; it also allows 
configurable and pluggable erasure codecs and schemas with flexible tradeoff 
options (see HDFS-7337). To support the necessary analysis and tuning, we 
should have meaningful metrics for the EC support, such as encoding/decoding 
tasks, recovered blocks, read/transferred data size, and computation time.
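
A minimal sketch of the kind of counters such metrics could cover (names and 
structure are hypothetical, using plain AtomicLong counters rather than any 
particular metrics framework):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch only -- not a proposed HDFS-7674 implementation.
public class ErasureCodingMetricsSketch {
  private final AtomicLong encodingTasks = new AtomicLong();
  private final AtomicLong decodingTasks = new AtomicLong();
  private final AtomicLong recoveredBlocks = new AtomicLong();
  private final AtomicLong bytesTransferred = new AtomicLong();
  private final AtomicLong codingTimeMillis = new AtomicLong();

  // Record one encoding task along with the data it touched and its duration.
  public void recordEncodingTask(long bytes, long millis) {
    encodingTasks.incrementAndGet();
    bytesTransferred.addAndGet(bytes);
    codingTimeMillis.addAndGet(millis);
  }

  // Record one block reconstructed during recovery.
  public void recordRecoveredBlock() {
    recoveredBlocks.incrementAndGet();
  }
}
{code}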



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)