[jira] [Commented] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035752#comment-16035752
 ] 

Hadoop QA commented on HDFS-11923:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11923 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871081/HDFS-11923.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e9d4848fb767 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 73ecb19 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19759/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19759/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19759/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19759/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



[jira] [Commented] (HDFS-11880) Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers

2017-06-02 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035705#comment-16035705
 ] 

Nandakumar commented on HDFS-11880:
---

The following classes have been moved from {{hadoop-hdfs-project/hadoop-hdfs}} to 
{{hadoop-hdfs-project/hadoop-hdfs-client}}:
* OzoneConsts
* OzoneAcl
* TestOzoneAcls

> Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo 
> from KSM wrappers
> ---
>
> Key: HDFS-11880
> URL: https://issues.apache.org/jira/browse/HDFS-11880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11880-HDFS-7240.000.patch
>
>
> KSM wrappers like KsmBucketInfo and KsmBucketArgs are using protobuf formats 
> such as StorageTypeProto and OzoneAclInfo; this jira is to remove that 
> dependency and use {{StorageType}} and {{OzoneAcl}} instead.
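For illustration, a minimal sketch of the wrapper change the description asks
for: keep the native type in the wrapper and convert to protobuf only at the
RPC boundary. The class and enum shapes below are assumptions, not the patch
itself.

class KsmBucketInfoSketch {
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }        // native type
  enum StorageTypeProto { DISK, SSD, ARCHIVE, RAM_DISK }   // protobuf type

  private final StorageType storageType;                   // store native type

  KsmBucketInfoSketch(StorageType storageType) {
    this.storageType = storageType;
  }

  // Conversion happens only when crossing the RPC boundary.
  StorageTypeProto getProtobuf() {
    return StorageTypeProto.valueOf(storageType.name());
  }
}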






[jira] [Updated] (HDFS-11880) Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo from KSM wrappers

2017-06-02 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11880:
--
Attachment: HDFS-11880-HDFS-7240.000.patch

> Ozone: KSM: Remove protobuf formats such as StorageTypeProto and OzoneAclInfo 
> from KSM wrappers
> ---
>
> Key: HDFS-11880
> URL: https://issues.apache.org/jira/browse/HDFS-11880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11880-HDFS-7240.000.patch
>
>
> KSM wrappers like KsmBucketInfo and KsmBucketArgs are using protobuf formats 
> such as StorageTypeProto and OzoneAclInfo; this jira is to remove that 
> dependency and use {{StorageType}} and {{OzoneAcl}} instead.






[jira] [Updated] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11771:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~xyao], thanks for the review comments. [~msingh], thanks for the contribution. 
I have committed this to the feature branch.

> Ozone: KSM:  Add checkVolumeAccess
> --
>
> Key: HDFS-11771
> URL: https://issues.apache.org/jira/browse/HDFS-11771
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11771-HDFS-7240.001.patch, 
> HDFS-11771-HDFS-7240.002.patch, HDFS-11771-HDFS-7240.003.patch, 
> HDFS-11771-HDFS-7240.004.patch, HDFS-11771-HDFS-7240.005.patch, 
> HDFS-11771-HDFS-7240.006.patch, HDFS-11771-HDFS-7240.007.patch
>
>
> Checks if the caller has access to a given volume. This call supports the 
> ACLs specified in the ozone rest protocol documentation.
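For reference, a minimal sketch of what such a check could look like; the
signature and ACL shape below are assumptions based on this description, not
the committed API.

import java.io.IOException;

interface VolumeAccessCheckSketch {
  /**
   * Hypothetical signature: returns true iff the requested rights
   * (e.g. "read", "write") are granted to the user or group on the volume.
   */
  boolean checkVolumeAccess(String volume, String userOrGroup, String rights)
      throws IOException;
}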






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035675#comment-16035675
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestCBlockServer |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871065/HDFS-11920-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 81c4e638fbf5 

[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035663#comment-16035663
 ] 

George Huang commented on HDFS-11912:
-

10.
  /** Delete an existing test directory */
  private void deleteTestDir() throws IOException {
    if (snapshottableDirectories.size() > 0) {
      int index = GENERATOR.nextInt(snapshottableDirectories.size());
      Path deleteDir = snapshottableDirectories.get(index);

      if (!pathToSnapshotsMap.containsKey(deleteDir)) {
        .. .. // deletion
      }
deleteTestDir and renameTestDir follow the above model, where the deletion or 
rename task is only performed when the directory does not exist in 
pathToSnapshotsMap. But pathToSnapshotsMap will always have this home dir 
after the first snapshot is created. So, will deletions and renames always be 
skipped after the first snapshot is taken?

=> See response above. pathToSnapshotsMap will have multiple entries, as 
snapshottableDirectories will have multiple entries (size() > 0), as explained 
earlier. This was verified by multiple local runs.

Many thanks!

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11742) Improve balancer usability after HDFS-8818

2017-06-02 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035658#comment-16035658
 ] 

Vinod Kumar Vavilapalli commented on HDFS-11742:


bq. If 2.8.1 is put up for vote with this, I will have to -1 the release.
bq. It will affect many users, if it is included in a release as is. I will -1 
the release if the issue is not properly addressed.
I'm pushing for the next 2.8 maintenance release as well as 2.7.x. [~kihwal] / 
[~szetszwo], can you please help us reach convergence? Thanks.

> Improve balancer usability after HDFS-8818
> --
>
> Key: HDFS-11742
> URL: https://issues.apache.org/jira/browse/HDFS-11742
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
>  Labels: release-blocker
> Attachments: balancer2.8.png, HDFS-11742.branch-2.8.patch, 
> HDFS-11742.branch-2.patch, HDFS-11742.trunk.patch, HDFS-11742.v2.trunk.patch
>
>
> We ran the 2.8 balancer with HDFS-8818 on a 280-node and a 2,400-node cluster. 
> In both cases, it would hang forever after two iterations. The two iterations 
> were also moving data at a significantly lower rate. The hang itself is 
> fixed by HDFS-11377, but the design limitation remains, so the balancer 
> throughput actually ends up lower.
> Instead of reverting HDFS-8818 as originally suggested, I am making a small 
> change to make it less error-prone and more usable.






[jira] [Commented] (HDFS-11743) Revert HDFS-7933 from branch-2.7 (fsck reporting decommissioning replicas)

2017-06-02 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035648#comment-16035648
 ] 

Vinod Kumar Vavilapalli commented on HDFS-11743:


[~zhz], I'm pushing for a 2.7.4 release; can you explain why it has to be reverted?



> Revert HDFS-7933 from branch-2.7 (fsck reporting decommissioning replicas)
> --
>
> Key: HDFS-11743
> URL: https://issues.apache.org/jira/browse/HDFS-11743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
>  Labels: release-blocker
>







[jira] [Commented] (HDFS-11771) Ozone: KSM: Add checkVolumeAccess

2017-06-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035639#comment-16035639
 ] 

Anu Engineer commented on HDFS-11771:
-

+1,  I will commit this shortly. 

> Ozone: KSM:  Add checkVolumeAccess
> --
>
> Key: HDFS-11771
> URL: https://issues.apache.org/jira/browse/HDFS-11771
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11771-HDFS-7240.001.patch, 
> HDFS-11771-HDFS-7240.002.patch, HDFS-11771-HDFS-7240.003.patch, 
> HDFS-11771-HDFS-7240.004.patch, HDFS-11771-HDFS-7240.005.patch, 
> HDFS-11771-HDFS-7240.006.patch, HDFS-11771-HDFS-7240.007.patch
>
>
> Checks if the caller has access to a given volume. This call supports the 
> ACLs specified in the ozone rest protocol documentation.






[jira] [Commented] (HDFS-11708) Positional read will fail if replicas moved to different DNs after stream is opened

2017-06-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035632#comment-16035632
 ] 

Konstantin Shvachko commented on HDFS-11708:


The latest patch, HDFS-11708-07.patch, is not applying cleanly to trunk; 
something is wrong with TestBlockReplacement.
The other versions seem to be good to go.

> Positional read will fail if replicas moved to different DNs after stream is 
> opened
> ---
>
> Key: HDFS-11708
> URL: https://issues.apache.org/jira/browse/HDFS-11708
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11708-01.patch, HDFS-11708-02.patch, 
> HDFS-11708-03.patch, HDFS-11708-04.patch, HDFS-11708-05.patch, 
> HDFS-11708-07.patch, HDFS-11708-branch-2-07.patch, 
> HDFS-11708-branch-2.7-07.patch, HDFS-11708-branch-2.8-07.patch, 
> HDFS-11708-HDFS-11898-06.patch
>
>
> Scenario:
> 1. File was written to DN1 and DN2 with RF=2.
> 2. File stream opened for reading and kept open. Block locations are [DN1, DN2].
> 3. One of the replicas (DN2) moved to another datanode (DN3) due to datanode 
> death/balancing/etc.
> 4. Latest block locations in the NameNode will be DN1 and DN3, in the 'same order'.
> 5. DN1 went down, but is not yet detected as dead by the NameNode.
> 6. Client starts reading using the positional read API "read(pos, buf[], offset, 
> length)".
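For reference, a minimal sketch of the positional-read call the scenario
exercises (a pread does not move the stream's current offset); the path and
buffer size here are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PreadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/testfile"))) {
      byte[] buf = new byte[4096];
      // Positional read against the block locations cached at open time;
      // per the scenario above, if those locations are stale and the first
      // datanode is down, this call can fail.
      int n = in.read(0L, buf, 0, buf.length);
      System.out.println("read " + n + " bytes");
    }
  }
}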






[jira] [Commented] (HDFS-11921) Ozone: KSM: Unable to put keys with zero length

2017-06-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035630#comment-16035630
 ] 

Anu Engineer commented on HDFS-11921:
-

The fix for HDFS-11796 may have fixed this issue: we added a check so that we 
always request at least a byte-sized block from SCM. Keeping this JIRA open to 
validate this issue.
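A minimal sketch of the guard described above; the helper name is hypothetical, 
not the actual code.

class BlockAllocSketch {
  // Hypothetical helper illustrating the fix: clamp the requested allocation
  // to at least 1 byte so zero-length keys still get a block and their key
  // metadata can be created.
  static long blockSizeToRequest(long keyDataSize) {
    return Math.max(1L, keyDataSize);
  }
}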


> Ozone: KSM: Unable to put keys with zero length
> ---
>
> Key: HDFS-11921
> URL: https://issues.apache.org/jira/browse/HDFS-11921
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
>
> As part of working on HDFS-11909, I was trying to put zero-length keys. I 
> found that put key refuses to do that. Here is the call trace:
> bq.   at ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock
> We check if the block size is greater than 0, which makes sense, since we 
> should not call into SCM to allocate a block of zero size.
> However, these two calls are invoked for creating the key, so that the 
> metadata for the key can be created; we should probably take care of this 
> behavior here.
> bq. ksm.KeyManagerImpl.allocateKey
> bq. ksm.KeySpaceManager.allocateKey(KeySpaceManager.java:428)
> Another way to fix this might be to always allocate a block of at least 1 
> byte, which might be easier than special-casing the code.
> [~vagarychen] Would you like to fix this in the next patch you are working 
> on?






[jira] [Updated] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11923:
--
Attachment: HDFS-11923.002.patch

Fixed checkstyle in the v002 patch.

> Stress test of DFSNetworkTopology
> -
>
> Key: HDFS-11923
> URL: https://issues.apache.org/jira/browse/HDFS-11923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11923.001.patch, HDFS-11923.002.patch
>
>
> I wrote a stress test with {{DFSNetworkTopology}} to verify its correctness 
> under a huge number of datanode changes, e.g., datanode insert/delete, storage 
> addition/removal, etc. The goal is to show that the topology maintains the 
> correct counters at all times. The test is written so that, unless manually 
> terminated, it keeps randomly performing the operations nonstop (and 
> because of this, the test is ignored in the patch).
> My local test lasted 40 min before I stopped it; it involved more than one 
> million datanode changes, and no error happened. We believe this should be 
> sufficient to show the correctness of {{DFSNetworkTopology}}.
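For illustration, a minimal sketch of the stress-loop pattern the description
outlines; the operation stubs and the verification step are assumptions, not
the test in the patch.

import java.util.Random;

public class TopologyStressSketch {
  public static void main(String[] args) {
    Random random = new Random();
    while (true) {                       // runs until manually terminated
      switch (random.nextInt(4)) {
        case 0:  /* insert a random datanode */          break;
        case 1:  /* delete a random datanode */          break;
        case 2:  /* add a storage to a random node */    break;
        default: /* remove a storage from a node */      break;
      }
      // After each change, verify the topology's per-level storage-type
      // counters against a brute-force recount; fail fast on any mismatch.
    }
  }
}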






[jira] [Updated] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11796:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~vagarychen], [~cheersyang], and [~xyao], thanks for the reviews and comments. 
[~msingh], thanks for the contribution. I have committed this patch to the 
feature branch.

> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.
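A minimal sketch of the expected fix: the builder should propagate the handler
type into the configuration instead of dropping it. The key comes from the
issue title; the builder shape and default value are assumptions.

import org.apache.hadoop.conf.Configuration;

class MiniClusterBuilderSketch {
  private final Configuration conf = new Configuration();
  private String handlerType = "distributed";    // assumed default

  MiniClusterBuilderSketch setHandlerType(String type) {
    this.handlerType = type;                     // must not be ignored
    return this;
  }

  Configuration build() {
    conf.set("ozone.handler.type", handlerType); // the missing propagation
    return conf;
  }
}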






[jira] [Commented] (HDFS-11898) DFSClient#isHedgedReadsEnabled() should be per client flag

2017-06-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035615#comment-16035615
 ] 

Konstantin Shvachko commented on HDFS-11898:


I don't think the new variable {{hedgedReadEnabled}} is needed. You should reset 
{{HEDGED_READ_THREAD_POOL}} as I commented above.
I am not sure you addressed any of my comments in the last patch.
I think HDFS-11900 should be combined with this issue.

> DFSClient#isHedgedReadsEnabled() should be per client flag 
> ---
>
> Key: HDFS-11898
> URL: https://issues.apache.org/jira/browse/HDFS-11898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11898-01.patch, HDFS-11898-02.patch
>
>
> DFSClient#isHedgedReadsEnabled() returns a value based on the static 
> {{HEDGED_READ_THREAD_POOL}}. 
> Hence, if any client has initialized this in the JVM, all remaining clients' 
> reads will go through hedged reads.
> This flag should be a per-client value.
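A minimal sketch of the distinction being discussed, with hypothetical names
(not the actual DFSClient code): a static pool makes the enabled state
JVM-wide, while an instance flag keeps it per client.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class HedgedReadSketch {
  // JVM-wide: once any client creates the pool, a check based on it
  // reports "enabled" for every client in the process.
  private static ExecutorService HEDGED_READ_THREAD_POOL;

  // Per-client flag, as the issue proposes.
  private final boolean hedgedReadEnabled;

  HedgedReadSketch(boolean enableHedgedReads, int poolSize) {
    this.hedgedReadEnabled = enableHedgedReads;
    if (enableHedgedReads && HEDGED_READ_THREAD_POOL == null) {
      HEDGED_READ_THREAD_POOL = Executors.newFixedThreadPool(poolSize);
    }
  }

  boolean isHedgedReadsEnabled() {
    // Buggy variant: return HEDGED_READ_THREAD_POOL != null;
    return hedgedReadEnabled;   // per-client decision
  }
}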






[jira] [Commented] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035607#comment-16035607
 ] 

Hadoop QA commented on HDFS-11923:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 31 unchanged - 0 fixed = 39 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11923 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871056/HDFS-11923.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c691cc8d978 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 73ecb19 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19757/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19757/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035604#comment-16035604
 ] 

Hadoop QA commented on HDFS-11796:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m  
3s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
17s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11796 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871049/HDFS-11796-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a8bdf27fff93 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5cdd880 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19756/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19756/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19756/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-11777) Ozone: KSM: add deleteBucket

2017-06-02 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11777:
--
Attachment: HDFS-11777-HDFS-7240.000.patch

> Ozone: KSM: add deleteBucket
> 
>
> Key: HDFS-11777
> URL: https://issues.apache.org/jira/browse/HDFS-11777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Nandakumar
> Attachments: HDFS-11777-HDFS-7240.000.patch
>
>
> Allows a bucket to be deleted if there are no keys in the bucket.
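A minimal sketch of the delete-only-if-empty rule above; the in-memory map
stands in for the real KSM metadata store and is purely illustrative.

import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class BucketStoreSketch {
  private final Map<String, List<String>> keysByBucket = new HashMap<>();

  void deleteBucket(String bucket) throws IOException {
    List<String> keys =
        keysByBucket.getOrDefault(bucket, Collections.emptyList());
    if (!keys.isEmpty()) {
      throw new IOException("Bucket is not empty: " + bucket);
    }
    keysByBucket.remove(bucket);   // delete only succeeds for empty buckets
  }
}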






[jira] [Commented] (HDFS-11777) Ozone: KSM: add deleteBucket

2017-06-02 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035602#comment-16035602
 ] 

Nandakumar commented on HDFS-11777:
---

Initial version of the patch uploaded; please review.

Thanks.

> Ozone: KSM: add deleteBucket
> 
>
> Key: HDFS-11777
> URL: https://issues.apache.org/jira/browse/HDFS-11777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Nandakumar
> Attachments: HDFS-11777-HDFS-7240.000.patch
>
>
> Allows a bucket to be deleted if there are no keys in the bucket.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035598#comment-16035598
 ] 

George Huang commented on HDFS-11912:
-

9.
There is only one entry in the snapshottableDirectories list, which is the home 
directory. Still, many places in the test try to pick a random entry from 
this list. Is it really needed?

=> See response for comment #8.

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035593#comment-16035593
 ] 

George Huang commented on HDFS-11912:
-

8.
// Get list of snapshottable directories
SnapshottableDirectoryStatus[] snapshottableDirectoryStatus =
    hdfs.getSnapshottableDirListing();
for (SnapshottableDirectoryStatus ssds : snapshottableDirectoryStatus) {
  snapshottableDirectories.add(ssds.getFullPath());
}
The above code is a no-op, as the test hasn't allowed snapshots on any 
directories yet. Can we remove the block?
=> Actually, files and directories were already created by the call to 
createFiles() before the lines listed above. At this point we are ready to get 
the list of snapshottable directories.

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-7933) fsck should also report decommissioning replicas.

2017-06-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035587#comment-16035587
 ] 

Konstantin Shvachko commented on HDFS-7933:
---

Maybe we should only fix the reporting, that is, make it report as before but 
keep the compute logic in place. Re-purpose HDFS-11743 to do just that?

> fsck should also report decommissioning replicas. 
> --
>
> Key: HDFS-7933
> URL: https://issues.apache.org/jira/browse/HDFS-7933
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-7933.00.patch, HDFS-7933.01.patch, 
> HDFS-7933.02.patch, HDFS-7933.03.patch, HDFS-7933-branch-2.7.00.patch
>
>
> Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
> all replicas on the decommissioning nodes, it will be marked as missing, 
> which is alarming for the admins, although the system will replicate them 
> before nodes are decommissioned.
> Fsck output should also show decommissioning replicas along with the live 
> replicas.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035580#comment-16035580
 ] 

George Huang commented on HDFS-11912:
-

7.
private String GetNewPathString(String originalString,
Method name should be in camelCase, like getNewPathString().

=> Fixed. Many thanks!

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035572#comment-16035572
 ] 

George Huang commented on HDFS-11912:
-

6.
// Create files in a directory with random depth, ranging from 0-10.
for (int i = 0; i < TOTAL_BLOCKS; i += fileLength) {
Is TOTAL_BLOCKS the total number of files?

=> No, the number of files == TOTAL_BLOCKS / fileLength
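Spelling the arithmetic out with assumed values (TOTAL_BLOCKS and fileLength
below are illustrative, not the test's actual constants): stepping by
fileLength over TOTAL_BLOCKS creates TOTAL_BLOCKS / fileLength files.

public class FileCountSketch {
  public static void main(String[] args) {
    final int TOTAL_BLOCKS = 100;   // assumed value
    final int fileLength = 5;       // assumed value
    int files = 0;
    for (int i = 0; i < TOTAL_BLOCKS; i += fileLength) {
      files++;                      // one file created per iteration
    }
    System.out.println(files + " == " + (TOTAL_BLOCKS / fileLength)); // 20 == 20
  }
}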

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035547#comment-16035547
 ] 

George Huang commented on HDFS-11912:
-

4.
// Set
Random RANDOM = new Random();
long seed = RANDOM.nextLong();
GENERATOR = new Random(seed);
Any specific reason why a simple seed like System.currentTimeMillis() would not 
be useful here? Here the seed is generated from RANDOM, which in turn is not 
seeded. Also, RANDOM need not be all caps.

=> Removed 'RANDOM', using System.currentTimeMillis() instead. Thanks!
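A minimal sketch of the agreed approach: seed the generator from the clock and
log the seed so a failing randomized run can be replayed. Names here are
illustrative.

import java.util.Random;

public class SeedSketch {
  public static void main(String[] args) {
    long seed = System.currentTimeMillis();
    System.out.println("Random test seed = " + seed);  // log for replay
    Random generator = new Random(seed);
    System.out.println("first draw: " + generator.nextInt(100));
  }
}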

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035540#comment-16035540
 ] 

George Huang commented on HDFS-11912:
-

5.
int fileLen = new Random().nextInt(MAX_NUM_FILE_LENGTH);
createFiles(testDirString, fileLen);

The GENERATOR random can be used here instead of creating a new one.

=> Fixed. Thanks.

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035535#comment-16035535
 ] 

George Huang commented on HDFS-11912:
-

2.
The class members below need not be static.

private static MiniDFSCluster cluster;
=> Fixed

private static DistributedFileSystem hdfs;
=> Fixed

private static Random GENERATOR = null;
=> This one needs to be accessed from an enum, so it needs to be static?


> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.






[jira] [Commented] (HDFS-11737) Backport HDFS-7964 to branch-2.7: add support for async edit logging

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035515#comment-16035515
 ] 

Hadoop QA commented on HDFS-11737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
4s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 1231 unchanged - 37 fixed = 1242 total (was 1268) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1523 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
37s{color} | {color:red} The patch 124 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 33s{color} 
| {color:red} bkjournal in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035480#comment-16035480
 ] 

Xiaoyu Yao commented on HDFS-11796:
---

LGTM. +1 for patch v2.

> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.






[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Attachment: HDFS-11920-HDFS-7240.002.patch

Posted v002 patch to fix the findbugs warning and the ASF license warning.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads to this single SCM block. This works fine for keys with 
> reasonably small data sizes. However, if the data is too large (e.g., it does 
> not even fit into a single container), then we need to be able to partition 
> the key data into multiple blocks, each in one container. This JIRA changes 
> the key-related classes to support this.
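A minimal sketch of the partitioning idea in the description: split a key's
data length into block-sized allocations, one per container. The names and
long-based sizes are assumptions, not the KSM/SCM code.

import java.util.ArrayList;
import java.util.List;

public class KeyPartitionSketch {
  /** Returns the size of each SCM block needed to hold keyLen bytes. */
  static List<Long> partition(long keyLen, long blockSize) {
    List<Long> blocks = new ArrayList<>();
    for (long remaining = keyLen; remaining > 0; remaining -= blockSize) {
      blocks.add(Math.min(remaining, blockSize));  // last block may be short
    }
    return blocks;
  }

  public static void main(String[] args) {
    System.out.println(partition(10L, 4L));   // prints [4, 4, 2]
  }
}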






[jira] [Updated] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11923:
--
Status: Patch Available  (was: Open)

> Stress test of DFSNetworkTopology
> -
>
> Key: HDFS-11923
> URL: https://issues.apache.org/jira/browse/HDFS-11923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11923.001.patch
>
>
> I wrote a stress test with {{DFSNetworkTopology}} to verify its correctness 
> under a huge number of datanode changes, e.g., datanode insertion/deletion, 
> storage addition/removal, etc. The goal is to show that the topology 
> maintains the correct counters at all times. The test is written so that, 
> unless manually terminated, it keeps randomly performing these operations 
> nonstop (and because of this, the test is ignored in the patch).
> My local run lasted 40 minutes before I stopped it; it involved more than 
> one million datanode changes, and no error occurred. We believe this should 
> be sufficient to show the correctness of {{DFSNetworkTopology}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11923:
--
Attachment: HDFS-11923.001.patch

> Stress test of DFSNetworkTopology
> -
>
> Key: HDFS-11923
> URL: https://issues.apache.org/jira/browse/HDFS-11923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11923.001.patch
>
>
> I wrote a stress test with {{DFSNetworkTopology}} to verify its correctness 
> under a huge number of datanode changes, e.g., datanode insertion/deletion, 
> storage addition/removal, etc. The goal is to show that the topology 
> maintains the correct counters at all times. The test is written so that, 
> unless manually terminated, it keeps randomly performing these operations 
> nonstop (and because of this, the test is ignored in the patch).
> My local run lasted 40 minutes before I stopped it; it involved more than 
> one million datanode changes, and no error occurred. We believe this should 
> be sufficient to show the correctness of {{DFSNetworkTopology}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11923:
-

 Summary: Stress test of DFSNetworkTopology
 Key: HDFS-11923
 URL: https://issues.apache.org/jira/browse/HDFS-11923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


I wrote a stress test with {{DFSNetworkTopology}} to verify its correctness 
under a huge number of datanode changes, e.g., datanode insertion/deletion, 
storage addition/removal, etc. The goal is to show that the topology maintains 
the correct counters at all times. The test is written so that, unless manually 
terminated, it keeps randomly performing these operations nonstop (and because 
of this, the test is ignored in the patch).

My local run lasted 40 minutes before I stopped it; it involved more than one 
million datanode changes, and no error occurred. We believe this should be 
sufficient to show the correctness of {{DFSNetworkTopology}}.
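
For a concrete picture, such a randomized stress loop would have roughly this 
shape (a minimal sketch only; {{randomDatanode}}, {{removeRandomDatanode}}, 
{{toggleRandomStorage}}, and {{verifyCounters}} are hypothetical helpers, not 
the code in the attached patch):

{code}
// Minimal sketch of a randomized topology stress loop (hypothetical helpers).
Random random = new Random();
DFSNetworkTopology topology = DFSNetworkTopology.getInstance(new Configuration());
while (true) { // runs nonstop until manually terminated
  switch (random.nextInt(3)) {
    case 0: // insert a new datanode at a random rack
      topology.add(randomDatanode(random));
      break;
    case 1: // delete a previously inserted datanode
      removeRandomDatanode(topology, random);
      break;
    default: // add or remove a storage on an existing datanode
      toggleRandomStorage(topology, random);
      break;
  }
  verifyCounters(topology); // assert per-storage-type subtree counters
}
{code}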



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035405#comment-16035405
 ] 

Chen Liang commented on HDFS-11796:
---

Thanks [~anu] for the updates! +1 on the v002 patch.

> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-06-02 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035397#comment-16035397
 ] 

Erik Krogen commented on HDFS-11472:


[~jojochuang] no problem! So I was actually wondering if, following the same 
reasoning as {{recoverRbwImpl}}, it may be better for 
{{initReplicaRecoveryImpl}} to check {{blockDataLength}} if {{bytesOnDisk}} is 
unexpected, something like this:

{code}
  //check replica bytes on disk.
  long bytesOnDisk = replica.getBytesOnDisk();
  if (bytesOnDisk < replica.getVisibleLength()) {
long dataLength = replica.getBlockDataLength();
if (bytesOnDisk != dataLength) {
  LOG.warn("replica recovery: replica.getBytesOnDisk() = " +
  replica.getBytesOnDisk() + " != " +
  "replica.getBlockDataLength() = " + dataLength +
  ", replica = " + replica);
  rip.setLastChecksumAndDataLen(dataLength, null);
}
if (replica.getBytesOnDisk() < replica.getVisibleLength()) {
  throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN:"
  + " getBytesOnDisk() < getVisibleLength(), rip=" + replica);
}
  }
{code}
Do you think this makes sense?

> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> URL: https://issues.apache.org/jira/browse/HDFS-11472
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11472.001.patch, HDFS-11472.002.patch, 
> HDFS-11472.003.patch, HDFS-11472.testcase.patch
>
>
> We observed a case where a replica's on disk length is less than acknowledged 
> length, breaking the assumption in recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from 
> datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes() = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()   = /data/6/hdfs/datanode/current
>   getBlockFile()= 
> /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within 
> {{BlockReceiver#receivePacket}}, the in-memory replica on disk length may not 
> be updated, but the data is written to disk anyway.
> For example, here's one exception we observed
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> 

[jira] [Commented] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035375#comment-16035375
 ] 

Hadoop QA commented on HDFS-11472:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
55s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 120 unchanged - 1 fixed = 122 total (was 121) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11472 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871029/HDFS-11472.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f98298ced575 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 73ecb19 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19755/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19755/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19755/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19755/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> 

[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035371#comment-16035371
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 4 new + 10 
unchanged - 0 fixed = 14 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.web.storage.ChunkGroupInputStream.currentStreamIndex; 
locked 57% of time  Unsynchronized access at ChunkGroupInputStream.java:57% of 
time  Unsynchronized access at ChunkGroupInputStream.java:[line 43] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.web.storage.ChunkGroupInputStream$ChunkInputStreamEntry.currentPosition;
 locked 80% of time  Unsynchronized access at ChunkGroupInputStream.java:80% of 
time  Unsynchronized access at ChunkGroupInputStream.java:[line 124] |
|  |  Should 

[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035364#comment-16035364
 ] 

Hadoop QA commented on HDFS-11914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871020/HDFS-11914.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a71e1a5007c2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 73ecb19 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19752/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19752/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19752/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more diagnosis info for fsimage transfer failure.
> 

[jira] [Comment Edited] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035359#comment-16035359
 ] 

Anu Engineer edited comment on HDFS-11796 at 6/2/17 8:25 PM:
-

with this new patch all tests pass while running this command. *mvn test 
-Dtest='org.apache.hadoop.ozone.\*\*.\*'* 


was (Author: anu):
with this new patch all test pass while running this command. *mvn test 
-Dtest='org.apache.hadoop.ozone.\*\*.\*'* 

> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035359#comment-16035359
 ] 

Anu Engineer commented on HDFS-11796:
-

with this new patch all test pass while running this command. *mvn test 
-Dtest='org.apache.hadoop.ozone.\*\*.\*'* 

> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11796) Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11796:

Attachment: HDFS-11796-HDFS-7240.002.patch

This patch extends the changes made by [~msingh] to fix the failing test cases.

The changes are:
# {{KeyManagerImpl.java}}
* Removed failing the call if the key already exists; filed HDFS-11922 to track 
the issue of orphaned blocks, and made putKey behave per the original Ozone 
spec, i.e., an overwrite of a key works.
* Made sure that an allocateBlock request has a size of at least 1 byte. A zero 
size was causing a failure in a test case.
* Changed the catch of DBException to a catch of Exception. When we passed a 0 
size, the precondition check failed, but we were not propagating that failure 
to the caller. This fixes that, so the test case fails fast if there is an 
exception, since it gets propagated to the REST client. (A minimal sketch of 
this allocate path follows below.)
# {{MiniOzoneCluster.java}}
Original change made by [~msingh].
# {{TestKeySpaceManager.java}}
In order for {{TestOzoneRestWithMiniCluster.java}} to pass with this change, we 
needed to make sure that overwrite works. The key space manager test assumed 
that overwrite would fail; changed that assumption in the tests.
# {{TestOzoneRestWithMiniCluster.java}}
Added a wait-for-ozone-cluster call.
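
For illustration, the allocate path after these changes might look roughly like 
this (a hedged sketch only; {{scmBlockClient}} and {{buildKeyInfo}} are 
hypothetical names, and the surrounding types are illustrative, not the literal 
patch code):

{code}
// Sketch: clamp the block size and propagate failures to the caller.
public KsmKeyInfo allocateKey(KsmKeyArgs args) throws IOException {
  // never ask SCM for a zero-size block
  long size = Math.max(1, args.getDataSize());
  try {
    AllocatedBlock block = scmBlockClient.allocateBlock(size);
    return buildKeyInfo(args, block); // hypothetical helper
  } catch (Exception ex) { // broadened from catching only DBException
    // fail fast: the error now propagates to the REST client
    throw new IOException("Key allocation failed", ex);
  }
}
{code}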




> Ozone: MiniOzoneCluster should set "ozone.handler.type" key correctly
> -
>
> Key: HDFS-11796
> URL: https://issues.apache.org/jira/browse/HDFS-11796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11796-HDFS-7240.001.patch, 
> HDFS-11796-HDFS-7240.002.patch
>
>
> MiniOzoneCluster currently does not set the "ozone.handler.type" key correctly.
> The handler type passed with setHandlerType is silently ignored.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035349#comment-16035349
 ] 

Wei-Chiu Chuang commented on HDFS-11914:


bq. About why using String.valueOf(), it's because I saw some code in the same 
method check if the fsImageName is null, which means there is a chance the 
fsimageName is null.
That shouldn't be necessary. The following code:
{code}
  String x = null;
  throw new IOException("abc" + x + "def");
{code}
Gives me:
{noformat}
java.io.IOException: abcnulldef
{noformat}
It doesn't cause an NPE; string concatenation converts a null reference to the 
literal "null", so the explicit String.valueOf() is unnecessary.

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch, 
> HDFS-11914.003.patch
>
>
> Hit an fsimage download problem:
> The client tries to download the fsimage and gets:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically, the client does not get enough fsimage data and finishes 
> prematurely without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report:
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add more information to the logs to help debug the 
> situation. Specifically, report the stack trace when the connection is 
> closed, how much data had been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11921) Ozone: KSM: Unable to put keys with zero length

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-11921:
---

Assignee: Anu Engineer

> Ozone: KSM: Unable to put keys with zero length
> ---
>
> Key: HDFS-11921
> URL: https://issues.apache.org/jira/browse/HDFS-11921
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
>
> As part of working on HDFS-11909, I was trying to put zero-length keys. I 
> found that put key refuses to do that. Here is the call trace:
> bq.   at ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock 
> Here we check that the block size is greater than 0, which makes sense since 
> we should not call into SCM to allocate a block of zero size.
> However, these two calls are invoked when creating the key so that the 
> metadata for the key can be created; we should probably take care of this 
> behavior there:
> bq. ksm.KeyManagerImpl.allocateKey
> bq. ksm.KeySpaceManager.allocateKey(KeySpaceManager.java:428)
> Another way to fix this might be to always allocate a block of at least 1 
> byte, which might be easier than special-casing the code.
> [~vagarychen] Would you like to fix this in the next patch you are working 
> on?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-02 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035272#comment-16035272
 ] 

Manoj Govindassamy commented on HDFS-11912:
---

continuation of my review comments...
8. 
{noformat}
// Get list of snapshottable directories
SnapshottableDirectoryStatus[] snapshottableDirectoryStatus = 
hdfs.getSnapshottableDirListing();
for (SnapshottableDirectoryStatus ssds : snapshottableDirectoryStatus) {
  snapshottableDirectories.add(ssds.getFullPath());
}
{noformat}
The above code is a no-op, as the test hasn't allowed snapshots on any 
directories yet. Can we remove this block?

9.
There is only one entry in the {{snapshottableDirectories}} list, which is the 
home directory. Still, many places in the test pick a random entry from this 
list. Is that really needed?

10.
{noformat}
  /** Delete an existing test directory */
  private void deleteTestDir() throws IOException {
if (snapshottableDirectories.size() > 0) {
  int index = GENERATOR.nextInt(snapshottableDirectories.size());
  Path deleteDir = snapshottableDirectories.get(index);

  if (!pathToSnapshotsMap.containsKey(deleteDir)) {
   .. .. // deletion
  }
{noformat}
{{deleteTestDir}} and {{renameTestDir}} follow the above model, where the 
deletion or rename task is only performed when the directory does not exist in 
{{pathToSnapshotsMap}}. But {{pathToSnapshotsMap}} will always contain this 
home dir after the first snapshot is created. So deletions and renames will 
always be skipped after the first snapshot is taken?

11. Can you please print the aggregated stats at the end of the test? There is 
a lot of logging happening for every task, and we might miss the overall 
picture. It would be good to print overall stats like total dir creates, 
total dir deletes, total file creates, renames, etc., before every cluster 
evaluation or at the end of the test (a minimal sketch follows after these 
comments).

12. Please take care of the [checkstyle 
issues|https://builds.apache.org/job/PreCommit-HDFS-Build/19732/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt].
Most of them are lines exceeding 80 chars or improper indentation.
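
Regarding comment 11, a minimal sketch of what the aggregated stats could look 
like (hypothetical field names, not code from the patch):

{code}
// Hypothetical aggregate counters, dumped before each cluster evaluation
// and once at the end of the test.
private final AtomicLong totalDirCreates = new AtomicLong();
private final AtomicLong totalDirDeletes = new AtomicLong();
private final AtomicLong totalFileCreates = new AtomicLong();
private final AtomicLong totalRenames = new AtomicLong();

private void printAggregatedStats() {
  LOG.info("Aggregated stats: dir creates=" + totalDirCreates.get()
      + ", dir deletes=" + totalDirDeletes.get()
      + ", file creates=" + totalFileCreates.get()
      + ", renames=" + totalRenames.get());
}
{code}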


> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Priority: Minor
> Attachments: HDFS-11912.001.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11922) Ozone: KSM: Garbage collect deleted blocks

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11922:
---

 Summary: Ozone: KSM: Garbage collect deleted blocks
 Key: HDFS-11922
 URL: https://issues.apache.org/jira/browse/HDFS-11922
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer
Priority: Critical


We need to garbage collect deleted blocks from the datanodes. There are two 
cases where we will have orphaned blocks. One is like classical HDFS, where 
someone deletes a key and we need to delete the corresponding blocks.

Another case is when someone overwrites a key -- an overwrite can be treated 
as a delete and a new put -- which means that the older blocks need to be 
GC-ed at some point in time.

A couple of JIRAs have discussed this in one form or another, so this JIRA 
consolidates those discussions:

HDFS-11796 -- needs this issue fixed for some tests to pass.
HDFS-11780 -- changed the old overwriting behavior to not support this 
feature for the time being.
HDFS-11920 -- once again runs into this issue when a user tries to put an 
existing key.
HDFS-11781 -- the delete key API in KSM only deletes the metadata and relies 
on GC for the datanodes.

When we solve this issue, we should also consider two more aspects.

One, we support versioning in buckets; tracking which blocks are really 
orphaned is something that KSM will do. So delete and overwrite will at some 
point need to decide how to handle bucket versioning.

Two, if a key exists in a closed container, it is immutable; hence the 
strategy for removing the key might be more complex than just talking to an 
open container.
cc : [~xyao], [~cheersyang], [~vagarychen], [~msingh], [~yuanbo], [~szetszwo], 
[~nandakumar131]
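
Purely as an illustration of the first case, the KSM-side delete path could 
take roughly this shape ({{metadataStore}}, {{gcQueue}}, and {{KeyData}} are 
hypothetical names, not the actual code):

{code}
// Illustrative sketch: delete the key metadata now, reclaim blocks later.
public void deleteKey(String volume, String bucket, String key)
    throws IOException {
  KeyData data = metadataStore.get(volume, bucket, key); // look up block list
  metadataStore.delete(volume, bucket, key);             // metadata goes away now
  // A background service later sends delete-block commands to the datanodes
  // holding each container; an overwrite would enqueue the old blocks the
  // same way.
  gcQueue.addAll(data.getBlockList());
}
{code}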

 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-06-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11472:
---
Attachment: HDFS-11472.003.patch

Rev 003. I am really sorry about that. I thought I had removed that part. 
Attached a new patch to address the comment, for real.

> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> URL: https://issues.apache.org/jira/browse/HDFS-11472
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11472.001.patch, HDFS-11472.002.patch, 
> HDFS-11472.003.patch, HDFS-11472.testcase.patch
>
>
> We observed a case where a replica's on disk length is less than acknowledged 
> length, breaking the assumption in recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from 
> datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes() = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()   = /data/6/hdfs/datanode/current
>   getBlockFile()= 
> /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within 
> {{BlockReceiver#receivePacket}}, the in-memory replica on disk length may not 
> be updated, but the data is written to disk anyway.
> For example, here's one exception we observed
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are potentially other places and causes where an exception is thrown 
> within {{BlockReceiver#receivePacket}}, so it may not make much sense to 
> alleviate it for this particular exception. Instead, we should improve 
> replica recovery code to handle the case where ondisk size is less than 
> acknowledged size, and update in-memory checksum accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11921) Ozone: KSM: Unable to put keys with zero length

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11921:
---

 Summary: Ozone: KSM: Unable to put keys with zero length
 Key: HDFS-11921
 URL: https://issues.apache.org/jira/browse/HDFS-11921
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Priority: Minor


As part of working on HDFS-11909, I was trying to put zero-length keys. I found 
that put key refuses to do that. Here is the call trace:

bq. at ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock 

Here we check that the block size is greater than 0, which makes sense since we 
should not call into SCM to allocate a block of zero size.

However, these two calls are invoked when creating the key so that the metadata 
for the key can be created; we should probably take care of this behavior there:
bq. ksm.KeyManagerImpl.allocateKey
bq. ksm.KeySpaceManager.allocateKey(KeySpaceManager.java:428)

Another way to fix this might be to always allocate a block of at least 1 byte, 
which might be easier than special-casing the code.

[~vagarychen] Would you like to fix this in the next patch you are working on?
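
The "at least 1 byte" workaround would be a one-line clamp before the SCM call, 
something like this (hypothetical names, just to make the idea concrete):

{code}
// Sketch: never ask SCM for a zero-size block.
long requestedSize = Math.max(1, args.getDataSize());
AllocatedBlock block = scmBlockClient.allocateBlock(requestedSize);
{code}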





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035204#comment-16035204
 ] 

Chen Liang edited comment on HDFS-11920 at 6/2/17 6:34 PM:
---

Post initial v001 patch. [~anu], [~xyao] would you mind taking a look when you 
get a chance? thanks!


was (Author: vagarychen):
Post initial v001 patch. [~anu] [~xyao] would you mind taking a look when you 
get a chance? thanks!

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch
>
>
> Currently, each key corresponds to a single SCM block, and putKey/getKey 
> writes/reads to this single block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g., it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035204#comment-16035204
 ] 

Chen Liang edited comment on HDFS-11920 at 6/2/17 6:33 PM:
---

Post initial v001 patch. [~anu] [~xyao] would you mind taking a look when you 
get a chance? thanks!


was (Author: vagarychen):
Post initial v001 patch

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch
>
>
> Currently, each key corresponds to a single SCM block, and putKey/getKey 
> writes/reads to this single block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g., it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Attachment: HDFS-11920-HDFS-7240.001.patch

Post initial v001 patch

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch
>
>
> Currently, each key corresponds to a single SCM block, and putKey/getKey 
> writes/reads to this single block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g., it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Status: Patch Available  (was: Open)

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads to this single SCM block. This works fine for keys with 
> reasonably small data size. However if the data is too huge, (e.g. not even 
> fits into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in one container. This JIRA changes the 
> key-related classes to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11920:
-

 Summary: Ozone : add key partition
 Key: HDFS-11920
 URL: https://issues.apache.org/jira/browse/HDFS-11920
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Currently, each key corresponds to a single SCM block, and putKey/getKey 
writes/reads to this single block. This works fine for keys with reasonably 
small data sizes. However, if the data is too large (e.g., it does not even fit 
into a single container), then we need to be able to partition the key data 
into multiple blocks, each in its own container. This JIRA changes the 
key-related classes to support this.
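
As a rough illustration of the partitioning (a sketch under assumed names; 
{{scmBlockClient}} and {{maxBlockSize}} are not from the patch):

{code}
// Sketch: split the key data across multiple SCM blocks, one per container.
long remaining = keySize;
List<AllocatedBlock> blocks = new ArrayList<>();
while (remaining > 0) {
  long chunk = Math.min(remaining, maxBlockSize);
  blocks.add(scmBlockClient.allocateBlock(chunk));
  remaining -= chunk;
}
// putKey then streams writes across the blocks in order;
// getKey reads them back in the same order and concatenates the data.
{code}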



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11737) Backport HDFS-7964 to branch-2.7: add support for async edit logging

2017-06-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-11737:
-
Attachment: HDFS-11737-branch-2.7.00.patch

> Backport HDFS-7964 to branch-2.7: add support for async edit logging
> 
>
> Key: HDFS-11737
> URL: https://issues.apache.org/jira/browse/HDFS-11737
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Attachments: HDFS-11737-branch-2.7.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11898) DFSClient#isHedgedReadsEnabled() should be per client flag

2017-06-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035200#comment-16035200
 ] 

John Zhuge commented on HDFS-11898:
---

+1 LGTM

Ok to continue the static pool discussion in HDFS-11900.

> DFSClient#isHedgedReadsEnabled() should be per client flag 
> ---
>
> Key: HDFS-11898
> URL: https://issues.apache.org/jira/browse/HDFS-11898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11898-01.patch, HDFS-11898-02.patch
>
>
> DFSClient#isHedgedReadsEnabled() returns a value based on the static 
> {{HEDGED_READ_THREAD_POOL}}. 
> Hence, if any client in the JVM initializes this pool, all other clients' 
> reads will go through hedged reads as well.
> This flag should be a per-client value.
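
In other words, the flag becomes instance state; a minimal sketch of the 
intended shape (not the literal patch):

{code}
// Sketch: the thread pool may stay static and shared, but whether hedged
// reads are enabled becomes per-DFSClient state.
private boolean hedgedReadsEnabled; // instance field, not derived from the pool

public boolean isHedgedReadsEnabled() {
  return hedgedReadsEnabled;
}
{code}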



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11737) Backport HDFS-7964 to branch-2.7: add support for async edit logging

2017-06-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-11737:
-
Status: Patch Available  (was: Open)

> Backport HDFS-7964 to branch-2.7: add support for async edit logging
> 
>
> Key: HDFS-11737
> URL: https://issues.apache.org/jira/browse/HDFS-11737
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11919) Ozone: SCM: TestNodeManager takes too long to execute

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11919:
---

 Summary: Ozone: SCM: TestNodeManager takes too long to execute
 Key: HDFS-11919
 URL: https://issues.apache.org/jira/browse/HDFS-11919
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Priority: Trivial


On my laptop it takes 97.645 seconds to execute this test. We should explore if 
we can make this test run faster. 




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11779) Ozone: KSM: add listBuckets

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035176#comment-16035176
 ] 

Hadoop QA commented on HDFS-11779:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11779 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870994/HDFS-11779-HDFS-7240.012.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Updated] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-11914:
-
Attachment: HDFS-11914.003.patch

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch, 
> HDFS-11914.003.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely, without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add more information to the logs to help debug the 
> situation: specifically, report the stack trace when the connection is 
> closed, how much data had been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035128#comment-16035128
 ] 

Yongjun Zhang commented on HDFS-11914:
--

Thanks [~jojochuang], good comments.

I'm uploading a patch that addresses the first two comments. About why I use 
String.valueOf(): I saw code in the same method that checks whether 
fsImageName is null, which means there is a chance fsImageName is null. I 
just want to be safe here.
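
For illustration, a minimal, self-contained sketch of that null-safety point; 
the variable name and log text here are hypothetical, not taken from the patch:

{code}
public class ValueOfDemo {
  public static void main(String[] args) {
    String fsImageName = null; // may legitimately be null per the existing null check
    // String.valueOf is null-safe: it renders "null" rather than risking an
    // NPE if later code were to call toString() on the argument directly.
    System.out.println("Sending fileName: " + String.valueOf(fsImageName));
  }
}
{code}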


> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely, without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add more information to the logs to help debug the 
> situation: specifically, report the stack trace when the connection is 
> closed, how much data had been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035112#comment-16035112
 ] 

Chen Liang edited comment on HDFS-11907 at 6/2/17 5:51 PM:
---

Thanks [~andrew.wang] for the reply!

df does seem to be a fairly cheap operation in general, but we've seen cases 
where we suspect this call was slow under certain conditions, which we are 
still analyzing. About changing the monitorHealth check interval: since we 
still want the ZKFC process to contact the NameNode frequently enough to 
detect process failures ASAP, we probably don't want to lower the frequency 
on the caller's side.


was (Author: vagarychen):
Thanks [~andrew.wang] for the reply!

df does seem to be a fairly cheap operation in general, but we've seen cases 
where we suspect this call was slow under certain conditions, which we are 
still analyzing. About changing the monitorHealth check interval: since we 
still want the ZKFC process to contact the NameNode frequently enough to 
detect failures ASAP, we probably don't want to lower the frequency on the 
caller's side.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}}, at a frequency of once per 
> second by default. And {{NameNodeResourceChecker#isResourceAvailable}} 
> invokes {{df.getAvailable()}} every time it is called.
> Since available space information should rarely change dramatically at a 
> per-second pace, a cached value should be sufficient: only fetch an updated 
> value when the cached value is too old, otherwise simply return the cached 
> value. This way df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.
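
As an aside, a minimal sketch of the caching idea described above might look 
like the following. The class shape, interval handling, and the 
getUsableSpace stand-in for df.getAvailable() are all assumptions for 
illustration, not the actual patch:

{code}
// Illustrative sketch only, not the HDFS-11907 patch itself.
class CachingResourceChecker {
  private final long refreshIntervalMs;
  private volatile long cachedAvailable;
  private volatile long lastCheckedMs;

  CachingResourceChecker(long refreshIntervalMs) {
    this.refreshIntervalMs = refreshIntervalMs;
  }

  long getAvailable() {
    long now = System.currentTimeMillis();
    // Only hit the (potentially slow) df call when the cached value is stale.
    if (now - lastCheckedMs > refreshIntervalMs) {
      cachedAvailable = queryDf();
      lastCheckedMs = now;
    }
    return cachedAvailable;
  }

  // Stand-in for df.getAvailable() on a NameNode storage directory.
  private long queryDf() {
    return new java.io.File("/").getUsableSpace();
  }
}
{code}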



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035112#comment-16035112
 ] 

Chen Liang edited comment on HDFS-11907 at 6/2/17 5:51 PM:
---

Thanks [~andrew.wang] for the reply!

df does seem to be a fairly cheap operation in general, but we've seen cases 
where we suspect this call was slow under certain conditions, which we are 
still analyzing. About changing the monitorHealth check interval: since we 
still want the ZKFC process to contact the NameNode frequently enough to 
detect a process crash ASAP, we probably don't want to lower the frequency 
on the caller's side.


was (Author: vagarychen):
Thanks [~andrew.wang] for the reply!

df does seem to be a fairly cheap operation in general, but we've seen cases 
where we suspect this call was slow under certain conditions, which we are 
still analyzing. About changing the monitorHealth check interval: since we 
still want the ZKFC process to contact the NameNode frequently enough to 
detect process failures ASAP, we probably don't want to lower the frequency 
on the caller's side.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}}, at a frequency of once per 
> second by default. And {{NameNodeResourceChecker#isResourceAvailable}} 
> invokes {{df.getAvailable()}} every time it is called.
> Since available space information should rarely change dramatically at a 
> per-second pace, a cached value should be sufficient: only fetch an updated 
> value when the cached value is too old, otherwise simply return the cached 
> value. This way df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035123#comment-16035123
 ] 

Anu Engineer commented on HDFS-11913:
-

Converted this to a sub-task of Ozone; resolving now.

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduces an UT failure, 
> {{TestKeySpaceManager#testDeleteVolume}}, error as below
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> this is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}}; 
> there are 2 issues that need to be fixed
> # Iterating to the next element throws this exception if there is no next 
> element. This always fails when a volume is empty.
> # The code was checking whether the first bucket name starts with 
> "/volume_name"; this returns a wrong value if there are several empty 
> volumes with the same prefix, e.g. "/volA/", "/volAA/". In such a case 
> {{isVolumeEmpty}} will return false, as the next element after "/volA/" is 
> not a bucket, it's another volume "/volAA/" that matches the prefix.
> For now an empty volume with the name "/volA/" is probably not valid, but 
> making sure our bucket key starts with "/volA/" instead of just "/volA" is 
> a good idea to keep us away from weird problems.
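
A rough sketch of the two fixes, using a plain java.util.Iterator in place of 
the LevelDB JNI iterator; the method shape and names are illustrative, not 
the actual patch:

{code}
import java.util.Iterator;

class VolumeEmptyCheckSketch {
  // Illustrative sketch of the two fixes described above. The iterator is
  // assumed to be positioned just past the volume's own key.
  static boolean isVolumeEmpty(Iterator<String> keysAfterVolume,
      String volumeName) {
    // Fix 1: guard the iterator so an empty volume no longer throws
    // NoSuchElementException.
    if (!keysAfterVolume.hasNext()) {
      return true;
    }
    // Fix 2: match on the prefix with a trailing slash so that another
    // volume such as "/volAA/" is not mistaken for a bucket of "/volA".
    String bucketPrefix = "/" + volumeName + "/";
    return !keysAfterVolume.next().startsWith(bucketPrefix);
  }
}
{code}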



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11913:

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-7240

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduces an UT failure, 
> {{TestKeySpaceManager#testDeleteVolume}}, error as below
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> this is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}}; 
> there are 2 issues that need to be fixed
> # Iterating to the next element throws this exception if there is no next 
> element. This always fails when a volume is empty.
> # The code was checking whether the first bucket name starts with 
> "/volume_name"; this returns a wrong value if there are several empty 
> volumes with the same prefix, e.g. "/volA/", "/volAA/". In such a case 
> {{isVolumeEmpty}} will return false, as the next element after "/volA/" is 
> not a bucket, it's another volume "/volAA/" that matches the prefix.
> For now an empty volume with the name "/volA/" is probably not valid, but 
> making sure our bucket key starts with "/volA/" instead of just "/volA" is 
> a good idea to keep us away from weird problems.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-11913.
-
Resolution: Fixed

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduces an UT failure, 
> {{TestKeySpaceManager#testDeleteVolume}}, error as below
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> this is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}}; 
> there are 2 issues that need to be fixed
> # Iterating to the next element throws this exception if there is no next 
> element. This always fails when a volume is empty.
> # The code was checking whether the first bucket name starts with 
> "/volume_name"; this returns a wrong value if there are several empty 
> volumes with the same prefix, e.g. "/volA/", "/volAA/". In such a case 
> {{isVolumeEmpty}} will return false, as the next element after "/volA/" is 
> not a bucket, it's another volume "/volAA/" that matches the prefix.
> For now an empty volume with the name "/volA/" is probably not valid, but 
> making sure our bucket key starts with "/volA/" instead of just "/volA" is 
> a good idea to keep us away from weird problems.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDFS-11913:
-

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduces an UT failure, 
> {{TestKeySpaceManager#testDeleteVolume}}, error as below
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> this is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}}; 
> there are 2 issues that need to be fixed
> # Iterating to the next element throws this exception if there is no next 
> element. This always fails when a volume is empty.
> # The code was checking whether the first bucket name starts with 
> "/volume_name"; this returns a wrong value if there are several empty 
> volumes with the same prefix, e.g. "/volA/", "/volAA/". In such a case 
> {{isVolumeEmpty}} will return false, as the next element after "/volA/" is 
> not a bucket, it's another volume "/volAA/" that matches the prefix.
> For now an empty volume with the name "/volA/" is probably not valid, but 
> making sure our bucket key starts with "/volA/" instead of just "/volA" is 
> a good idea to keep us away from weird problems.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11907) NameNodeResourceChecker should avoid calling df.getAvailable too frequently

2017-06-02 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035112#comment-16035112
 ] 

Chen Liang commented on HDFS-11907:
---

Thanks [~andrew.wang] for the reply!

df does seem to be a fairly cheap operation in general, but we've seen cases 
where we suspect this call was slow under certain conditions, which we are 
still analyzing. About changing the monitorHealth check interval: since we 
still want the ZKFC process to contact the NameNode frequently enough to 
detect failures ASAP, we probably don't want to lower the frequency on the 
caller's side.

> NameNodeResourceChecker should avoid calling df.getAvailable too frequently
> ---
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch
>
>
> Currently, {{HealthMonitor#doHealthChecks}} invokes 
> {{NameNode#monitorHealth}}, which ends up invoking 
> {{NameNodeResourceChecker#isResourceAvailable}}, at a frequency of once per 
> second by default. And {{NameNodeResourceChecker#isResourceAvailable}} 
> invokes {{df.getAvailable()}} every time it is called.
> Since available space information should rarely change dramatically at a 
> per-second pace, a cached value should be sufficient: only fetch an updated 
> value when the cached value is too old, otherwise simply return the cached 
> value. This way df.getAvailable() gets invoked less often.
> Thanks [~arpitagarwal] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11737) ibdcgditutibrcn

2017-06-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-11737:
-
Summary: ibdcgditutibrcn  (was: Backport HDFS-7964 to branch-2.7: add 
support for async edit logging)

> ibdcgditutibrcn
> ---
>
> Key: HDFS-11737
> URL: https://issues.apache.org/jira/browse/HDFS-11737
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11737) Backport HDFS-7964 to branch-2.7: add support for async edit logging

2017-06-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-11737:
-
Summary: Backport HDFS-7964 to branch-2.7: add support for async edit 
logging  (was: ibdcgditutibrcn)

> Backport HDFS-7964 to branch-2.7: add support for async edit logging
> 
>
> Key: HDFS-11737
> URL: https://issues.apache.org/jira/browse/HDFS-11737
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035093#comment-16035093
 ] 

Wei-Chiu Chuang commented on HDFS-11914:


Hi [~yzhangal]
I reviewed the patch and it's mostly good. I suggest improving the 
readability of the log a little bit, in {{copyFileToStream}}:
"Connection closed by client. Sent total=": add " bytes." at the end.
" Size of last segment possibly sent=" can be rephrased as " Size of last 
segment intended to send="

Could you also explain why you use String.valueOf() to print fsImageName, 
which is already a String?

Thanks!
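
Put together, the suggested wording might look roughly like the sketch below; 
the class, method, and variable names (TransferLogDemo, logClientClose, 
totalSent, lastSegmentSize) are assumptions for illustration, not the actual 
patch:

{code}
import java.util.logging.Logger;

public class TransferLogDemo {
  private static final Logger LOG = Logger.getLogger("TransferFsImage");

  // Hypothetical helper showing only the suggested log wording.
  static void logClientClose(long totalSent, long lastSegmentSize) {
    LOG.info("Connection closed by client. Sent total=" + totalSent
        + " bytes. Size of last segment intended to send=" + lastSegmentSize);
  }

  public static void main(String[] args) {
    logClientClose(27006L, 512L);
  }
}
{code}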

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely, without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add more information to the logs to help debug the 
> situation: specifically, report the stack trace when the connection is 
> closed, how much data had been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035091#comment-16035091
 ] 

Hadoop QA commented on HDFS-11914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m 
12s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870995/HDFS-11914.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1d0eaea805cc 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 056cc72 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19751/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19751/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client 

[jira] [Commented] (HDFS-11905) Fix license header inconsistency in hdfs

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035081#comment-16035081
 ] 

Hadoop QA commented on HDFS-11905:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Updated] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11887:
-
Description: 
XceiverClientManager doesn't close client on eviction which can leak resources.

{code}
public XceiverClientManager(Configuration conf) {
.
.
.
public void onRemoval(
RemovalNotification
  removalNotification) {
  // If the reference count is not 0, this xceiver client should not
  // be evicted, add it back to the cache.
  WithAccessInfo info = removalNotification.getValue();
  if (info.hasRefence()) {
synchronized (XceiverClientManager.this.openClient) {
  XceiverClientManager.this
  .openClient.put(removalNotification.getKey(), info);
}
  }
{code}

Also a stack overflow can be triggered because of putting the element back in 
the cache on eviction.
{code}
synchronized (XceiverClientManager.this.openClient) {
  XceiverClientManager.this
  .openClient.put(removalNotification.getKey(), info);
}
{code}

This bug will try to fix both of these cases.

  was:
XceiverClientManager doesn't close client on eviction which can leak resources.

{code}
public XceiverClientManager(Configuration conf) {
.
.
.
public void onRemoval(
RemovalNotification
  removalNotification) {
  // If the reference count is not 0, this xceiver client should not
  // be evicted, add it back to the cache.
  WithAccessInfo info = removalNotification.getValue();
  if (info.hasRefence()) {
synchronized (XceiverClientManager.this.openClient) {
  XceiverClientManager.this
  .openClient.put(removalNotification.getKey(), info);
}
  }
{code}


> XceiverClientManager should close XceiverClient on eviction from cache
> --
>
> Key: HDFS-11887
> URL: https://issues.apache.org/jira/browse/HDFS-11887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11887-HDFS-7240.001.patch, 
> HDFS-11887-HDFS-7240.002.patch
>
>
> XceiverClientManager doesn't close client on eviction which can leak 
> resources.
> {code}
> public XceiverClientManager(Configuration conf) {
> .
> .
> .
> public void onRemoval(
> RemovalNotification
>   removalNotification) {
>   // If the reference count is not 0, this xceiver client should 
> not
>   // be evicted, add it back to the cache.
>   WithAccessInfo info = removalNotification.getValue();
>   if (info.hasRefence()) {
> synchronized (XceiverClientManager.this.openClient) {
>   XceiverClientManager.this
>   .openClient.put(removalNotification.getKey(), info);
> }
>   }
> {code}
> Also a stack overflow can be triggered because of putting the element back in 
> the cache on eviction.
> {code}
> synchronized (XceiverClientManager.this.openClient) {
>   XceiverClientManager.this
>   .openClient.put(removalNotification.getKey(), info);
> }
> {code}
> This bug will try to fix both of these cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-06-02 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035058#comment-16035058
 ] 

Erik Krogen commented on HDFS-11472:


[~jojochuang] in the v002 patch I see your change to check for RBW, but I 
don't see any changes related to my comment; did you forget to include them?

> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> URL: https://issues.apache.org/jira/browse/HDFS-11472
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11472.001.patch, HDFS-11472.002.patch, 
> HDFS-11472.testcase.patch
>
>
> We observed a case where a replica's on-disk length is less than its 
> acknowledged length, breaking an assumption in the recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from 
> datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes() = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()   = /data/6/hdfs/datanode/current
>   getBlockFile()= 
> /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within 
> {{BlockReceiver#receivePacket}}, the in-memory replica's on-disk length may 
> not be updated, but the data is written to disk anyway.
> For example, here's one exception we observed
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are potentially other places and causes where an exception is thrown 
> within {{BlockReceiver#receivePacket}}, so it may not make much sense to 
> alleviate it for this particular exception. Instead, we should improve the 
> replica recovery code to handle the case where the on-disk size is less 
> than the acknowledged size, and update the in-memory checksum accordingly.
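
A rough, self-contained sketch of that recovery-side handling; all names here 
are illustrative, and the real fix would live in the replica recovery path of 
FsDatasetImpl rather than a standalone helper:

{code}
// Illustrative sketch only: instead of throwing when
// bytesOnDisk < bytesAcked, recover up to what is actually on disk.
class ReplicaRecoverySketch {
  static long recoverableLength(long bytesOnDisk, long bytesAcked) {
    if (bytesOnDisk < bytesAcked) {
      // Truncate the recovered length to the data that really exists; the
      // in-memory checksum of the last partial chunk would then need to be
      // recomputed from the block file.
      return bytesOnDisk;
    }
    return bytesAcked;
  }

  public static void main(String[] args) {
    // Values taken from the log snippet above.
    System.out.println(recoverableLength(27006L, 27268L)); // prints 27006
  }
}
{code}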



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035049#comment-16035049
 ] 

Hadoop QA commented on HDFS-11887:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | org.apache.hadoop.ozone.scm.node.TestNodeManager |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11887 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870985/HDFS-11887-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035039#comment-16035039
 ] 

Mukul Kumar Singh commented on HDFS-11887:
--

Hi [~cheersyang], thanks for the review.
My apologies that I didn't update the context on this bug. There are two 
problems in this bug, which I found out while solving the issue mentioned in 
the bug.

Issues fixed in this patch
===
1) There is a StackOverflowError that happens when the client is re-enqueued 
while there is still a reference to it.
This happens because there is not enough space in the cache, and adding the 
*removed* element back triggers the evict operation again.
This causes the stack overflow error. Hence re-adding an element to the 
cache on eviction is not right, as it might trigger a stack overflow.

For an example please look at the following test results for TestBufferManager
https://builds.apache.org/job/PreCommit-HDFS-Build/19742/testReport/org.apache.hadoop.cblock/TestBufferManager/testRepeatedBlockWrites/

2) The second issue is the one mentioned in the title of this bug: closing 
the client when there are no references to it.

Details about the patch
==
With the updated patch on the bug, the idea is to close the client session 
only when
1) it has been evicted from the cache, and
2) there are no references to the client.

This ensures that the client is not closed while there are pending 
operations on it (because of the reference count), and not until it has been 
evicted from the cache (the element has been deemed no longer usable, the 
"evict cache flag").

Having the combination of both these checks ensures that
a) no clients are leaked, and
b) no active reference from the cache will operate on a closed client; a 
minimal sketch of this idea follows below.
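
To make the close-when-both-conditions-hold idea concrete, here is a minimal 
sketch assuming a simple reference-counted wrapper. The class and method 
names are illustrative, not the patch itself, and real code would need 
synchronization to avoid the race between release and eviction:

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of "close only when evicted AND refcount == 0".
class RefCountedClient {
  private final AtomicInteger refCount = new AtomicInteger();
  private volatile boolean evicted = false;

  void acquire() {
    refCount.incrementAndGet();
  }

  void release() {
    // The last release after eviction actually closes the connection.
    if (refCount.decrementAndGet() == 0 && evicted) {
      close();
    }
  }

  void markEvicted() {
    evicted = true;
    // If nobody holds a reference at eviction time, close immediately.
    if (refCount.get() == 0) {
      close();
    }
  }

  private void close() {
    // Stand-in for tearing down the underlying connection.
  }
}
{code}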

Comments
==
1) As explained earlier, if there are more references to the client, it will 
be closed when all the references drop to 0.
2) I agree with the point; I too wanted to keep the interface the same as 
before, but releaseClient now needs to work on RefCountedXceiverClient, 
hence the change.
3) The client still works because there are still active references to it, 
hence it hasn't been closed. However, after releasing the client on line#134 
the client is closed, and the call then fails.

In summary, this patch tries to ensure that
1) clients are not leaked,
2) there are no errors in the code (stack overflow), and
3) clients are kept in the cache so that they are not recreated and are 
reused effectively.

Please let me know if any more details are needed. I should have added them 
while updating the bug.





> XceiverClientManager should close XceiverClient on eviction from cache
> --
>
> Key: HDFS-11887
> URL: https://issues.apache.org/jira/browse/HDFS-11887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11887-HDFS-7240.001.patch, 
> HDFS-11887-HDFS-7240.002.patch
>
>
> XceiverClientManager doesn't close client on eviction which can leak 
> resources.
> {code}
> public XceiverClientManager(Configuration conf) {
> .
> .
> .
> public void onRemoval(
> RemovalNotification
>   removalNotification) {
>   // If the reference count is not 0, this xceiver client should 
> not
>   // be evicted, add it back to the cache.
>   WithAccessInfo info = removalNotification.getValue();
>   if (info.hasRefence()) {
> synchronized (XceiverClientManager.this.openClient) {
>   XceiverClientManager.this
>   .openClient.put(removalNotification.getKey(), info);
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11743) Revert HDFS-7933 from branch-2.7 (fsck reporting decommissioning replicas)

2017-06-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-11743:
-
Labels: release-blocker  (was: )

> Revert HDFS-7933 from branch-2.7 (fsck reporting decommissioning replicas)
> --
>
> Key: HDFS-11743
> URL: https://issues.apache.org/jira/browse/HDFS-11743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
>  Labels: release-blocker
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7295) Support arbitrary max expiration times for delegation token

2017-06-02 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot reassigned HDFS-7295:
---

Assignee: (was: Anubhav Dhoot)

> Support arbitrary max expiration times for delegation token
> ---
>
> Key: HDFS-7295
> URL: https://issues.apache.org/jira/browse/HDFS-7295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>
> Currently the max lifetime of HDFS delegation tokens is hardcoded to 7 days. 
> This is a problem for different users of HDFS, such as long-running YARN 
> apps. Users should be allowed to optionally specify a max lifetime for their 
> tokens.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035000#comment-16035000
 ] 

Weiwei Yang commented on HDFS-11887:


Hi [~msingh]

I thought the purpose of this jira was to fix the xceiverClient leak, but it 
seems to have deviated a bit now. The original code seems easy to fix in 
{{onRemoval}}:

{code}
if (info.hasRefence()) {
  // still has reference, do not remove from cache
} else {
  // close the client and evict from cache   <-- missing this?
}
{code}

The code you modified seems to work much the same as before, but it adds a lot 
of changes. Hmm, do we really need to do that? What else has been improved or 
fixed other than the resource leak problem? Please elaborate.

And some comments on the code:

# {{XceiverClientManager.java}} line 91: {{removalNotification.getValue()}} 
returns the instance that is going to be removed, let's say clientX. If there 
are still references on clientX, clientX will not be closed, but it will be 
removed from the cache. Isn't that still a leak?
# {{RefCountedXceiverClient}} is supposed to be an internal notion of 
{{XceiverClientManager}}; I am not sure why we need to expose it to clients. 
I prefer not to change the return value of {{acquireClient}}, to keep the API 
friendly.
# {{TestXceiverClientManager#testFreeByReference}}: at line 126 the 
XceiverClient for containerName1 is evicted, yet line 130 verifies that client1 
still works. This seems to go against the idea; is it supposed to keep client1 
in the cache instead of evicting it in this case (it still has a reference)?

To recap the problem, what we want here is:
# XceiverClientManager manages a bunch of XceiverClients (one per container) in 
a cache; each XceiverClient can be reused by multiple clients that want to 
access the same container.
# XceiverClientManager needs to make sure an XceiverClient is not removed from 
the cache as long as some client is still using it (to avoid the heavy 
operation of recreating a connection).
# If an XceiverClient is removed from the cache, guarantee that it is closed, 
to avoid a resource leak.

I feel we might not be on the same page on this; please share your thoughts (a 
rough sketch of the eviction handling follows). Let me know if I missed 
anything. Thank you.
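
A minimal sketch of the eviction handling described in the recap, written as a 
Guava RemovalListener. {{WithAccessInfo}}, {{hasRefence()}} and {{openClient}} 
come from the snippet quoted below; the {{close()}} call on {{WithAccessInfo}} 
is an assumed helper that closes the wrapped XceiverClient:

{code}
// Sketch only, not the actual patch. Requires
// com.google.common.cache.RemovalListener and RemovalNotification.
RemovalListener<String, WithAccessInfo> listener =
    new RemovalListener<String, WithAccessInfo>() {
      @Override
      public void onRemoval(
          RemovalNotification<String, WithAccessInfo> notification) {
        WithAccessInfo info = notification.getValue();
        if (info.hasRefence()) {
          // Still in use: keep the entry cached rather than dropping it.
          synchronized (openClient) {
            openClient.put(notification.getKey(), info);
          }
        } else {
          // No more users: close the client to avoid leaking the connection.
          info.close();
        }
      }
    };
{code}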

> XceiverClientManager should close XceiverClient on eviction from cache
> --
>
> Key: HDFS-11887
> URL: https://issues.apache.org/jira/browse/HDFS-11887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11887-HDFS-7240.001.patch, 
> HDFS-11887-HDFS-7240.002.patch
>
>
> XceiverClientManager doesn't close client on eviction which can leak 
> resources.
> {code}
> public XceiverClientManager(Configuration conf) {
> .
> .
> .
> public void onRemoval(
>     RemovalNotification<String, WithAccessInfo> removalNotification) {
>   // If the reference count is not 0, this xceiver client should not
>   // be evicted, add it back to the cache.
>   WithAccessInfo info = removalNotification.getValue();
>   if (info.hasRefence()) {
>     synchronized (XceiverClientManager.this.openClient) {
>       XceiverClientManager.this
>           .openClient.put(removalNotification.getKey(), info);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-11914:
-
Issue Type: Improvement  (was: Bug)

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add some more information to the logs to help debug the 
> situation. Specifically, report the stack trace when the connection is 
> closed, how much data has been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-11914:
-
Labels: supportability  (was: )

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add some more information to the logs to help debug the 
> situation. Specifically, report the stack trace when the connection is 
> closed, how much data has been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034880#comment-16034880
 ] 

Yongjun Zhang commented on HDFS-11914:
--

Uploaded rev 002 to include some more info.

The failed tests are not related; running them locally succeeded.


> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add some more information to the logs to help debug the 
> situation. Specifically, report the stack trace when the connection is 
> closed, how much data has been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-11914:
-
Attachment: HDFS-11914.002.patch

> Add more diagnosis info for fsimage transfer failure.
> -
>
> Key: HDFS-11914
> URL: https://issues.apache.org/jira/browse/HDFS-11914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-11914.001.patch, HDFS-11914.002.patch
>
>
> Hit a fsimage download problem:
> Client tries to download fsimage, and got:
>  WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
> File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
> xyz is not of the advertised size abc.
> Basically the client does not get enough fsimage data and finishes 
> prematurely without any exception thrown, until it finds that the size of 
> the data received is smaller than expected. The client then closes the 
> connection to the NN, which causes the NN to report
> INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection 
> closed by client
> This jira is to add some more information to the logs to help debug the 
> situation. Specifically, report the stack trace when the connection is 
> closed, how much data has been sent at that point, etc.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11779) Ozone: KSM: add listBuckets

2017-06-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11779:
---
Attachment: HDFS-11779-HDFS-7240.012.patch

Rebase again...

> Ozone: KSM: add listBuckets
> ---
>
> Key: HDFS-11779
> URL: https://issues.apache.org/jira/browse/HDFS-11779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11779-HDFS-7240.001.patch, 
> HDFS-11779-HDFS-7240.002.patch, HDFS-11779-HDFS-7240.003.patch, 
> HDFS-11779-HDFS-7240.004.patch, HDFS-11779-HDFS-7240.005.patch, 
> HDFS-11779-HDFS-7240.006.patch, HDFS-11779-HDFS-7240.007.patch, 
> HDFS-11779-HDFS-7240.008.patch, HDFS-11779-HDFS-7240.009.patch, 
> HDFS-11779-HDFS-7240.010.patch, HDFS-11779-HDFS-7240.011.patch, 
> HDFS-11779-HDFS-7240.012.patch
>
>
> Lists buckets of a given volume. Similar to listVolumes, paging supported via 
> prevKey, prefix and maxKeys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11781) Ozone: KSM: Add deleteKey

2017-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034850#comment-16034850
 ] 

Weiwei Yang edited comment on HDFS-11781 at 6/2/17 3:17 PM:


Hi [~yuanbo]

Thanks for the update. But the modification you made to {{BlockManager}} seems 
wrong:

{code}
AllocatedBlock allocateBlock(long size, String blockKey) throws IOException;
{code}

According to the API spec defined in HDFS-11504, {{allocateBlock}} is supposed 
to generate a {{blockID}} while allocating a new block, and store it in the SCM 
DB. This key is not the {{objectKey}}. KSM maintains a {{KsmKeyInfo}} for each 
{{objectKey}} in its DB; it has a field to retrieve the {{blockID}} for a given 
{{objectKey}}, and you can use that to query SCM for block locations.

So 

{code}
Set<String> keys = new HashSet<>();
keys.add(objectKeyStr);
List resultList = scmBlockClient.deleteBlocks(keys);
{code}

can be modified to something like

{code}
KsmKeyInfo keyInfo = lookupKey(ksmKeyArgs);
String blockID = keyInfo.getBlockID();
List resultList =
    scmBlockClient.deleteBlocks(Collections.singleton(blockID));
{code}

Let me know if this makes sense. Thanks


was (Author: cheersyang):
Hi [~yuanbo]

Thanks for the update. But the modification you made to {{BlockManager}} seems 
wrong:

{code}
AllocatedBlock allocateBlock(long size, String blockKey) throws IOException;
{code}

According to the API spec defined in HDFS-11504, {{allocateBlock}} is supposed 
to generate a {{blockID}} while allocating a new block, and store it in the SCM 
DB. This key is not the {{objectKey}}. KSM maintains a {{KsmKeyInfo}} for each 
{{objectKey}} in its DB; it has a field to retrieve the {{blockID}} for a given 
{{objectKey}}, and you can use that to query SCM for block locations.

So 

{code}
Set<String> keys = new HashSet<>();
keys.add(objectKeyStr);
List resultList = scmBlockClient.deleteBlocks(keys);
{code}

can be modified to something like

{code}
KsmKeyInfo keyInfo = lookupKey(ksmKeyArgs);
String blockID = keyInfo.getBlockID();
List resultList =
    scmBlockClient.deleteBlocks(Collections.singleton(blockID));
{code}

Let me know if this makes sense. Thanks

> Ozone: KSM: Add deleteKey
> -
>
> Key: HDFS-11781
> URL: https://issues.apache.org/jira/browse/HDFS-11781
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Attachments: HDFS-11781-HDFS-7240.001.patch, 
> HDFS-11781-HDFS-7240.002.patch, HDFS-11781-HDFS-7240.003.patch
>
>
> Add support for removing a key.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11781) Ozone: KSM: Add deleteKey

2017-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034850#comment-16034850
 ] 

Weiwei Yang commented on HDFS-11781:


Hi [~yuanbo]

Thanks for the update. But the modification you made to {{BlockManager}} seems 
wrong:

{code}
AllocatedBlock allocateBlock(long size, String blockKey) throws IOException;
{code}

According to the API spec defined in HDFS-11504, {{allocateBlock}} is supposed 
to generate a {{blockID}} while allocating a new block, and store it in the SCM 
DB. This key is not the {{objectKey}}. KSM maintains a {{KsmKeyInfo}} for each 
{{objectKey}} in its DB; it has a field to retrieve the {{blockID}} for a given 
{{objectKey}}, and you can use that to query SCM for block locations.

So 

{code}
Set<String> keys = new HashSet<>();
keys.add(objectKeyStr);
List resultList = scmBlockClient.deleteBlocks(keys);
{code}

can be modified to something like

{code}
KsmKeyInfo keyInfo = lookupKey(ksmKeyArgs);
String blockID = keyInfo.getBlockID();
List resultList =
    scmBlockClient.deleteBlocks(Collections.singleton(blockID));
{code}

Let me know if this makes sense. Thanks

> Ozone: KSM: Add deleteKey
> -
>
> Key: HDFS-11781
> URL: https://issues.apache.org/jira/browse/HDFS-11781
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Attachments: HDFS-11781-HDFS-7240.001.patch, 
> HDFS-11781-HDFS-7240.002.patch, HDFS-11781-HDFS-7240.003.patch
>
>
> Add support for removing a key.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11905) Fix license header inconsistency in hdfs

2017-06-02 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034798#comment-16034798
 ] 

Yeliang Cang commented on HDFS-11905:
-

Thank you for kicking off the pre-commit branch-2 Jenkins build! I was waiting 
for it...
And thank you for the review, [~brahmareddy].

> Fix license header inconsistency in hdfs
> 
>
> Key: HDFS-11905
> URL: https://issues.apache.org/jira/browse/HDFS-11905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11905-001.patch, HDFS-11905-branch-2.001.patch
>
>
> I have written a shell script to find license errors in hadoop, mapreduce, 
> yarn and hdfs. An error still remains!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11905) Fix license header inconsistency in hdfs

2017-06-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11905:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{branch-2}} also. [~Cyl], thanks for your contribution.

> Fix license header inconsistency in hdfs
> 
>
> Key: HDFS-11905
> URL: https://issues.apache.org/jira/browse/HDFS-11905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11905-001.patch, HDFS-11905-branch-2.001.patch
>
>
> I have written a shell script to find license errors in hadoop, mapreduce, 
> yarn and hdfs. An error still remains!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11905) Fix license header inconsistency in hdfs

2017-06-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034773#comment-16034773
 ] 

Brahma Reddy Battula commented on HDFS-11905:
-

Following is the {{branch-2}} Jenkins pre-commit build; I will commit the 
branch-2 patch as well.



| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Timed out junit tests | 

[jira] [Commented] (HDFS-5042) Completed files lost after power failure

2017-06-02 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034757#comment-16034757
 ] 

Dave Latham commented on HDFS-5042:
---

+1, hear hear!

Thanks [~vinayrpet] for driving this in after 4 years, and [~andrew.wang] for 
the pointer to the possible solution.

> Completed files lost after power failure
> 
>
> Key: HDFS-5042
> URL: https://issues.apache.org/jira/browse/HDFS-5042
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>Reporter: Dave Latham
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, 
> HDFS-5042-03.patch, HDFS-5042-04.patch, HDFS-5042-05-branch-2.patch, 
> HDFS-5042-05.patch, HDFS-5042-branch-2-01.patch, HDFS-5042-branch-2-05.patch, 
> HDFS-5042-branch-2.7-05.patch, HDFS-5042-branch-2.7-06.patch, 
> HDFS-5042-branch-2.8-05.patch, HDFS-5042-branch-2.8-06.patch
>
>
> We suffered a cluster-wide power failure after which HDFS lost data that it 
> had acknowledged as closed and complete.
> The client was HBase which compacted a set of HFiles into a new HFile, then 
> after closing the file successfully, deleted the previous versions of the 
> file.  The cluster then lost power, and when brought back up the newly 
> created file was marked CORRUPT.
> Based on reading the logs it looks like the replicas were created by the 
> DataNodes in the 'blocksBeingWritten' directory.  Then when the file was 
> closed they were moved to the 'current' directory.  After the power cycle 
> those replicas were again in the blocksBeingWritten directory of the 
> underlying file system (ext3).  When those DataNodes reported in to the 
> NameNode it deleted those replicas and lost the file.
> Some possible fixes could be having the DataNode fsync the directory(s) after 
> moving the block from blocksBeingWritten to current to ensure the rename is 
> durable or having the NameNode accept replicas from blocksBeingWritten under 
> certain circumstances.
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: 
> Creating 
> file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
>  blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
> blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: 
> /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
> blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
> blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
> lease on  file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming compacted file at 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  to 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
> RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Completed major compaction of 7 file(s) in n of 
> users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into 
> 

[jira] [Commented] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034753#comment-16034753
 ] 

Mukul Kumar Singh commented on HDFS-11887:
--

Thanks for the review, [~vagarychen] and [~cheersyang].
I have incorporated your review comments and added the notion of a refcount in 
the code.

I have also added a test for eviction under various scenarios.
Please have a look and let me know your comments.

> XceiverClientManager should close XceiverClient on eviction from cache
> --
>
> Key: HDFS-11887
> URL: https://issues.apache.org/jira/browse/HDFS-11887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11887-HDFS-7240.001.patch, 
> HDFS-11887-HDFS-7240.002.patch
>
>
> XceiverClientManager doesn't close client on eviction which can leak 
> resources.
> {code}
> public XceiverClientManager(Configuration conf) {
> .
> .
> .
> public void onRemoval(
>     RemovalNotification<String, WithAccessInfo> removalNotification) {
>   // If the reference count is not 0, this xceiver client should not
>   // be evicted, add it back to the cache.
>   WithAccessInfo info = removalNotification.getValue();
>   if (info.hasRefence()) {
>     synchronized (XceiverClientManager.this.openClient) {
>       XceiverClientManager.this
>           .openClient.put(removalNotification.getKey(), info);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11887) XceiverClientManager should close XceiverClient on eviction from cache

2017-06-02 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11887:
-
Attachment: HDFS-11887-HDFS-7240.002.patch

> XceiverClientManager should close XceiverClient on eviction from cache
> --
>
> Key: HDFS-11887
> URL: https://issues.apache.org/jira/browse/HDFS-11887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11887-HDFS-7240.001.patch, 
> HDFS-11887-HDFS-7240.002.patch
>
>
> XceiverClientManager doesn't close client on eviction which can leak 
> resources.
> {code}
> public XceiverClientManager(Configuration conf) {
> .
> .
> .
> public void onRemoval(
>     RemovalNotification<String, WithAccessInfo> removalNotification) {
>   // If the reference count is not 0, this xceiver client should not
>   // be evicted, add it back to the cache.
>   WithAccessInfo info = removalNotification.getValue();
>   if (info.hasRefence()) {
>     synchronized (XceiverClientManager.this.openClient) {
>       XceiverClientManager.this
>           .openClient.put(removalNotification.getKey(), info);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10816) TestComputeInvalidateWork#testDatanodeReRegistration fails due to race between test and replication monitor

2017-06-02 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034741#comment-16034741
 ] 

Eric Badger commented on HDFS-10816:


Not sure why hadoopqa isn't running on the latest patches. [~kihwal], can you 
kick the hadoopqa bot?

> TestComputeInvalidateWork#testDatanodeReRegistration fails due to race 
> between test and replication monitor
> ---
>
> Key: HDFS-10816
> URL: https://issues.apache.org/jira/browse/HDFS-10816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-10816.001.patch, HDFS-10816.002.patch, 
> HDFS-10816-branch-2.002.patch
>
>
> {noformat}
> java.lang.AssertionError: Expected invalidate blocks to be the number of DNs 
> expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork.testDatanodeReRegistration(TestComputeInvalidateWork.java:160)
> {noformat}
> The test fails because of a race condition between the test and the 
> replication monitor. The default replication monitor interval is 3 seconds, 
> which is just about how long the test normally takes to run. The test deletes 
> a file and then subsequently gets the namesystem writelock. However, if the 
> replication monitor fires in between those two instructions, the test will 
> fail as it will itself invalidate one of the blocks. This can be easily 
> reproduced by removing the sleep() in the ReplicationMonitor's run() method 
> in BlockManager.java, so that the replication monitor executes as quickly as 
> possible and exacerbates the race. 
> To fix the test all that needs to be done is to turn off the replication 
> monitor. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11918) Ozone: Encapsulate KSM metadata key into protobuf messages for better (de)serialization

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034593#comment-16034593
 ] 

Hadoop QA commented on HDFS-11918:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11918 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870956/HDFS-11918-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 74a89e5c2c48 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5cdd880 |
| Default Java | 1.8.0_131 |
| findbugs | 

[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034557#comment-16034557
 ] 

Hadoop QA commented on HDFS-11916:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870953/HDFS-11916.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux abca46b1c4f1 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d9084e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19746/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19746/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19746/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19746/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 

[jira] [Commented] (HDFS-11781) Ozone: KSM: Add deleteKey

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034539#comment-16034539
 ] 

Hadoop QA commented on HDFS-11781:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 10 new 
+ 3 unchanged - 0 fixed = 13 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockServer |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870944/HDFS-11781-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 

[jira] [Commented] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034472#comment-16034472
 ] 

Hadoop QA commented on HDFS-11914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870933/HDFS-11914.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6b582713e0b6 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d9084e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19743/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19743/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19743/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19743/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more diagnosis info for fsimage transfer failure.
> 

[jira] [Resolved] (HDFS-11917) Why when using the hdfs nfs gateway, a file which is smaller than one block size required a block

2017-06-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-11917.

Resolution: Not A Problem
  Assignee: Weiwei Yang

> Why when using the hdfs nfs gateway, a file which is smaller than one block 
> size required a block
> -
>
> Key: HDFS-11917
> URL: https://issues.apache.org/jira/browse/HDFS-11917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.8.0
>Reporter: BINGHUI WANG
>Assignee: Weiwei Yang
>
> I use the Linux shell to put a file into HDFS through the HDFS NFS gateway. 
> I found that if the file is smaller than one block (128M), it still takes 
> one block (128M) of HDFS storage this way. But after a few minutes the 
> excess storage is released.
> e.g.: if I put a 60M file into HDFS through the HDFS NFS gateway, it takes 
> one block (128M) at first. After a few minutes the excess storage (68M) is 
> released. The file only uses 60M of HDFS storage in the end.
> Why does it behave this way?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11917) Why when using the hdfs nfs gateway, a file which is smaller than one block size required a block

2017-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034455#comment-16034455
 ] 

Weiwei Yang commented on HDFS-11917:


Hi [~fireling]

This is not a bug; this is how HDFS works. A file (if smaller than the block 
size) is always stored in a single block, but that doesn't mean the file 
consumes a full block of space on the system. The DataNode has a background 
thread that calculates disk usage and reports it back to the NN at an 
interval defined by "fs.du.interval", so it takes a while before the NN 
acknowledges the actual space utilized. I am closing this as Not A Problem. 
Next time, you can try raising your question on the user mailing list before 
filing a JIRA. Feel free to reopen if you disagree. Thank you.
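
For illustration, a minimal sketch of how that interval is resolved from 
configuration (the default of 600000 ms, i.e. 10 minutes, comes from 
core-default.xml; the snippet itself is illustrative, not from HDFS code):

{code}
import org.apache.hadoop.conf.Configuration;

// The DataNode's disk-usage refresh thread runs every fs.du.interval
// milliseconds; until it fires, the NN still sees the provisional usage.
Configuration conf = new Configuration();   // picks up core-site.xml overrides
long duIntervalMs = conf.getLong("fs.du.interval", 600000L);
System.out.println("disk-usage refresh interval: " + duIntervalMs + " ms");
{code}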

> Why when using the hdfs nfs gateway, a file which is smaller than one block 
> size required a block
> -
>
> Key: HDFS-11917
> URL: https://issues.apache.org/jira/browse/HDFS-11917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.8.0
>Reporter: BINGHUI WANG
>
> I use the Linux shell to put files into HDFS through the HDFS NFS 
> gateway. I found that a file smaller than one block (128M) still takes a 
> whole block (128M) of HDFS storage when written this way, but after a few 
> minutes the excess storage is released.
> e.g.: If I put a 60M file into HDFS through the HDFS NFS gateway, it takes 
> one block (128M) at first. After a few minutes the excess storage (68M) is 
> released, and the file occupies only 60M of HDFS storage in the end.
> Why does this happen?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11918) Ozone: Encapsulate KSM metadata key into protobuf messages for better (de)serialization

2017-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034433#comment-16034433
 ] 

Weiwei Yang commented on HDFS-11918:


Attached an initial patch for this work. Please take a look at how it is used 
in {{TestKeyProto#testKeyProto}}. With this approach, if we want to parse a 
byte array from the KSM database, we can simply call

{code}
KsmKey key = KeyProtoUtils.getKsmKeyProto(bytes);
{code}

The proto class {{KsmKey}} has convenience functions to get the 
volume/bucket/key/user names, so we can get rid of string parsing, which is 
pretty annoying. [~xyao], [~anu], please let me know if you like this idea 
and whether you have any suggestions. Thank you.
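
As a rough illustration of the full round trip (only 
{{KeyProtoUtils.getKsmKeyProto}} is from the patch; the builder and getter 
names below are assumed, not confirmed against the generated message):

{code}
// Hypothetical round trip, assuming KsmKey is a protobuf-generated message
// with string fields for the volume/bucket/key names (field names assumed).
byte[] bytes = KsmKey.newBuilder()
    .setVolumeName("volume")
    .setBucketName("bucket")
    .setKeyName("key")
    .build()
    .toByteArray();                               // bytes as stored in the KSM database

KsmKey key = KeyProtoUtils.getKsmKeyProto(bytes); // parse back (from the patch)
String bucket = key.getBucketName();              // typed accessor, no string splitting
{code}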

> Ozone: Encapsulate KSM metadata key into protobuf messages for better 
> (de)serialization
> ---
>
> Key: HDFS-11918
> URL: https://issues.apache.org/jira/browse/HDFS-11918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11918-HDFS-7240.001.patch
>
>
> There are multiple types of keys stored in the KSM database:
> # Volume Key
> # Bucket Key
> # Object Key
> # User Key
> Currently they are represented as plain strings with some conventions, such as:
> # /volume
> # /volume/bucket
> # /volume/bucket/key
> # $user
> This approach makes it difficult to parse volumes/buckets/keys from the KSM 
> database. Propose to encapsulate these types of keys into protobuf messages 
> and take advantage of protobuf to serialize these classes to byte arrays 
> (and deserialize them back).
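
To make the pain point concrete, a hypothetical sketch of the string parsing 
this proposal would eliminate (illustrative only, not taken from the KSM 
source):

{code}
// Hypothetical "before" state: recovering names from the plain-string
// convention "/volume/bucket/key" stored in the KSM database.
String dbKey = "/volume/bucket/key";
String[] parts = dbKey.substring(1).split("/", 3); // fragile: relies on convention only
String volume = parts[0];
String bucket = parts.length > 1 ? parts[1] : null;
String key    = parts.length > 2 ? parts[2] : null;
{code}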



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11918) Ozone: Encapsulate KSM metadata key into protobuf messages for better (de)serialization

2017-06-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11918:
---
Status: Patch Available  (was: Open)

> Ozone: Encapsulate KSM metadata key into protobuf messages for better 
> (de)serialization
> ---
>
> Key: HDFS-11918
> URL: https://issues.apache.org/jira/browse/HDFS-11918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11918-HDFS-7240.001.patch
>
>
> There are multiple types of keys stored in the KSM database:
> # Volume Key
> # Bucket Key
> # Object Key
> # User Key
> Currently they are represented as plain strings with some conventions, such as:
> # /volume
> # /volume/bucket
> # /volume/bucket/key
> # $user
> This approach makes it difficult to parse volumes/buckets/keys from the KSM 
> database. Propose to encapsulate these types of keys into protobuf messages 
> and take advantage of protobuf to serialize these classes to byte arrays 
> (and deserialize them back).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11918) Ozone: Encapsulate KSM metadata key into protobuf messages for better (de)serialization

2017-06-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11918:
---
Attachment: HDFS-11918-HDFS-7240.001.patch

> Ozone: Encapsulate KSM metadata key into protobuf messages for better 
> (de)serialization
> ---
>
> Key: HDFS-11918
> URL: https://issues.apache.org/jira/browse/HDFS-11918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11918-HDFS-7240.001.patch
>
>
> There are multiple types of keys stored in the KSM database:
> # Volume Key
> # Bucket Key
> # Object Key
> # User Key
> Currently they are represented as plain strings with some conventions, such as:
> # /volume
> # /volume/bucket
> # /volume/bucket/key
> # $user
> This approach makes it difficult to parse volumes/buckets/keys from the KSM 
> database. Propose to encapsulate these types of keys into protobuf messages 
> and take advantage of protobuf to serialize these classes to byte arrays 
> (and deserialize them back).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


