[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-82:
---
Status: Patch Available  (was: Open)

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-82.001.patch
>
>
> As part of the containerIO refactoring, ContainerData holds the fields 
> common to all container types, and each container type will extend 
> ContainerData to add its own fields. As part of this work, the 
> ContainerStatus fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-82:
--

Assignee: Bharat Viswanadham

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-82.001.patch
>
>
> As part of the containerIO refactoring, ContainerData holds the fields 
> common to all container types, and each container type will extend 
> ContainerData to add its own fields. As part of this work, the 
> ContainerStatus fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-82:
--

 Summary: Merge ContainerData and ContainerStatus classes
 Key: HDDS-82
 URL: https://issues.apache.org/jira/browse/HDDS-82
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
 Attachments: HDDS-82.001.patch

As part of the containerIO refactoring, ContainerData holds the fields common 
to all container types, and each container type will extend ContainerData to 
add its own fields. As part of this work, the ContainerStatus fields are 
merged into ContainerData.
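
For illustration, a minimal sketch of what the merged class could look like 
(the field and method names below are hypothetical, not taken from the patch):
{code:java}
// Hypothetical sketch: ContainerData after absorbing the ContainerStatus
// fields, i.e. common metadata plus the runtime state ContainerStatus held.
public class ContainerData {
  /** Lifecycle state formerly tracked by ContainerStatus (assumed names). */
  public enum State { OPEN, CLOSED }

  // Fields common to all container types.
  private final long containerId;
  private State state;
  private long bytesUsed;

  public ContainerData(long containerId) {
    this.containerId = containerId;
    this.state = State.OPEN;
  }

  public synchronized State getState() {
    return state;
  }

  public synchronized void setState(State state) {
    this.state = state;
  }

  public synchronized long getBytesUsed() {
    return bytesUsed;
  }
}

// Each container type then extends ContainerData with its own fields, e.g.
// class KeyValueContainerData extends ContainerData { ... }
{code}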



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-82:
---
Attachment: HDDS-82.001.patch

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-82.001.patch
>
>
> As part of the containerIO refactoring, ContainerData holds the fields 
> common to all container types, and each container type will extend 
> ContainerData to add its own fields. As part of this work, the 
> ContainerStatus fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478544#comment-16478544
 ] 

genericqa commented on HDFS-13558:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923840/HDFS-13558.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b535f73876ee 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24238/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24238/testReport/ |
| Max. process+thread count | 2938 (vs. ulimit 

[jira] [Updated] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-16 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13578:

Attachment: HDFS-13578-HDFS-12943.000.patch

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch
>
>
> For the read-only methods in {{ClientProtocol}}, we may want to mark them 
> with a {{@ReadOnly}} annotation, and then check for it in the proxy provider 
> for the observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478525#comment-16478525
 ] 

Xiao Chen commented on HDFS-13281:
--

I understand the no-edek part. The argument is that the test should verify 
that the encrypted bytes written are still encrypted bytes.
bq. If there is no edek, then how will client decrypt ?
The client does not. The test verifies that the bytes read back are the same, 
as you're doing now. Only the bytes being verified should be encrypted bytes, 
because, as explained, there should be no circumstance where you'd want to 
write clear text to it. 
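
For what it's worth, a minimal sketch of the verification being asked for 
(the {{readFile}} helper and path names here are assumptions, not the actual 
test code):
{code:java}
// Write ciphertext through /.reserved/raw/zone/file, then check that the
// bytes read back through the raw path are byte-identical, i.e. still
// encrypted.
byte[] rawRead = readFile(fs, new Path("/.reserved/raw/zone/file"));
Assert.assertArrayEquals(cipherBytesWritten, rawRead);

// Reading through the normal path decrypts, so the result must NOT match
// the ciphertext; otherwise clear text was written to the raw path.
byte[] decrypted = readFile(fs, new Path("/zone/file"));
Assert.assertFalse(Arrays.equals(cipherBytesWritten, decrypted));
{code}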

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and that directory happens to 
> be in an EZ, then the namenode *should not* create an edek and should just 
> copy the raw bytes from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-16 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13578:
---

 Summary: Add ReadOnly annotation to methods in ClientProtocol
 Key: HDFS-13578
 URL: https://issues.apache.org/jira/browse/HDFS-13578
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


For the read-only methods in {{ClientProtocol}}, we may want to mark them with 
a {{@ReadOnly}} annotation, and then check for it in the proxy provider for 
the observer.
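
As a rough illustration, the annotation and the check could look something 
like this (a sketch only; the retention policy and helper names are 
assumptions, not the actual patch):
{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

/** Marks ClientProtocol methods that do not modify namespace state. */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnly {
}

class ObserverRouting {
  /** The proxy provider can route annotated calls to an observer node. */
  static boolean isReadOnly(Method method) {
    return method.isAnnotationPresent(ReadOnly.class);
  }
}
{code}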



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13577) RBF: Failed mount point operations, returns wrong exit code.

2018-05-16 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HDFS-13577:


 Summary: RBF: Failed mount point operations, returns wrong exit 
code.
 Key: HDFS-13577
 URL: https://issues.apache.org/jira/browse/HDFS-13577
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Y. SREENIVASULU REDDY


If a client adds a mount point containing a special character, the mount 
point add fails.

It prints a message like
{noformat}
18/05/17 09:58:34 DEBUG ipc.ProtobufRpcEngine: Call: addMountTableEntry took 
19ms Cannot add mount point /testSpecialCharMountPointCreation/test/
{noformat}
In this case the command should return a non-zero exit code.
{code:java|title=RouterAdmin.java|borderStyle=solid}
Exception debugException = null;
exitCode = 0;
try {
  if ("-add".equals(cmd)) {
    if (addMount(argv, i)) {
      System.out.println("Successfully added mount point " + argv[i]);
    }
{code}
We should handle this kind of case as well; see the sketch below.
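
A minimal sketch of the handling being suggested, following the snippet above 
(not the actual patch):
{code:java}
if ("-add".equals(cmd)) {
  if (addMount(argv, i)) {
    System.out.println("Successfully added mount point " + argv[i]);
  } else {
    // Report the failure and return a non-zero exit code to the caller.
    System.err.println("Cannot add mount point " + argv[i]);
    exitCode = -1;
  }
}
{code}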



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13576) RBF: Add destination path length validation for add/update mount entry

2018-05-16 Thread Dibyendu Karmakar (JIRA)
Dibyendu Karmakar created HDFS-13576:


 Summary: RBF: Add destination path length validation for 
add/update mount entry
 Key: HDFS-13576
 URL: https://issues.apache.org/jira/browse/HDFS-13576
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Dibyendu Karmakar


Currently there is no validation of the destination path length while adding 
or updating a mount entry. But when trying to create a directory through such 
a mount entry, a 
RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException)
 is thrown with an exception message like "maximum path component name limit of 
... directory / is exceeded: limit=255 length=1817".
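
A minimal sketch of the validation that could run at add/update time (the 
method is hypothetical; 255 mirrors the default of 
dfs.namenode.fs-limits.max-component-length):
{code:java}
/** Reject destination paths whose components exceed the NameNode limit. */
private static void validateDestination(String dest) throws IOException {
  final int maxComponentLength = 255;
  for (String component : dest.split("/")) {
    if (component.length() > maxComponentLength) {
      throw new IOException("maximum path component name limit of "
          + component + " is exceeded: limit=" + maxComponentLength
          + " length=" + component.length());
    }
  }
}
{code}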



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478432#comment-16478432
 ] 

Chris Douglas commented on HDFS-12615:
--

bq. Any Jira which has not been released, we should certainly revert. 
A blanket veto over a clerical error is not on the table. A set of unrelated 
features and fixes was unhelpfully linked from a single JIRA. Let's clean it 
up and move significant features, like security, to design doc + branch, 
leaving bug fixes and the like to normal dev on trunk, as we would with any 
other module. Particularly for code that only affects RBF, isolating it on a 
branch doesn't do anything for the stability of trunk.

bq. I don't quite agree with the patch count logic, for I have seen patches 
which are more than 200 KB at times. Let us just say "we know a big feature 
when we see it"
I was referring to the 5/9 patches implementing these features. Legalese 
defining when a branch is required is not a good use of anyone's time. Let's 
not belabor it. However, you requested _all_ development move to a branch, 
based on an impression that you explicitly have not verified. Without seeking 
objective criteria that apply to all situations, you need to research this 
particular patchset to ground your claims about _it_.

Let's be concrete. You have the impression that (a) changes committed to trunk 
affect non-RBF deployments and (b) RBF features miss critical cases. Those are 
demonstrable.

bq. I *feel* that a JIRA that has these following features [...] *Sounds like* 
a major undertaking in my mind and *feels like* these do need a branch and 
these are significant features.
RBF has a significant _roadmap_. A flat list of tasks, with mixed granularity, 
is a poor way to organize it.

bq. [...] since they are not very keen on communicating details, I am proposing 
that you move all this work to a branch and bring it back when the whole idea 
is baked
Not everyone participating in RBF development has worked in OSS projects 
before. It's fine to explore ideas in code, collaboratively, in JIRA. Failing 
to signal which JIRAs are variations on a theme (e.g., protobuf boilerplate), 
prototypes, or features affecting non-RBF: that's not OK. Reviewers can't 
follow every JIRA; they need help finding the relevant signal.

Your confidence that people working on RBF are applying reasonable filters and 
*soliciting* others' opinions is extremely important. From a random sampling, 
that seems to be happening. Reviewing the code in issues [~arpitagarwal] 
cited... they may "look non-trivial at a glance", but after a slightly longer 
glance, they look pretty straightforward to me. Or at least, they follow from 
the design of the router.

bq. Perhaps there is a communication problem here. I am not sure where your 
assumption comes from; reading the comments on the security patch, I am not 
able to come to that conclusion. Please take a look at HDFS-12284. [...] If we 
both are reading the same comments, I am at a loss on how you came to the 
conclusion that it was a proposal and not a patch.
Committing a patch that claims to add security, without asking for broader 
review, would be absurd. Reading a discussion about a lack of clarity on 
_delegation tokens_ and concluding that group believes its implementation is 
ready to merge... that requires more assumptions about those developers' intent 
and competence than to conclude "prototype". _However_ if it were being 
developed in a branch, that signal would be unambiguous.

bq. It will benefit the RBF feature as well as future maintainers to have 
design notes or a detailed change description beyond a 1-line summary because 
most are large patches
From the samples I read, this is a recurring problem. Many RBF JIRAs should 
have more prose around the design tradeoffs, not just comments on the code. 
Taking HDFS-13224 as an example, one would need to read the patch to 
understand what was implemented, and how. Again, most of these follow from the 
RBF design and the larger patch size often comes from PB boilerplate, but 
raising the salient details both for review and for future maintainers (I'm 
glad you brought this up) is not optional.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478424#comment-16478424
 ] 

genericqa commented on HDDS-10:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
11s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 15 new + 0 unchanged - 0 fixed = 15 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 62 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 12 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923831/HDDS-10-HDDS-4.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  shellcheck  shelldocs  |
| uname | Linux eda4f83d7c04 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 938baa2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDDS-Build/126/artifact/out/diff-patch-shellcheck.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/126/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/126/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/126/testReport/ |
| asflicense | 

[jira] [Commented] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478419#comment-16478419
 ] 

genericqa commented on HDDS-7:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 22s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478411#comment-16478411
 ] 

genericqa commented on HDFS-13570:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13570 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923812/HDFS-13570.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 65f08f202322 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / be53969 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24235/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24235/testReport/ |
| Max. process+thread count | 3394 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available

2018-05-16 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478408#comment-16478408
 ] 

Yiqun Lin commented on HDFS-10803:
--

[~shahrs87], feel free to backport to branch-2.8 :).

> TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails 
> intermittently due to no free space available
> 
>
> Key: HDFS-10803
> URL: https://issues.apache.org/jira/browse/HDFS-10803
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HDFS-10803.001.patch
>
>
> The test {{TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools}} 
> fails intermittently. The stack 
> infos(https://builds.apache.org/job/PreCommit-HDFS-Build/16534/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithMultipleNameNodes/testBalancing2OutOf3Blockpools/):
> {code}
> java.io.IOException: Creating block, no free space available
>   at 
> org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.<init>(SimulatedFSDataset.java:151)
>   at 
> org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.injectBlocks(SimulatedFSDataset.java:580)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.injectBlocks(MiniDFSCluster.java:2679)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.unevenDistribution(TestBalancerWithMultipleNameNodes.java:405)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools(TestBalancerWithMultipleNameNodes.java:516)
> {code}
> The error message means that the datanode's capacity has been used up and 
> there is no space left to create a new file block. 
> Looking into the code, the main reason seems to be that the {{capacities}} 
> for the cluster are not correctly constructed in the second cluster startup, 
> before blocks are redistributed in the test.
> The related code:
> {code}
>   // Here we do redistribute blocks nNameNodes times for each node,
>   // we need to adjust the capacities. Otherwise it will cause the no 
>   // free space errors sometimes.
>   final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
>   .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(nNameNodes))
>   .numDataNodes(nDataNodes)
>   .racks(racks)
>   .simulatedCapacities(newCapacities)
>   .format(false)
>   .build();
>   LOG.info("UNEVEN 11");
> ...
> for(int n = 0; n < nNameNodes; n++) {
>   // redistribute blocks
>   final Block[][] blocksDN = TestBalancer.distributeBlocks(
>   blocks[n], s.replication, distributionPerNN);
> 
>   for(int d = 0; d < blocksDN.length; d++)
> cluster.injectBlocks(n, d, Arrays.asList(blocksDN[d]));
>   LOG.info("UNEVEN 13: n=" + n);
> }
> {code}
> This means the totalUsed value is increased by {{nNameNodes*usedSpacePerNN}} 
> rather than {{usedSpacePerNN}}.
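
In other words, a sketch of the adjustment (variable names follow the test; 
the scaling itself is the point):
{code:java}
// Each of the nNameNodes block pools injects usedSpacePerNN of blocks into
// every datanode, so the simulated capacities must be scaled accordingly
// before the second cluster startup, or injection runs out of free space.
long[] newCapacities = new long[nDataNodes];
for (int i = 0; i < nDataNodes; i++) {
  newCapacities[i] = capacities[i] * nNameNodes;
}
{code}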



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478402#comment-16478402
 ] 

Hudson commented on HDFS-13559:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14214 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14214/])
HDFS-13559. TestBlockScanner does not close TestContext properly. (inigoiri: 
rev 454de3b543c8b00a9ab566c7d1c64d7e4cffee0f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java


> TestBlockScanner does not close TestContext properly
> 
>
> Key: HDFS-13559
> URL: https://issues.apache.org/jira/browse/HDFS-13559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13559.000.patch, HDFS-13559.001.patch
>
>
> Without closing ctx in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
> testAppendWhileScanning, some tests fail on Windows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, 
> Time elapsed: 113.398 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] 
> testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)
>  Time elapsed: 0.031 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestBlockScanner.testAppendWhileScanning:899 » IO 
> Could not fully delete E:\OS...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testCorruptBlockHandling:488 » IO 
> Could not fully delete E:\O...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testDatanodeCursor:531 » IO Could not 
> fully delete E:\OSS\had...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testMarkSuspectBlock:717 » IO Could 
> not fully delete E:\OSS\h...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » 
> IO{color}
> {color:#d04437}[ERROR] TestBlockScanner.testScanRateLimit:450 » IO Could not 
> fully delete E:\OSS\hado...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 
> 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13559:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~huanbang1993] for the fix.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestBlockScanner does not close TestContext properly
> 
>
> Key: HDFS-13559
> URL: https://issues.apache.org/jira/browse/HDFS-13559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13559.000.patch, HDFS-13559.001.patch
>
>
> Without closing ctx in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
> testAppendWhileScanning, some tests fail on Windows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, 
> Time elapsed: 113.398 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] 
> testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)
>  Time elapsed: 0.031 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestBlockScanner.testAppendWhileScanning:899 » IO 
> Could not fully delete E:\OS...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testCorruptBlockHandling:488 » IO 
> Could not fully delete E:\O...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testDatanodeCursor:531 » IO Could not 
> fully delete E:\OSS\had...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testMarkSuspectBlock:717 » IO Could 
> not fully delete E:\OSS\h...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » 
> IO{color}
> {color:#d04437}[ERROR] TestBlockScanner.testScanRateLimit:450 » IO Could not 
> fully delete E:\OSS\hado...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 
> 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 

[jira] [Commented] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478399#comment-16478399
 ] 

Anbang Hu commented on HDFS-13558:
--

Thanks [~elgoiri] for bringing up a good point. The cluster is set to null in 
[^HDFS-13558.003.patch].
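
For reference, the usual teardown pattern (a sketch; the actual patch may 
differ in details):
{code:java}
@After
public void tearDown() {
  if (cluster != null) {
    cluster.shutdown();
    // Null the reference so a later failed setup is not masked by a stale
    // cluster being shut down twice.
    cluster = null;
  }
}
{code}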

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch, HDFS-13558.003.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13558:
-
Attachment: HDFS-13558.003.patch

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch, HDFS-13558.003.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478395#comment-16478395
 ] 

Íñigo Goiri commented on HDFS-13559:


[^HDFS-13559.001.patch] LGTM.
The tests run correctly 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24234/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockScanner/].
+1
Committing.
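
For reference, the cleanup pattern at issue is plain try-with-resources. A minimal sketch, not the committed patch (the test body is elided, and it assumes TestContext implements Closeable, as the close() calls in the patch suggest):
{code:java}
// Sketch only: closing the TestContext via try-with-resources shuts down the
// MiniDFSCluster it wraps, releasing the target\test\data\dfs\name1 directory
// that Windows cannot delete while file handles are still open.
@Test(timeout = 60000)
public void testMarkSuspectBlock() throws Exception {
  Configuration conf = new Configuration();
  try (TestContext ctx = new TestContext(conf, 1)) {
    // ... original test body exercising the block scanner ...
  } // ctx.close() runs here even if an assertion fails
}
{code}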

> TestBlockScanner does not close TestContext properly
> 
>
> Key: HDFS-13559
> URL: https://issues.apache.org/jira/browse/HDFS-13559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13559.000.patch, HDFS-13559.001.patch
>
>
> Without closing ctx in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
> testAppendWhileScanning, some tests fail on Windows:
> [INFO] Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
> [ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 113.398 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
> [ERROR] testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 0.031 s <<< ERROR!
> java.io.IOException: Could not fully delete E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
>  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102)
>  at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366)
>  at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> ...
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   TestBlockScanner.testAppendWhileScanning:899 » IO Could not fully delete E:\OS...
> [ERROR]   TestBlockScanner.testCorruptBlockHandling:488 » IO Could not fully delete E:\O...
> [ERROR]   TestBlockScanner.testDatanodeCursor:531 » IO Could not fully delete E:\OSS\had...
> [ERROR]   TestBlockScanner.testMarkSuspectBlock:717 » IO Could not fully delete E:\OSS\h...
> [ERROR]   TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » IO
> [ERROR]   TestBlockScanner.testScanRateLimit:450 » IO Could not fully delete E:\OSS\hado...
> [ERROR]   TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169 » IO
> [ERROR]   TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169 » IO
> [INFO]
> [ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478392#comment-16478392
 ] 

Íñigo Goiri commented on HDFS-13558:


[^HDFS-13558.002.patch] looks good, and the failed unit tests are unrelated.
I would set the cluster to null after shutting it down, just in case.
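
Concretely, the suggestion amounts to something like this in the test's teardown (a sketch only, assuming the cluster is held in a field as in the current test):
{code:java}
// Sketch of the suggested teardown: always shut the MiniDFSCluster down and
// drop the reference so a later test cannot reuse a stale cluster.
@After
public void shutDownCluster() {
  if (cluster != null) {
    cluster.shutdown();
    cluster = null;
  }
}
{code}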

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down the cluster properly:
> [INFO] Running org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 32.32 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
> [ERROR] testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)  Time elapsed: 0.034 s <<< ERROR!
> java.io.IOException: Could not fully delete E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
>  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77)
>  at org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>  at ...

[jira] [Commented] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478384#comment-16478384
 ] 

genericqa commented on HDFS-13558:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 25m 21s | trunk passed |
| +1 | compile | 1m 4s | trunk passed |
| +1 | checkstyle | 0m 49s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 12m 8s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 52s | trunk passed |
| +1 | javadoc | 0m 48s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 1m 0s | the patch passed |
| +1 | javac | 1m 0s | the patch passed |
| +1 | checkstyle | 0m 49s | the patch passed |
| +1 | mvnsite | 1m 3s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 19s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 8s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 81m 34s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 143m 28s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.TestDataTransferKeepalive |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13558 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923811/HDFS-13558.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b2f2126d8745 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / be53969 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24236/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24236/testReport/ |
| Max. process+thread count | 3559 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24236/console |
| Powered by | Apache 

[jira] [Commented] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478383#comment-16478383
 ] 

genericqa commented on HDFS-13559:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 31s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 24m 8s | trunk passed |
| +1 | compile | 0m 51s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 0m 58s | trunk passed |
| +1 | shadedclient | 11m 5s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 42s | trunk passed |
| +1 | javadoc | 0m 41s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| +1 | checkstyle | 0m 35s | the patch passed |
| +1 | mvnsite | 0m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 28s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 50s | the patch passed |
| +1 | javadoc | 0m 40s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 120m 42s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 176m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13559 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923804/HDFS-13559.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8fc624b3fed9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / be53969 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24234/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24234/testReport/ |
| Max. process+thread count | 3868 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478376#comment-16478376
 ] 

Íñigo Goiri commented on HDFS-13563:


OK, can you post a patch with the needed changes, and then, based on that, we can see what to do?

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13563.000.patch
>
>
> [Daily Windows build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows TestDFSAdminWithHA has 4 timeout tests with "test timed out after 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-16 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478373#comment-16478373
 ] 

Ajay Kumar commented on HDDS-10:


[~xyao] ya, the ozone-default.xml changes are also unintended.

[~elek] the last patch didn't have the binary files for building the kdc image. With the binary files the patch is much bigger and looks messy, at least to me. We had a discussion with [~xyao] and [~anu] about the unsecured kdc part. If we don't publish any port of the kdc container, it will only be accessible from within the docker network, which should satisfy the security requirements. If we agree on this, we can remove the kdc Dockerfile and other related files and start the kdc from a prebuilt image instead.

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-05-16 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-10:
---
Attachment: HDDS-10-HDDS-4.02.patch

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478366#comment-16478366
 ] 

Anbang Hu commented on HDFS-13563:
--

[~elgoiri], most of them were due to slow DNS resolution on Windows. The remaining 4 must have some other cause.
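
For the DNS-bound cases, the timeout fires against the @Test annotation's budget. Purely as an illustration of the mitigation pattern (not necessarily what the patch does, and the 300000 value is hypothetical):
{code:java}
// Illustration only: InetAddress.getCanonicalHostName() can block for seconds
// per reverse-DNS lookup on Windows, so a larger per-test budget avoids the
// spurious timeout while the real DNS cost is addressed separately.
@Test(timeout = 300000)  // original tests use a 30000 ms budget
public void testRefreshCallQueueNN1DownNN2Down() throws Exception {
  // ... cluster setup and refreshCallQueue assertions as in the original ...
}
{code}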

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13563.000.patch
>
>
> [Daily Windows build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows TestDFSAdminWithHA has 4 timeout tests with "test timed out after 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-16 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-7:
--
Attachment: HDDS-7-HDDS-4.01.patch

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch, 
> HDDS-7-HDDS-4.01.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478363#comment-16478363
 ] 

Anbang Hu commented on HDFS-13560:
--

These failures might be related to the missing Long.MAX_VALUE handling. Updated the patch with [^HDFS-13560.003.patch].
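
For context, the trace dies in NativeIO$Windows.extendWorkingSetSize, called from DataNode.startDataNode. One plausible reading of the Long.MAX_VALUE remark is a guard of this shape (a hypothetical sketch, not the actual patch):
{code:java}
// Hypothetical guard, for illustration only: skip the native call when the
// configured locked-memory size is unbounded, since Windows rejects oversized
// working-set requests with Win32 error 1450.
long maxLockedMemory = conf.getLong(
    DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY,
    DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_DEFAULT);
if (Shell.WINDOWS && maxLockedMemory > 0
    && maxLockedMemory != Long.MAX_VALUE) {
  NativeIO.Windows.extendWorkingSetSize(maxLockedMemory);
}
{code}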

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch, HDFS-13560.003.patch
>
>
> On Windows, there are 30 tests in the HDFS component giving an error like the following:
> [ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
> [ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
> 1450: Insufficient system resources exist to complete the requested service.
>  at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> The involved tests are
> {code:java}
> 

[jira] [Updated] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13560:
-
Attachment: HDFS-13560.003.patch

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch, HDFS-13560.003.patch
>
>
> On Windows, there are 30 tests in the HDFS component giving an error like the following:
> [ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
> [ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
> 1450: Insufficient system resources exist to complete the requested service.
>  at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> The involved tests are
> {code:java}
> TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist,TestLazyWriter#testLazyPersistBlocksAreSaved,TestDirectoryScanner#testDeleteBlockOnTransientStorage,TestDirectoryScanner#testRetainBlockOnPersistentStorage,TestDirectoryScanner#testExceptionHandlingWhileDirectoryScan,TestDirectoryScanner#testDirectoryScanner,TestDirectoryScanner#testThrottling,TestDirectoryScanner#testDirectoryScannerInFederatedCluster,TestNameNodeMXBean#testNameNodeMXBeanInfo{code}
> [ERROR] 

[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-16 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-7:
--
Attachment: (was: HDDS-7-HDDS-4.01.patch)

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-16 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-7:
--
Attachment: HDDS-7-HDDS-4.01.patch

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch, 
> HDDS-7-HDDS-4.01.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478350#comment-16478350
 ] 

Anbang Hu commented on HDFS-13560:
--

hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting doesn't seem to 
be related. However, hadoop.hdfs.server.datanode.TestDirectoryScanner might be 
related.

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch
>
>
> On Windows, there are 30 tests in the HDFS component giving an error like the following:
> [ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
> [ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
> 1450: Insufficient system resources exist to complete the requested service.
>  at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> The involved tests are
> {code:java}
> 

[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478347#comment-16478347
 ] 

Íñigo Goiri commented on HDFS-13560:


The following failed unit tests seem suspicious:
* hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
* hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
* hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting

I'm not sure what the issue is, but it's definitely related to this patch.

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch
>
>
> On Windows, there are 30 tests in the HDFS component giving an error like the following:
> [ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
> [ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
> 1450: Insufficient system resources exist to complete the requested service.
>  at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
>  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
>  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> The involved tests are
> {code:java}
> 

[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478346#comment-16478346
 ] 

Anbang Hu commented on HDFS-13563:
--

Modified the description after [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265].

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13563.000.patch
>
>
> [Daily Windows build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows TestDFSAdminWithHA has 4 timeout tests with "test timed out after 30000 milliseconds"
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13563:
-
Description: 
[Daily Windows build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows TestDFSAdminWithHA has 4 timeout tests with "test timed out after 30000 milliseconds"
{code:java}
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
{code}
 

 

 

  was:
Daily Windows build shows TestDFSAdminWithHA has 4 timeout tests. Local run confirms them:

[INFO] Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
[ERROR] Tests run: 45, Failures: 0, Errors: 45, Skipped: 0, Time elapsed: 448.371 s <<< FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
[ERROR] testRestoreFailedStorageNN1UpNN2Down(org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA)  Time elapsed: 30.01 s <<< ERROR!
java.lang.Exception: test timed out after 30000 milliseconds
 at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
 at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
 at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
 at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
 at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256)
 at org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224)
 at org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179)
 at org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90)
 at org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:521)
 at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:511)
 at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:400)
 at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115)
 at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:162)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:889)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:725)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
 at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215)
 at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090)
 at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
 at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
 at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
 at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:114)
 at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
 at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
 at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:85)
 at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRestoreFailedStorageNN1UpNN2Down(TestDFSAdminWithHA.java:235)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at ...

[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478345#comment-16478345
 ] 

Íñigo Goiri commented on HDFS-13563:


Is this the same issue as the extra DNS resolution time?

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13563.000.patch
>
>
> Daily Windows build shows TestDFSAdminWithHA has 4 timeout tests. Local run confirms them:
> [INFO] Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
> [ERROR] Tests run: 45, Failures: 0, Errors: 45, Skipped: 0, Time elapsed: 448.371 s <<< FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
> [ERROR] testRestoreFailedStorageNN1UpNN2Down(org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA)  Time elapsed: 30.01 s <<< ERROR!
> java.lang.Exception: test timed out after 30000 milliseconds
>  at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
>  at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
>  at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
>  at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
>  at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256)
>  at org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224)
>  at org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179)
>  at org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90)
>  at org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:521)
>  at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:511)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:400)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336)
>  at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:162)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:889)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:725)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
>  at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
>  at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:114)
>  at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
>  at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
>  at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:85)
>  at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRestoreFailedStorageNN1UpNN2Down(TestDFSAdminWithHA.java:235)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at ...

[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478341#comment-16478341
 ] 

Anu Engineer commented on HDFS-12615:
-

{quote}Given that we discarded reverting, I'm not sure a branch makes sense for 
what is left in this JIRA.
{quote}
Just to make sure that we are all on the same page: what I proposed was a 
compromise based on what [~linyiqun] mentioned. Any JIRA which has not been 
released, we *should* certainly revert. 

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13563:
-
Description: 
{color:#333333}Daily Windows build shows TestDFSAdminWithHA has 4 timeout 
tests. Local run confirms them:{color}

{color:#d04437}[INFO] Running 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA{color}
 {color:#d04437}[ERROR] Tests run: 45, Failures: 0, Errors: 45, Skipped: 0, 
Time elapsed: 448.371 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA{color}
 {color:#d04437}[ERROR] 
testRestoreFailedStorageNN1UpNN2Down(org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA)
 Time elapsed: 30.01 s <<< ERROR!{color}
 {color:#d04437}java.lang.Exception: test timed out after 30000 
milliseconds{color}
 {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
Method){color}
 {color:#d04437} at 
java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
 {color:#d04437} at 
java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
 {color:#d04437} at 
java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
 {color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256){color}
 {color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224){color}
 {color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179){color}
 {color:#d04437} at 
org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90){color}
 {color:#d04437} at 
org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:521){color}
 {color:#d04437} at 
org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:511){color}
 {color:#d04437} at 
org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:400){color}
 {color:#d04437} at 
org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115){color}
 {color:#d04437} at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:162){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:889){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:725){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:114){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:85){color}
 {color:#d04437} at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRestoreFailedStorageNN1UpNN2Down(TestDFSAdminWithHA.java:235){color}
 {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
 {color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
 {color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
 {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
 {color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
 {color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
 {color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
 {color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
 {color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[ERROR] 
testListOpenFilesNN1UpNN2Down(org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA) 
Time elapsed: 10.063 s <<< ERROR!{color}
 

[jira] [Updated] (HDFS-13572) [umbrella] Non-blocking HDFS Access for H3

2018-05-16 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HDFS-13572:
-
Attachment: Nonblocking HDFS Access.pdf

> [umbrella] Non-blocking HDFS Access for H3
> --
>
> Key: HDFS-13572
> URL: https://issues.apache.org/jira/browse/HDFS-13572
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs async
>Affects Versions: 3.0.0
>Reporter: stack
>Priority: Major
> Attachments: Nonblocking HDFS Access.pdf
>
>
> An umbrella JIRA for supporting non-blocking HDFS access in h3.
> This issue has provenance in the stalled HDFS-9924 but would like to vault 
> over what was going on over there, in particular, focus on an async API for 
> hadoop3+ unencumbered by worries about how to make it work in hadoop2.
> Let me post a WIP design. Would love input/feedback (We make mention of the 
> HADOOP-12910 call for spec but as future work -- hopefully that's ok). Was 
> thinking of cutting a feature branch if all good after a bit of chat.
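
Purely as an illustration of the API shape under discussion (hypothetical 
names, not the design in the attached PDF), a non-blocking surface for h3 
might look like the following sketch:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch only -- not the API from "Nonblocking HDFS Access.pdf".
interface AsyncDfsClient {
  // Completes once the NameNode has durably created the file;
  // no caller thread blocks while the RPC is in flight.
  CompletableFuture<Void> create(String path);

  // Completes with up to length bytes read starting at offset.
  CompletableFuture<ByteBuffer> read(String path, long offset, int length);
}
{code}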



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478334#comment-16478334
 ] 

genericqa commented on HDFS-13560:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 38m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 38m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
43s{color} | {color:green} root: The patch generated 0 new + 120 unchanged - 1 
fixed = 120 total (was 121) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}274m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13560 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-13561) TestTransferFsImage#testGetImageTimeout times out intermittently on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13561:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

This is related to slow DNS resolution on Windows machines.
Please see [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.
Mark this JIRA as Won't Fix.
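
As a quick sanity check on an affected machine, a JDK-only probe (illustrative 
only, not part of any patch here) can time the same reverse lookup that the 
timed-out stack traces show hanging in Inet4AddressImpl.getHostByAddr:

{code:java}
import java.net.InetAddress;

public class ReverseDnsProbe {
  public static void main(String[] args) throws Exception {
    InetAddress local = InetAddress.getLocalHost();
    long start = System.nanoTime();
    // getCanonicalHostName() performs the reverse lookup via getHostByAddr(),
    // the native frame at the top of the timed-out traces.
    String name = local.getCanonicalHostName();
    long ms = (System.nanoTime() - start) / 1_000_000;
    System.out.println(name + " resolved in " + ms + " ms");
  }
}
{code}

If a single lookup takes seconds here, MiniDFSCluster startup (which resolves 
the local host repeatedly) will blow past the JUnit timeouts.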

> TestTransferFsImage#testGetImageTimeout times out intermittently on Windows
> ---
>
> Key: HDFS-13561
> URL: https://issues.apache.org/jira/browse/HDFS-13561
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13561.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage{color}
> {color:#d04437}[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.695 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage{color}
> {color:#d04437}[ERROR] 
> testGetImageTimeout(org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage)
>  Time elapsed: 5.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 5000 
> milliseconds{color}
> {color:#d04437} at java.net.SocketInputStream.socketRead0(Native 
> Method){color}
> {color:#d04437} at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116){color}
> {color:#d04437} at 
> java.net.SocketInputStream.read(SocketInputStream.java:171){color}
> {color:#d04437} at 
> java.net.SocketInputStream.read(SocketInputStream.java:141){color}
> {color:#d04437} at 
> java.io.BufferedInputStream.fill(BufferedInputStream.java:246){color}
> {color:#d04437} at 
> java.io.BufferedInputStream.read1(BufferedInputStream.java:286){color}
> {color:#d04437} at 
> java.io.BufferedInputStream.read(BufferedInputStream.java:345){color}
> {color:#d04437} at 
> sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704){color}
> {color:#d04437} at 
> sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647){color}
> {color:#d04437} at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569){color}
> {color:#d04437} at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474){color}
> {color:#d04437} at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:433){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:418){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage.testGetImageTimeout(TestTransferFsImage.java:133){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestTransferFsImage.testGetImageTimeout:133 » test 
> timed out after 5000 milli...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13568) TestDataNodeUUID#testUUIDRegeneration times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13568:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

This is related to slow DNS resolution on Windows machines.
Please see [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.
Mark this JIRA as Won't Fix.

> TestDataNodeUUID#testUUIDRegeneration times out on Windows
> --
>
> Key: HDFS-13568
> URL: https://issues.apache.org/jira/browse/HDFS-13568
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13568.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID{color}
> {color:#d04437}[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 10.059 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID{color}
> {color:#d04437}[ERROR] 
> testUUIDRegeneration(org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID) 
> Time elapsed: 10.006 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID.testUUIDRegeneration(TestDataNodeUUID.java:90){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestDataNodeUUID.testUUIDRegeneration:90 » test timed 
> out after 10000 millise...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0{color}
>  
> {color:#333333}[Windows daily 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeUUID/testUUIDRegeneration/]
>  also times out on this test.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To 

[jira] [Updated] (HDFS-13562) TestPipelinesFailover times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13562:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

The timeout is related to slow DNS resolution on Windows machines.

Please see [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.

Mark this JIRA as Won't Fix.

> TestPipelinesFailover times out on Windows
> --
>
> Key: HDFS-13562
> URL: https://issues.apache.org/jira/browse/HDFS-13562
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13562.000.patch
>
>
> testCompleteFileAfterCrashFailover times out, causing other tests to fail 
> because they cannot start cluster:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover{color}
> {color:#d04437}[ERROR] Tests run: 8, Failures: 0, Errors: 8, Skipped: 0, Time 
> elapsed: 30.813 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover{color}
> {color:#d04437}[ERROR] 
> testCompleteFileAfterCrashFailover(org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover)
>  Time elapsed: 30.009 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 30000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179){color}
> {color:#d04437} at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90){color}
> {color:#d04437} at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:521){color}
> {color:#d04437} at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:511){color}
> {color:#d04437} at 
> org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:400){color}
> {color:#d04437} at 
> org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115){color}
> {color:#d04437} at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:131){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:962){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1370){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.doWriteOverFailoverTest(TestPipelinesFailover.java:143){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testCompleteFileAfterCrashFailover(TestPipelinesFailover.java:128){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13555) TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13555:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

This is related to slow DNS resolution on Windows machines.

Please see [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.

Mark this JIRA as Won't Fix.
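
For reference, the override works as in this minimal JUnit 4 sketch 
(hypothetical test class; both timeouts are armed, and the shorter class-level 
rule fires first):

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class GlobalTimeoutExample {
  @Rule
  public Timeout testTimeout = new Timeout(30000); // 30s, applied to every test

  @Test(timeout = 180000) // the 180s method-level timeout never wins
  public void slowTest() throws Exception {
    // Fails after ~30s with "test timed out after 30000 milliseconds".
    Thread.sleep(60000);
  }
}
{code}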

> TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on 
> Windows
> 
>
> Key: HDFS-13555
> URL: https://issues.apache.org/jira/browse/HDFS-13555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13555.000.patch
>
>
> Although TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs has 
> 180s timeout, it is overridden by global timeout
> {code:java}
> @Rule
>  public Timeout testTimeout = new Timeout(30000);{code}
> {color:#d04437}[INFO] Running org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 30.741 s <<< FAILURE! - in 
> org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] 
> testInvalidNetworkTopologiesNotCachedInHdfs(org.apache.hadoop.net.TestNetworkTopology)
>  Time elapsed: 30.009 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 30000 
> milliseconds{color}
> {color:#d04437} at java.lang.Object.wait(Native Method){color}
> {color:#d04437} at java.lang.Thread.join(Thread.java:1257){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestNetworkTopology>Object.wait:-2 » test timed out 
> after 30000 milliseconds{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 
> 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478284#comment-16478284
 ] 

Rushabh S Shah commented on HDFS-13281:
---

bq. As far as this unit test is concerned, we should still write encrypted 
bytes, right? Then you verify that the read bytes are encrypted.
The whole point of this JIRA is that the namenode _will not_ create an EDEK if 
we write to the {{/.reserved/raw}} path.
If there is no EDEK, then how will the client decrypt?

{quote}
My point is: even with the ability to write to raw, under no scenario should a 
user write cleartext data to /.reserved/raw since that defeats the purpose of 
encryption.
{quote}
I understand your point, but it is difficult to incorporate into this unit test.
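
To make the intended behavior concrete, here is a rough sketch (hypothetical 
paths; a real tool such as distcp with -px would also preserve the crypto 
xattrs, which {{FileUtil.copy}} alone does not). A superuser copying through 
{{/.reserved/raw}} moves ciphertext verbatim, so the destination must keep the 
source's EDEK rather than have the namenode mint a fresh one:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class RawCopySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // /.reserved/raw exposes the on-disk (encrypted) bytes of files inside
    // encryption zones to the superuser. Both paths are hypothetical.
    Path src = new Path("/.reserved/raw/zoneA/file");
    Path dst = new Path("/.reserved/raw/zoneB/file");
    // Copies ciphertext verbatim. If createFile minted a new EDEK for dst,
    // these bytes could never be decrypted with it.
    FileUtil.copy(fs, src, fs, dst, false, conf);
  }
}
{code}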

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478281#comment-16478281
 ] 

Anbang Hu edited comment on HDFS-13552 at 5/17/18 12:21 AM:


As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See this for more details.

Mark this JIRA as Won't Fix.


was (Author: huanbang1993):
As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See this for more details.

Mark this JIRA as Won't Fix.

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> 

[jira] [Comment Edited] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478281#comment-16478281
 ] 

Anbang Hu edited comment on HDFS-13552 at 5/17/18 12:21 AM:


As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.

Mark this JIRA as Won't Fix.


was (Author: huanbang1993):
As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See this for more details.

Mark this JIRA as Won't Fix.

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13552:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.

Mark this JIRA as Won't Fix.

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> 

[jira] [Comment Edited] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478281#comment-16478281
 ] 

Anbang Hu edited comment on HDFS-13552 at 5/17/18 12:20 AM:


As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See this for more details.

Mark this JIRA as Won't Fix.


was (Author: huanbang1993):
As [~elgoiri] mentioned, this is a slow DNS resolution issue on Windows related 
to Windows machine setup.

See [this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265] for more details.

Mark this JIRA as Won't Fix.

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HDFS-13569) TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times out intermittently on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478272#comment-16478272
 ] 

Íñigo Goiri commented on HDFS-13569:


Thanks [~huanbang1993] for the resolution.
Given that this is a setup issue, I'm marking this as won't fix.
Others can use this JIRA for reference if they encounter similar issues.

> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times 
> out intermittently on Windows
> --
>
> Key: HDFS-13569
> URL: https://issues.apache.org/jira/browse/HDFS-13569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13569.000.patch
>
>
> On Windows, 
> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA may 
> time out and cause subsequent tests to fail.
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] Tests run: 6, Failures: 0, Errors: 5, Skipped: 0, Time 
> elapsed: 65.082 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] 
> testClusterIdMismatchAtStartupWithHA(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 20.003 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 20000 
> milliseconds{color}
> {color:#d04437} at java.net.NetworkInterface.getAll(Native Method){color}
> {color:#d04437} at 
> java.net.NetworkInterface.getNetworkInterfaces(NetworkInterface.java:343){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.<init>(TracerId.java:116){color}
> {color:#d04437} at 
> org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA(TestDataNodeMultipleRegistrations.java:248){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> test2NNRegistration(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 0.029 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13569) TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times out intermittently on Windows

2018-05-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13569:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times 
> out intermittently on Windows
> --
>
> Key: HDFS-13569
> URL: https://issues.apache.org/jira/browse/HDFS-13569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13569.000.patch
>
>
> On Windows, 
> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA may 
> time out and cause subsequent tests to fail.
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] Tests run: 6, Failures: 0, Errors: 5, Skipped: 0, Time 
> elapsed: 65.082 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] 
> testClusterIdMismatchAtStartupWithHA(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 20.003 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 20000 
> milliseconds{color}
> {color:#d04437} at java.net.NetworkInterface.getAll(Native Method){color}
> {color:#d04437} at 
> java.net.NetworkInterface.getNetworkInterfaces(NetworkInterface.java:343){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.<init>(TracerId.java:116){color}
> {color:#d04437} at 
> org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA(TestDataNodeMultipleRegistrations.java:248){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> test2NNRegistration(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 0.029 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478269#comment-16478269
 ] 

Xiao Chen commented on HDFS-13281:
--

Aha, my memory faded on this one.

I was thinking about [this 
comment|https://issues.apache.org/jira/browse/HDFS-13281?focusedCommentId=16419803&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16419803]
 earlier. As far as this unit test is concerned, we should still write 
encrypted bytes, right? Then you verify that the read bytes are encrypted.
My point is: even with the ability to write to raw, under no scenario should a 
user write cleartext data to /.reserved/raw since that defeats the purpose of 
encryption.
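
To make Xiao's check concrete, here is a minimal sketch (path names, the
readAll helper, and the surrounding class are illustrative, not the actual
test code): write "hello world" through the normal path in an encryption
zone, then confirm the raw view holds different bytes.
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import static org.junit.Assert.*;

public class RawReadSketch {
  static byte[] readAll(FileSystem fs, Path p) throws Exception {
    byte[] buf = new byte[(int) fs.getFileStatus(p).getLen()];
    try (FSDataInputStream in = fs.open(p)) {
      in.readFully(buf);  // DataInputStream#readFully
    }
    return buf;
  }

  static void verifyEncryptedOnDisk(FileSystem fs) throws Exception {
    // The decrypted view returns the cleartext...
    byte[] clear = readAll(fs, new Path("/zone/file"));
    assertArrayEquals("hello world".getBytes(StandardCharsets.UTF_8), clear);
    // ...while the raw view must differ, i.e. the bytes at rest are encrypted.
    byte[] raw = readAll(fs, new Path("/.reserved/raw/zone/file"));
    assertFalse(Arrays.equals(clear, raw));
  }
}
{code}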

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13569) TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times out intermittently on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478265#comment-16478265
 ] 

Anbang Hu commented on HDFS-13569:
--

Thanks [~giovanni.fumarola] and [~elgoiri] for your kind review. Since this test 
class does not fail in the daily Windows build, let's keep it as it is. The 
reason for the local timeout is that DNS resolution of the fully qualified 
domain name on Windows takes a long time. The workaround is to add the local 
IPs explicitly to the hosts file. For example, running the following in 
PowerShell as Admin before testing Hadoop will do the trick:
{code:powershell}
# Elevated PowerShell required; note that '>' overwrites any existing hosts entries.
netsh interface ip show address | findstr /C:"IP Address" | ForEach {$_.Split(":")[-1].Trim() + "    localhost"} > C:\Windows\System32\drivers\etc\hosts
{code}

> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA times 
> out intermittently on Windows
> --
>
> Key: HDFS-13569
> URL: https://issues.apache.org/jira/browse/HDFS-13569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13569.000.patch
>
>
> On Windows, 
> TestDataNodeMultipleRegistrations#testClusterIdMismatchAtStartupWithHA may 
> time out and cause subsequent tests to fail.
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] Tests run: 6, Failures: 0, Errors: 5, Skipped: 0, Time 
> elapsed: 65.082 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations{color}
> {color:#d04437}[ERROR] 
> testClusterIdMismatchAtStartupWithHA(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 20.003 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 20000 
> milliseconds{color}
> {color:#d04437} at java.net.NetworkInterface.getAll(Native Method){color}
> {color:#d04437} at 
> java.net.NetworkInterface.getNetworkInterfaces(NetworkInterface.java:343){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145){color}
> {color:#d04437} at 
> org.apache.htrace.core.TracerId.<init>(TracerId.java:116){color}
> {color:#d04437} at 
> org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1215){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1090){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA(TestDataNodeMultipleRegistrations.java:248){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> test2NNRegistration(org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations)
>  Time elapsed: 0.029 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> 

[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478257#comment-16478257
 ] 

Rushabh S Shah commented on HDFS-13281:
---

Thanks [~xiaochen] for reviewing the latest patch.
bq. So here should we write 'encryptedBytes' to a reserved raw, and verify when 
reading it from p2,
The whole point of this jira is that the namenode shouldn't create an edek when 
the client writes to a {{/.reserved/raw}} path. It's the client's responsibility 
to setXAttr, and note that {{setXAttr}} in the test is out of scope of this jira.
So if we somehow write some encrypted bytes, how would we decrypt them in the 
absence of an edek?


{code}
try {
  fs.getXAttr(reservedRawP2Path, HdfsServerConstants
      .CRYPTO_XATTR_FILE_ENCRYPTION_INFO);
  fail("getXAttr should have thrown an exception");
} catch (IOException ioe) {
  assertExceptionContains("At least one of the attributes provided was " +
      "not found.", ioe);
}
{code}
IMHO the above chunk of code is the only part required to test this jira: it 
verifies that there is no {{CRYPTO_XATTR_FILE_ENCRYPTION_INFO}} xattr on that path.
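
For readers following along, a hedged sketch of the raw-copy flow this jira
enables (paths, class names, and the literal xattr string, which mirrors
{{HdfsServerConstants.CRYPTO_XATTR_FILE_ENCRYPTION_INFO}}, are assumptions,
not the patch itself): the client streams ciphertext between /.reserved/raw
paths and carries the encryption info over itself, instead of the namenode
minting a fresh edek at create time.
{code:java}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class RawCopySketch {
  // Sketch only: assumes source and destination zones share the same key
  // and a superuser client; error handling elided.
  static void rawCopy(FileSystem fs, Path rawSrc, Path rawDst) throws Exception {
    try (FSDataInputStream in = fs.open(rawSrc);
         FSDataOutputStream out = fs.create(rawDst)) {
      IOUtils.copyBytes(in, out, 4096);  // ciphertext passes through untouched
    }
    // The client, not the namenode, propagates the encryption info.
    byte[] feInfo = fs.getXAttr(rawSrc, "raw.hdfs.crypto.file.encryption.info");
    fs.setXAttr(rawDst, "raw.hdfs.crypto.file.encryption.info", feInfo);
  }
}
{code}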

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478247#comment-16478247
 ] 

Anbang Hu commented on HDFS-13570:
--

Thanks for [~elgoiri]'s review. I removed the unused import in 
[^HDFS-13570.002.patch].
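
For context, the "is not a valid DFS filename" errors come from handing a
drive-lettered local test directory to the DFS client. A minimal sketch of
the usual remedy, under assumed names rather than the literal patch: build
the in-cluster path explicitly instead of reusing the local data directory.
{code:java}
import org.apache.hadoop.fs.Path;

public class DfsPathSketch {
  // On Windows the local test dir carries a drive letter ("F:/..."), which
  // DFS rejects; an absolute, cluster-side path avoids that entirely.
  static Path dfsTestDir(Class<?> testClass, String methodName) {
    return new Path("/" + testClass.getSimpleName() + "/" + methodName);
  }
}
{code}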

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch, HDFS-13570.001.patch, 
> HDFS-13570.002.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13570:
-
Attachment: HDFS-13570.002.patch

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch, HDFS-13570.001.patch, 
> HDFS-13570.002.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478245#comment-16478245
 ] 

Anbang Hu edited comment on HDFS-13558 at 5/16/18 11:40 PM:


Thanks [~elgoiri] for the suggestion. Uploaded a new version 
[^HDFS-13558.002.patch]


was (Author: huanbang1993):
Thanks [~elgoiri] for the suggestion. Uploaded a new version 
[^HDFS-13558.002.patch].

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13558:
-
Attachment: HDFS-13558.002.patch

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}
> 

[jira] [Updated] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13558:
-
Attachment: (was: HDFS-13558.002.patch)

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}
> 

[jira] [Commented] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478245#comment-16478245
 ] 

Anbang Hu commented on HDFS-13558:
--

Thanks [~elgoiri] for the suggestion. Uploaded a new version 
[^HDFS-13558.002.patch].
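
The cleanup pattern at issue, sketched with assumed field names rather than
the literal patch: keep the MiniDFSCluster in a field and tear it down after
every test, so leaked name/data directories cannot fail the next test on
Windows.
{code:java}
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;

public abstract class ClusterTeardownSketch {
  protected MiniDFSCluster cluster;

  @After
  public void shutDownCluster() {
    if (cluster != null) {
      cluster.shutdown();  // releases file handles so dirs can be deleted
      cluster = null;
    }
  }
}
{code}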

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
> {color:#d04437} at 
> 

[jira] [Updated] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13558:
-
Attachment: HDFS-13558.002.patch

> TestDatanodeHttpXFrame does not shut down cluster
> -
>
> Key: HDFS-13558
> URL: https://issues.apache.org/jira/browse/HDFS-13558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13558.000.patch, HDFS-13558.001.patch, 
> HDFS-13558.002.patch
>
>
> On Windows, without shutting down cluster properly:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 32.32 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame{color}
> {color:#d04437}[ERROR] 
> testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)
>  Time elapsed: 0.034 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168){color}
> {color:#d04437} at org.junit.rules.RunRules.evaluate(RunRules.java:20){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}
> 

[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478243#comment-16478243
 ] 

Íñigo Goiri commented on HDFS-13570:


[^HDFS-13570.001.patch] looks good.
We can tackle the checkstyle issue and remove the unused import.

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch, HDFS-13570.001.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478238#comment-16478238
 ] 

genericqa commented on HDFS-13570:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 88 unchanged - 0 fixed = 89 total (was 88) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13570 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923768/HDFS-13570.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 344b42af6a54 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e3b7d7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478232#comment-16478232
 ] 

Anbang Hu commented on HDFS-13559:
--

Thanks [~elgoiri] for the good suggestion. I closed all filesystems in 
ctx.close() in the newly uploaded [^HDFS-13559.001.patch].
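
A sketch of the close() idea described above, with assumed field names
rather than the actual TestContext members: close every cached client
filesystem before shutting the cluster down, so Windows can delete the
underlying name/data directories.
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestContextSketch implements Closeable {
  private final List<DistributedFileSystem> dfsList;  // assumed field
  private final MiniDFSCluster cluster;               // assumed field

  TestContextSketch(List<DistributedFileSystem> dfsList, MiniDFSCluster cluster) {
    this.dfsList = dfsList;
    this.cluster = cluster;
  }

  @Override
  public void close() throws IOException {
    for (DistributedFileSystem dfs : dfsList) {
      dfs.close();  // release client handles first
    }
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}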

> TestBlockScanner does not close TestContext properly
> 
>
> Key: HDFS-13559
> URL: https://issues.apache.org/jira/browse/HDFS-13559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13559.000.patch, HDFS-13559.001.patch
>
>
> Without closing ctx in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
> testAppendWhileScanning, some tests fail on Windows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, 
> Time elapsed: 113.398 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] 
> testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)
>  Time elapsed: 0.031 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestBlockScanner.testAppendWhileScanning:899 » IO 
> Could not fully delete E:\OS...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testCorruptBlockHandling:488 » IO 
> Could not fully delete E:\O...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testDatanodeCursor:531 » IO Could not 
> fully delete E:\OSS\had...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testMarkSuspectBlock:717 » IO Could 
> not fully delete E:\OSS\h...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » 
> IO{color}
> {color:#d04437}[ERROR] TestBlockScanner.testScanRateLimit:450 » IO Could not 
> fully delete E:\OSS\hado...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 
> 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13559:
-
Attachment: HDFS-13559.001.patch

> TestBlockScanner does not close TestContext properly
> 
>
> Key: HDFS-13559
> URL: https://issues.apache.org/jira/browse/HDFS-13559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13559.000.patch, HDFS-13559.001.patch
>
>
> Without closing ctx in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
> testAppendWhileScanning, some tests fail on Windows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, 
> Time elapsed: 113.398 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner{color}
> {color:#d04437}[ERROR] 
> testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)
>  Time elapsed: 0.031 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}...{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestBlockScanner.testAppendWhileScanning:899 » IO 
> Could not fully delete E:\OS...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testCorruptBlockHandling:488 » IO 
> Could not fully delete E:\O...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testDatanodeCursor:531 » IO Could not 
> fully delete E:\OSS\had...{color}
> {color:#d04437}[ERROR] TestBlockScanner.testMarkSuspectBlock:717 » IO Could 
> not fully delete E:\OSS\h...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » 
> IO{color}
> {color:#d04437}[ERROR] TestBlockScanner.testScanRateLimit:450 » IO Could not 
> fully delete E:\OSS\hado...{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[ERROR] 
> TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169
>  » IO{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 
> 0{color}
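
For illustration, a minimal sketch of the fix idea (an assumption based on the 
summary, not a summary of the attached patch): close the TestContext with 
try-with-resources so the MiniDFSCluster always shuts down and releases its 
data directories, which is what lets the next test delete them on Windows.

{code:java}
// Hypothetical sketch; assumes TestContext implements Closeable as in
// TestBlockScanner, so close() always tears down the MiniDFSCluster.
@Test(timeout = 120000)
public void testMarkSuspectBlock() throws Exception {
  Configuration conf = new Configuration();
  try (TestContext ctx = new TestContext(conf, 1)) {
    // ... exercise the block scanner against ctx.cluster ...
  } // ctx.close() runs even if an assertion fails, unlocking the E:\... paths
}
{code}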



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478206#comment-16478206
 ] 

Xiao Chen edited comment on HDFS-13281 at 5/16/18 10:32 PM:


Thanks [~shahrs87] for continuing on this.

{code}
os = fs.create(reservedRawP2Path);
// Write un-encrypted bytes to reserved raw stream.
os.write(unEncryptedBytes.getBytes());
{code}
I feel unit tests should serve as examples. So here, should we write 
'encryptedBytes' to the reserved raw path, and verify that when reading it from 
{{p2}} we get "hello world"? (It's technically correct that you can write 
unencrypted bytes to a raw path, but I don't see the point of doing so in practice.)

+1 once this is done.


was (Author: xiaochen):
Thanks [~shahrs87] for continuing on this.

{code}
os = fs.create(reservedRawP2Path);
// Write un-encrypted bytes to reserved raw stream.
os.write(unEncryptedBytes.getBytes());
{code}
I feel unit tests should serve as examples. So here, should we write 
'encryptedBytes' to the reserved raw path, and verify that when reading it from 
{{p2}} we get "hello world"? (It's technically correct that you can write 
unencrypted bytes to a raw path, but I don't see the point of doing so in practice.)

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478206#comment-16478206
 ] 

Xiao Chen commented on HDFS-13281:
--

Thanks [~shahrs87] for continuing on this.

{code}
os = fs.create(reservedRawP2Path);
// Write un-encrypted bytes to reserved raw stream.
os.write(unEncryptedBytes.getBytes());
{code}
I feel unit tests should serve as examples. So here, should we write 
'encryptedBytes' to the reserved raw path, and verify that when reading it from 
{{p2}} we get "hello world"? (It's technically correct that you can write 
unencrypted bytes to a raw path, but I don't see the point of doing so in practice.)
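
Spelled out, the suggested test shape would be something like this (a sketch 
with made-up variable names, not the actual patch):

{code:java}
// encryptedBytes: the raw ciphertext previously read back through
// /.reserved/raw/ for a file whose plaintext is "hello world".
try (FSDataOutputStream os = fs.create(reservedRawP2Path)) {
  os.write(encryptedBytes);  // raw path: bytes are stored as-is, no re-encryption
}
// Reading through the normal (non-raw) path should decrypt transparently.
assertEquals("hello world", DFSTestUtil.readFile(fs, p2));
{code}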

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/ and if that directory happens to 
> be in EZ, then namenode *should not* create edek and just copy the raw bytes 
> from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13512) WebHdfs getFileStatus doesn't return ecPolicy

2018-05-16 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478202#comment-16478202
 ] 

Ajay Kumar commented on HDFS-13512:
---

[~arpitagarwal] thanks for review and commit. [~hanishakoneru], [~shahrs87] 
thanks for reviews.

> WebHdfs getFileStatus doesn't return ecPolicy
> -
>
> Key: HDFS-13512
> URL: https://issues.apache.org/jira/browse/HDFS-13512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13512.00.patch, HDFS-13512.01.patch, 
> HDFS-13512.02.patch, HDFS-13512.03.patch, HDFS-13512.04.patch, 
> HDFS-13512.05.patch
>
>
> Currently a LISTSTATUS call to WebHdfs returns JSON, and the jsonArray 
> elements do have the ecPolicy name.
> But when WebHdfsFileSystem converts it back into a FileStatus object, the 
> ecPolicy is not added. This is because the JSON contains only the ecPolicy 
> name, and the name alone is not sufficient to decode it back into an 
> ErasureCodingPolicy object.
> While converting JSON back to HdfsFileStatus we should set ecPolicyName 
> whenever it is set for the given file/dir.
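
Roughly, the fix direction would look like the following sketch (the method and 
setter names here are assumptions, not the actual patch):

{code:java}
// Hypothetical JSON -> HdfsFileStatus conversion fix: the JSON only carries
// the policy name, so carry the name through instead of dropping it.
Map<?, ?> m = (Map<?, ?>) json.get("FileStatus");
String ecPolicyName = (String) m.get("ecPolicy");
HdfsFileStatus status = toFileStatus(m);            // existing conversion
if (ecPolicyName != null) {
  status.setErasureCodingPolicyName(ecPolicyName);  // hypothetical setter
}
{code}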



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-05-16 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13399 started by Plamen Jeliazkov.
---
> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, 
> HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch, 
> HDFS-13399-HDFS-12943.007.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-79) Remove ReportState from SCMHeartbeatRequestProto

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-79?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478117#comment-16478117
 ] 

genericqa commented on HDDS-79:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 30m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 28s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 26s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.closer.TestContainerCloser |
|   | hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | 

[jira] [Updated] (HDFS-13555) TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on Windows

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13555:
---
Attachment: (was: .gitattributes)

> TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on 
> Windows
> 
>
> Key: HDFS-13555
> URL: https://issues.apache.org/jira/browse/HDFS-13555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13555.000.patch
>
>
> Although TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs has 
> 180s timeout, it is overridden by global timeout
> {code:java}
> @Rule
>  public Timeout testTimeout = new Timeout(30000);{code}
> {color:#d04437}[INFO] Running org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 30.741 s <<< FAILURE! - in 
> org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] 
> testInvalidNetworkTopologiesNotCachedInHdfs(org.apache.hadoop.net.TestNetworkTopology)
>  Time elapsed: 30.009 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 30000 
> milliseconds{color}
> {color:#d04437} at java.lang.Object.wait(Native Method){color}
> {color:#d04437} at java.lang.Thread.join(Thread.java:1257){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestNetworkTopology>Object.wait:-2 » test timed out 
> after 30000 milliseconds{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 
> 0{color}
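
For reference, here is a minimal standalone illustration of why the class-level 
rule wins over a longer per-test timeout (example values, not from the patch):

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class GlobalTimeoutDemo {
  @Rule
  public Timeout testTimeout = Timeout.millis(30000); // applies to every test

  @Test(timeout = 180000) // 180s, but the 30s rule fires first
  public void slowTest() throws Exception {
    Thread.sleep(60000);  // interrupted by the rule after ~30s
  }
}
{code}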



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478112#comment-16478112
 ] 

Íñigo Goiri commented on HDFS-13552:


I think this needs a way to make the DNs resolve the canonical hostname 
fast enough.

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.<clinit>(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179){color}
> {color:#d04437} 

[jira] [Updated] (HDFS-13555) TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13555:
-
Attachment: .gitattributes

> TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on 
> Windows
> 
>
> Key: HDFS-13555
> URL: https://issues.apache.org/jira/browse/HDFS-13555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: .gitattributes, HDFS-13555.000.patch
>
>
> Although TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs has 
> 180s timeout, it is overridden by global timeout
> {code:java}
> @Rule
>  public Timeout testTimeout = new Timeout(30000);{code}
> {color:#d04437}[INFO] Running org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 30.741 s <<< FAILURE! - in 
> org.apache.hadoop.net.TestNetworkTopology{color}
> {color:#d04437}[ERROR] 
> testInvalidNetworkTopologiesNotCachedInHdfs(org.apache.hadoop.net.TestNetworkTopology)
>  Time elapsed: 30.009 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 30000 
> milliseconds{color}
> {color:#d04437} at java.lang.Object.wait(Native Method){color}
> {color:#d04437} at java.lang.Thread.join(Thread.java:1257){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] TestNetworkTopology>Object.wait:-2 » test timed out 
> after 30000 milliseconds{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 
> 0{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478102#comment-16478102
 ] 

Íñigo Goiri commented on HDFS-13560:


I think the approach in [^HDFS-13560.002.patch] is correct.
Let's see what Yetus says.

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch
>
>
> On Windows, there are 30 tests in the HDFS component that give errors like 
> the following:
>  {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, 
> Time elapsed: 50.149 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color}
> {color:#d04437} [ERROR] 
> testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)
>  Time elapsed: 16.513 s <<< ERROR!{color}
> {color:#d04437} 1450: Insufficient system resources exist to complete the 
> requested service.{color}
> {color:#d04437}at 
> org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native 
> Method){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#333333}The involved tests are{color}
> {code:java}
> 

[jira] [Updated] (HDFS-13248) RBF: Namenode need to choose block location for the client

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13248:
---
Status: Open  (was: Patch Available)

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the Router, the NameNode chooses block 
> locations for the Router, not for the real client. This hurts the file's 
> locality.
> I think that on both the NameNode and the Router we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.
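
As a sketch of the proposal (the extra parameter is hypothetical, not an 
agreed API):

{code:java}
// Hypothetical ClientProtocol variant: the Router forwards the originating
// client's host so the NameNode places blocks for the real client.
LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous,
    DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes,
    String clientMachine /* new: real client host, reported by the Router */)
    throws IOException;
{code}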



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478095#comment-16478095
 ] 

Anbang Hu commented on HDFS-13552:
--

The stack trace indicates the same issue as mentioned in 
[HADOOP-15467|https://issues.apache.org/jira/browse/HADOOP-15467?focusedCommentId=16478088=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478088].

 

> TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead
>  time out on Windows
> ---
>
> Key: HDFS-13552
> URL: https://issues.apache.org/jira/browse/HDFS-13552
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13552.000.patch
>
>
> {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 20.073 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestFileAppend{color}
> {color:#d04437}[ERROR] 
> testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.005 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
> {color:#d04437} at org.apache.hadoop.net.DNS.<clinit>(DNS.java:61){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
> 10.001 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 10000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available

2018-05-16 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478096#comment-16478096
 ] 

Rushabh S Shah commented on HDFS-10803:
---

[~hanishakoneru], [~linyiqun]: ^^

> TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails 
> intermittently due to no free space available
> 
>
> Key: HDFS-10803
> URL: https://issues.apache.org/jira/browse/HDFS-10803
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HDFS-10803.001.patch
>
>
> The test {{TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools}} 
> fails intermittently. The stack 
> infos(https://builds.apache.org/job/PreCommit-HDFS-Build/16534/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithMultipleNameNodes/testBalancing2OutOf3Blockpools/):
> {code}
> java.io.IOException: Creating block, no free space available
>   at 
> org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.<init>(SimulatedFSDataset.java:151)
>   at 
> org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.injectBlocks(SimulatedFSDataset.java:580)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.injectBlocks(MiniDFSCluster.java:2679)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.unevenDistribution(TestBalancerWithMultipleNameNodes.java:405)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools(TestBalancerWithMultipleNameNodes.java:516)
> {code}
> The error message means that the datanode's capacity has been used up and 
> there is no space left to create a new block.
> I looked into the code and found that the main reason seems to be that the 
> {{capacities}} for the cluster are not correctly constructed in the second 
> cluster startup, before the blocks are redistributed in the test.
> The related code:
> {code}
>   // Here we do redistribute blocks nNameNodes times for each node,
>   // we need to adjust the capacities. Otherwise it will cause the no 
>   // free space errors sometimes.
>   final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
>   .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(nNameNodes))
>   .numDataNodes(nDataNodes)
>   .racks(racks)
>   .simulatedCapacities(newCapacities)
>   .format(false)
>   .build();
>   LOG.info("UNEVEN 11");
> ...
> for(int n = 0; n < nNameNodes; n++) {
>   // redistribute blocks
>   final Block[][] blocksDN = TestBalancer.distributeBlocks(
>   blocks[n], s.replication, distributionPerNN);
> 
>   for(int d = 0; d < blocksDN.length; d++)
> cluster.injectBlocks(n, d, Arrays.asList(blocksDN[d]));
>   LOG.info("UNEVEN 13: n=" + n);
> }
> {code}
> And that means the totalUsed value increases by 
> {{nNameNodes*usedSpacePerNN}} rather than {{usedSpacePerNN}}.
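
A worked example of the adjustment (numbers made up for illustration): with 
nNameNodes = 3 block pools each injecting usedSpacePerNN = 100 MB into the same 
datanodes, total usage is 300 MB, so the simulated capacities must be sized for 
nNameNodes * usedSpacePerNN:

{code:java}
// Hypothetical capacity sizing before the second cluster startup:
long[] newCapacities = new long[nDataNodes];
for (int i = 0; i < nDataNodes; i++) {
  // leave room for every block pool's injected blocks plus balancer moves
  newCapacities[i] = 2 * nNameNodes * usedSpacePerNN;
}
{code}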



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13278) RBF: Correct the logic of mount validate to avoid the bad mountPoint

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478093#comment-16478093
 ] 

Íñigo Goiri commented on HDFS-13278:


[~maobaolong], is this still an issue?

> RBF: Correct the logic of mount validate to avoid the bad mountPoint
> 
>
> Key: HDFS-13278
> URL: https://issues.apache.org/jira/browse/HDFS-13278
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Priority: Major
>  Labels: RBF
>
> Correct the mount validation logic to avoid bad mount points.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13347:
---
Description: 
Getting the datanode reports is an expensive operation and can be executed very 
frequently by the UI and watchdogs.
To reduce this load, this information is cached in the Router.
This cached information expires after some time (configurable).

  was:
Getting the datanode reports is an expensive operation and can be executed very 
frequently by the UI and watchdogs.
To reduce this load, this information should be cached in the Router.


> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs.
> To reduce this load, this information is cached in the Router.
> This cached information expires after some time (configurable).
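
A minimal sketch of such a time-expiring cache (illustrative only; the names 
and the 10-second default are assumptions, the Router code may differ):

{code:java}
// Guava cache: datanode reports are fetched at most once per expiry window.
private final LoadingCache<DatanodeReportType, DatanodeInfo[]> dnCache =
    CacheBuilder.newBuilder()
        .expireAfterWrite(10, TimeUnit.SECONDS)  // hypothetical default
        .build(new CacheLoader<DatanodeReportType, DatanodeInfo[]>() {
          @Override
          public DatanodeInfo[] load(DatanodeReportType type)
              throws IOException {
            // hypothetical helper that fans out to the namenodes
            return getDatanodeReportFromNamenodes(type);
          }
        });
{code}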



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13347:
---
Description: 
Getting the datanode reports is an expensive operation and can be executed very 
frequently by the UI and watchdogs.
To reduce this load, this information should be cached in the Router.

  was:Getting the datanode reports is an expensive operation and can be 
executed very frequently by the UI and watchdogs. We should cache this 
information.


> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs.
> To reduce this load, this information should be cached in the Router.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478089#comment-16478089
 ] 

Íñigo Goiri commented on HDFS-12615:


Given that we discarded reverting, I'm not sure a branch makes sense for what 
is left in this JIRA.
The ones we still have time to amend are already in their own umbrella and will 
move into their own branch accordingly.
If we have a branch for what is left open here, we will have a branch with a 
topic like "stuffs on RBF".
Similarly, a design doc for such a "topic" would have the same issue.
Given that, I'm proposing to take the big chunks in this JIRA and if they are 
large enough, move them to their own umbrella with a proper design doc.
If they are too small for a full doc, then just fix the description.

So far, I've gone through them:
* HDFS-13044: this is pretty big, and I created HDFS-13575 to add a proper 
design doc there.
* HDFS-13484: this is around 4/5 JIRAs, so I'm not sure an umbrella makes 
sense, but I'll add a design doc to HDFS-13484.
* HDFS-13224: this one is also 4/5 JIRAs but it might be worth having its own 
umbrella. For sure a design doc. Thoughts?

I'm also going over their descriptions trying to make them "more descriptive".


BTW, as I'm going through all these patches, I have to say that size in bytes 
is not a very good indicator.
For example, HDFS-13478 is 90KB of nothing (defining RPC interfaces with PB 
implementations is extremely verbose).
In any case, right now most patches are <10KB and the only outlier is 
HDFS-13215 which is a refactor.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478060#comment-16478060
 ] 

Anbang Hu commented on HDFS-13560:
--

Thanks [~elgoiri] for the suggestion. I removed the timeout change in 
[^HDFS-13560.002.patch].

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch
>
>
> On Windows, there are 30 tests in the HDFS component that give errors like 
> the following:
>  {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, 
> Time elapsed: 50.149 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color}
> {color:#d04437} [ERROR] 
> testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)
>  Time elapsed: 16.513 s <<< ERROR!{color}
> {color:#d04437} 1450: Insufficient system resources exist to complete the 
> requested service.{color}
> {color:#d04437}at 
> org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native 
> Method){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#333333}The involved tests are{color}
> {code:java}
> 

[jira] [Updated] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13560:
-
Attachment: HDFS-13560.002.patch

> Insufficient system resources exist to complete the requested service for 
> some tests on Windows
> ---
>
> Key: HDFS-13560
> URL: https://issues.apache.org/jira/browse/HDFS-13560
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, 
> HDFS-13560.002.patch
>
>
> On Windows, there are 30 tests in the HDFS component that give errors like 
> the following:
>  {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, 
> Time elapsed: 50.149 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color}
> {color:#d04437} [ERROR] 
> testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)
>  Time elapsed: 16.513 s <<< ERROR!{color}
> {color:#d04437} 1450: Insufficient system resources exist to complete the 
> requested service.{color}
> {color:#d04437}at 
> org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native 
> Method){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#333333}The involved tests are{color}
> {code:java}
> TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist,TestLazyWriter#testLazyPersistBlocksAreSaved,TestDirectoryScanner#testDeleteBlockOnTransientStorage,TestDirectoryScanner#testRetainBlockOnPersistentStorage,TestDirectoryScanner#testExceptionHandlingWhileDirectoryScan,TestDirectoryScanner#testDirectoryScanner,TestDirectoryScanner#testThrottling,TestDirectoryScanner#testDirectoryScannerInFederatedCluster,TestNameNodeMXBean#testNameNodeMXBeanInfo{code}
> {color:#d04437}[ERROR] Errors:{color}
> 

[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13480:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-13575

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, I 
> get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}
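
A sketch of the separation the title proposes (the new key name is 
hypothetical, not the committed one):

{code:java}
// Guard each heartbeat service with its own key, so router heartbeats keep
// working when namenode monitoring is deliberately disabled.
if (conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE,  // hypothetical key
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
  this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
  for (NamenodeHeartbeatService heartbeatService :
      this.namenodeHeartbeatServices) {
    addService(heartbeatService);
  }
}
if (conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
  // Periodically update the router state, independent of namenode monitoring
  this.routerHeartbeatService = new RouterHeartbeatService(this);
  addService(this.routerHeartbeatService);
}
{code}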



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13480:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, I 
> get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478054#comment-16478054
 ] 

Anbang Hu commented on HDFS-13570:
--

Re-uploading [^HDFS-13570.001.patch] to trigger a Yetus build.

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch, HDFS-13570.001.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with the same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with the error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  
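
The failure mode matches DFS filename validation: a drive-letter component such
as {{F:}} contains a colon, which is not allowed in a DFS path component. A
self-contained sketch of that kind of check, mirroring the spirit of
{{DFSUtilClient#isValidName}} rather than its actual code:

{code:java}
public class DfsNameCheckSketch {
  // Sketch: a DFS pathname must be absolute and none of its components
  // may contain ':', which is why "/F:/..." is rejected on Windows.
  static boolean looksLikeValidDfsName(String src) {
    if (!src.startsWith("/")) {
      return false;
    }
    for (String component : src.split("/")) {
      if (component.contains(":")) {
        return false; // e.g. the Windows drive letter "F:"
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(looksLikeValidDfsName(
        "/F:/short/hadoop-trunk-win/s/target/test/data/dir")); // false
    System.out.println(looksLikeValidDfsName("/test/data/dir"));  // true
  }
}
{code}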



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13570:
-
Attachment: HDFS-13570.001.patch

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch, HDFS-13570.001.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with the same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with the error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-69) Add checkBucketAccess to OzoneManger

2018-05-16 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-69:

Labels:   (was: OzonePostMerge)

> Add checkBucketAccess to OzoneManger
> 
>
> Key: HDDS-69
> URL: https://issues.apache.org/jira/browse/HDDS-69
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12147-HDFS-7240.000.patch, 
> HDFS-12147-HDFS-7240.001.patch
>
>
> Checks if the caller has access to a given bucket.
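
A hypothetical shape for the call, only to make the intent concrete; the real
signature lives in the OzoneManager protocol, and all names below are
assumptions:

{code:java}
import java.io.IOException;

// Hypothetical sketch, not the actual OzoneManager API.
interface BucketAccessChecker {
  /**
   * @return true if the caller may perform the requested access
   * (e.g. "READ" or "WRITE") on the given volume/bucket.
   */
  boolean checkBucketAccess(String volume, String bucket,
      String requestedAccess) throws IOException;
}
{code}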



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-80) SendContainerCommand can be removed as it's no more used by SCM

2018-05-16 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-80?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-80:

Attachment: HDDS-80.000.patch

> SendContainerCommand can be removed as it's no more used by SCM
> ---
>
> Key: HDDS-80
> URL: https://issues.apache.org/jira/browse/HDDS-80
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-80.000.patch
>
>
> After the removal of {{ReportState}} from the heartbeat, we no longer need 
> {{SendContainerCommand}}, which is used by SCM to ask a datanode to send a 
> container report.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477959#comment-16477959
 ] 

Arpit Agarwal commented on HDFS-12615:
--

Hi [~elgoiri], splitting patches sounds like a lot of work. Instead, is it 
practical to bring in the large phase 2 changes via a single feature branch 
with a design doc (security being an exception)? If that is impractical, can we 
have a design or implementation note for each large patch and ensure that 
future changes occur in a branch?

A branch also makes it easier to see the impact of a large feature on the 
existing code by looking at the merge patch, e.g. proposals like HDFS-13248.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477951#comment-16477951
 ] 

Anu Engineer commented on HDFS-12615:
-

Moving JIRAs out makes no difference, IMHO; but if you are moving them into 
their own branch, +1.

Thank you for addressing the feedback.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13575) RBF: Track Router state

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477949#comment-16477949
 ] 

Íñigo Goiri commented on HDFS-13575:


I need to add a design doc for this feature.

> RBF: Track Router state
> ---
>
> Key: HDFS-13575
> URL: https://issues.apache.org/jira/browse/HDFS-13575
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Priority: Minor
>
> Currently, RBF tracks the state of the Namenodes and the Nameservices.
> To track the full federation status, it should also track the state of the 
> Routers.
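
As a sketch of what "Router state" could mean concretely, the lifecycle below
is an illustrative assumption, not a committed design:

{code:java}
// Illustrative only: the kind of lifecycle a Router record in the State
// Store could report, next to the existing Namenode/Nameservice states.
enum RouterServiceState {
  INITIALIZING,
  SAFEMODE,
  RUNNING,
  STOPPING,
  SHUTDOWN
}
{code}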



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13484) RBF: Disable Nameservices from the federation

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13484:
---
Description: HDFS-13478 introduced the Disabled Nameservice store. We 
should disable access to disabled subclusters.  (was: HDFS-13478 introduced 
the Decommission store. We should disable access to decommissioned 
subclusters.)

> RBF: Disable Nameservices from the federation
> -
>
> Key: HDFS-13484
> URL: https://issues.apache.org/jira/browse/HDFS-13484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13484.000.patch, HDFS-13484.001.patch, 
> HDFS-13484.002.patch, HDFS-13484.003.patch, HDFS-13484.004.patch, 
> HDFS-13484.005.patch, HDFS-13484.006.patch, HDFS-13484.007.patch, 
> HDFS-13484.008.patch, HDFS-13484.009.patch
>
>
> HDFS-13478 introduced the Disabled Nameservice store. We should disable 
> access to disabled subclusters.
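
One minimal form such a guard could take on the Router's RPC path, as a sketch;
class and method names are hypothetical, not the actual RBF implementation:

{code:java}
import java.io.IOException;
import java.util.Set;

// Hypothetical sketch: reject calls that resolve to a disabled nameservice.
class DisabledNameserviceGate {
  private final Set<String> disabled;

  DisabledNameserviceGate(Set<String> disabled) {
    this.disabled = disabled;
  }

  void checkAccess(String nsId) throws IOException {
    if (disabled.contains(nsId)) {
      throw new IOException("Nameservice " + nsId + " is disabled");
    }
  }
}
{code}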



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477928#comment-16477928
 ] 

Íñigo Goiri commented on HDFS-12615:


bq. Could you please explain what you are proposing to do?
I'm putting them in their own umbrella and then we can decide if you guys want 
to start branches for those or not.
With this, we remove the clutter from this JIRA while being able to track 
them; just accounting for now.
In addition, we can add detailed descriptions based on your thoughts.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-38) Add SCMNodeStorage map in SCM class to store storage statistics per Datanode

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-38?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477927#comment-16477927
 ] 

genericqa commented on HDDS-38:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 19s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.hdds.scm.container.closer.TestContainerCloser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-38 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923739/HDDS-38.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 4241a1e58959 

[jira] [Created] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-16 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-81:
---

 Summary: Moving ContainerReport inside Datanode heartbeat
 Key: HDDS-81
 URL: https://issues.apache.org/jira/browse/HDDS-81
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Nanda kumar
Assignee: Nanda kumar


{{sendContainerReport}} is a separate RPC call now; as part of the heartbeat 
refactoring, ContainerReport will be moved into the heartbeat.
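
A rough sketch of the piggybacking idea; the real message layout is defined in
the StorageContainerDatanodeProtocol protobuf files, so everything below is an
illustrative assumption:

{code:java}
import java.util.Optional;

// Illustrative sketch: the container report rides inside the heartbeat
// payload instead of going through a separate sendContainerReport RPC.
class ContainerReportSketch {
  // e.g. container count, used bytes, key count ...
}

class HeartbeatPayloadSketch {
  final String datanodeId;
  // Empty on most beats; present on the beats that carry a report.
  final Optional<ContainerReportSketch> containerReport;

  HeartbeatPayloadSketch(String datanodeId, ContainerReportSketch report) {
    this.datanodeId = datanodeId;
    this.containerReport = Optional.ofNullable(report);
  }
}
{code}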



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13475:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  
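
One plausible direction for a fix, sketched in isolation: remember whether safe
mode was forced by the admin, and have the periodic check skip the automatic
leave in that case. Names are hypothetical, not the actual
RouterSafemodeService API:

{code:java}
// Hypothetical sketch of admin-forced safe mode that periodicInvoke
// cannot silently undo.
class SafemodeTrackerSketch {
  private boolean inSafeMode;
  private boolean forcedByAdmin;

  synchronized void adminEnter() {
    inSafeMode = true;
    forcedByAdmin = true;
  }

  synchronized void adminLeave() {
    inSafeMode = false;
    forcedByAdmin = false;
  }

  synchronized void periodicInvoke(boolean isCacheStale) {
    if (forcedByAdmin) {
      return; // never auto-leave a safe mode the admin requested
    }
    if (isCacheStale) {
      inSafeMode = true;
    } else if (inSafeMode) {
      inSafeMode = false; // cache fresh again, leave automatically
    }
  }

  synchronized boolean isInSafeMode() {
    return inSafeMode;
  }
}
{code}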



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13475:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-13575

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-05-16 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13241:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.
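
The usual remedy for this class of flakiness, as a generic sketch rather than
the committed HDFS-13241 fix, is to bind to an OS-assigned ephemeral port in
tests instead of hard-coding 8888:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

// Generic pattern sketch: let the OS pick a free port for the test.
// Note the small race: the port could be taken between close and reuse.
public class FreePortSketch {
  static int pickFreePort() throws IOException {
    try (ServerSocket s = new ServerSocket(0)) {
      return s.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println("Free port: " + pickFreePort());
  }
}
{code}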



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


