[jira] [Commented] (HDFS-10962) TestRequestHedgingProxyProvider is flaky

2016-10-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547741#comment-15547741
 ] 

Xiao Chen commented on HDFS-10962:
--

Thanks Andrew for the fix.

The fix makes sense, but since we can't reproduce the failure I'm not fully 
confident in it. How do you feel about also adding some debug logs to 
{{RequestHedgingProxyProvider}} and setting the log level to debug in the test? 
That would give us more to investigate if your fix doesn't solve it 100%.
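A minimal sketch of the test-side piece (this assumes the provider exposes a 
public commons-logging {{LOG}} field, which may not match the actual class):

{code}
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.log4j.Level;

// In the test's setup: raise the provider's logger to DEBUG so that any
// future failure leaves a trail in the captured test output.
GenericTestUtils.setLogLevel(RequestHedgingProxyProvider.LOG, Level.DEBUG);
{code}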

> TestRequestHedgingProxyProvider is flaky
> 
>
> Key: HDFS-10962
> URL: https://issues.apache.org/jira/browse/HDFS-10962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10962.001.patch
>
>
> This test fails occasionally with an error like this:
> {noformat}
> org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider.testHedgingWhenOneFails
> Error Message
> Wanted but not invoked:
> namenodeProtocols.getStats();
> -> at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider.testHedgingWhenOneFails(TestRequestHedgingProxyProvider.java:78)
> Actually, there were zero interactions with this mock.
> Stacktrace
> org.mockito.exceptions.verification.WantedButNotInvoked: 
> Wanted but not invoked:
> namenodeProtocols.getStats();
> -> at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider.testHedgingWhenOneFails(TestRequestHedgingProxyProvider.java:78)
> Actually, there were zero interactions with this mock.
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider.testHedgingWhenOneFails(TestRequestHedgingProxyProvider.java:78)
> Standard Output
> 2016-09-26 15:26:02,234 WARN  hdfs.DFSUtil 
> (DFSUtil.java:getAddressesForNameserviceId(689)) - Namenode for 
> mycluster-26780990 remains unresolved for ID nn1.  Check your hdfs-site.xml 
> file to ensure namenodes are configured properly.
> 2016-09-26 15:26:02,340 WARN  hdfs.DFSUtil 
> (DFSUtil.java:getAddressesForNameserviceId(689)) - Namenode for 
> mycluster-26780990 remains unresolved for ID nn2.  Check your hdfs-site.xml 
> file to ensure namenodes are configured properly.
> {noformat}






[jira] [Updated] (HDFS-10962) TestRequestHedgingProxyProvider is flaky

2016-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10962:
---
Attachment: HDFS-10962.001.patch

I was unable to reproduce this locally after running the test 100 times under 
load, but looking at the code I suspect the thread pool executor is cancelled 
before the badmock gets called.

So I attempted to fix this in two ways (see the sketch below):
* Reversed the order of badmock and goodmock so that badmock gets called first.
* Added a short sleep before goodmock responds, to give badmock time to also 
get called.
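A minimal Mockito sketch of those changes (the mock setup and return value are 
simplified assumptions, not the test's actual code):

{code}
import static org.mockito.Mockito.*;

import java.io.IOException;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

NamenodeProtocols badMock = mock(NamenodeProtocols.class);
when(badMock.getStats()).thenThrow(new IOException("Bad mock!"));

NamenodeProtocols goodMock = mock(NamenodeProtocols.class);
when(goodMock.getStats()).thenAnswer(new Answer<long[]>() {
  @Override
  public long[] answer(InvocationOnMock invocation) throws Throwable {
    // Delay the good response so the hedged call to badMock has time to
    // start (and fail) before the executor shuts down the losing tasks.
    Thread.sleep(1000);
    return new long[]{1};
  }
});
{code}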




[jira] [Updated] (HDFS-10962) TestRequestHedgingProxyProvider is flaky

2016-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10962:
---
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)
  Status: Patch Available  (was: Open)




[jira] [Created] (HDFS-10962) TestRequestHedgingProxyProvider is flaky

2016-10-04 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-10962:
--

 Summary: TestRequestHedgingProxyProvider is flaky
 Key: HDFS-10962
 URL: https://issues.apache.org/jira/browse/HDFS-10962
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.4
Reporter: Andrew Wang
Assignee: Andrew Wang





[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547529#comment-15547529
 ] 

Arpit Agarwal commented on HDFS-10301:
--

Konst, please hold off committing for a day or two. The approach in the latest 
patch sounds reasonable but I'd like to review it once.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.015.patch, HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, 
> and then sends the block report again. The NameNode, processing these two 
> reports at the same time, can interleave the processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombie. Replicas from zombie storages 
> are immediately removed, causing missing blocks.
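Schematically, the failure mode looks like this (illustrative pseudocode only, 
not the NameNode's actual data structures):

{code}
// The original report and its retransmission carry different report ids.
// If the NameNode interleaves per-storage processing of the two reports,
// some storages end up stamped with the older id; the zombie check then
// treats them as absent from the "current" report and purges their replicas.
long idFirst  = firstReport.getBlockReportId();   // the timed-out send
long idSecond = retryReport.getBlockReportId();   // the retransmission
if (storage.getLastBlockReportId() != idSecond) {
  // Stamped with idFirst during interleaved processing: declared zombie.
  removeZombieReplicas(storage);   // good replicas wrongly removed
}
{code}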






[jira] [Comment Edited] (HDFS-10745) Directly resolve paths into INodesInPath

2016-10-04 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547080#comment-15547080
 ] 

Zhe Zhang edited comment on HDFS-10745 at 10/5/16 3:57 AM:
---

Thanks [~shv] for taking a look! Attaching v5 patch to add {{checkAccess}} 
change. Will commit to branch-2.7 after Jenkins verification.

{{setStoragePolicy}} change is being handled in HDFS-10955.


was (Author: zhz):
Thanks [~shv] for taking a look! Attaching v5 patch to add {{checkAccess}} 
change. Will commit to branch-2.7 after Jenkins verification.

{{setStoragePolicy}} change is missing from trunk, I'll create a separate JIRA 
to address.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745-branch-2.7.01.patch, 
> HDFS-10745-branch-2.7.02.patch, HDFS-10745-branch-2.7.03.patch, 
> HDFS-10745-branch-2.7.04.patch, HDFS-10745-branch-2.7.05.patch, 
> HDFS-10745-branch-2.7.patch, HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][] can be eliminated by resolving directly 
> to an IIP.  The IIP will contain the resolved path if required.
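Schematically, the change is (illustrative method shapes, not the actual 
{{FSDirectory}} API):

{code}
// Before: resolve the path to a String, which INodesInPath then
// immediately decomposes back into byte[][] components.
String resolvedSrc = dir.resolvePath(pc, src);
INodesInPath iip = dir.getINodesInPath(resolvedSrc, true);

// After: resolve straight to the INodesInPath; the IIP carries the
// resolved path for callers that still need the String form.
INodesInPath iipDirect = dir.resolvePath(pc, src, fileId);
{code}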






[jira] [Commented] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock

2016-10-04 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547455#comment-15547455
 ] 

Brahma Reddy Battula commented on HDFS-10896:
-

Hi [~zhz]

It seems CHANGES.txt was not updated for branch-2.7. Since we still maintain 
CHANGES.txt on that branch, I think we need to update it. Please see HADOOP-13670.

> Move lock logging logic from FSNamesystem into FSNamesystemLock
> ---
>
> Key: HDFS-10896
> URL: https://issues.apache.org/jira/browse/HDFS-10896
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>  Labels: logging, namenode
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10896-branch-2.7.004.patch, HDFS-10896.000.patch, 
> HDFS-10896.001.patch, HDFS-10896.002.patch, HDFS-10896.003.patch, 
> HDFS-10896.004.patch
>
>
> There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this 
> subtask's story HDFS-10475) which are adding/improving logging/metrics around 
> the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, 
> which is polluting the namesystem with ThreadLocal variables, timing 
> counters, etc. which are only relevant to the lock itself and the number of 
> these increases as the logging/metrics become more sophisticated. It would be 
> best to move these all into FSNamesystemLock to keep the metrics/logging tied 
> directly to the item of interest. 
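As a rough illustration of the shape this takes (a hand-written sketch, not the 
committed patch):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.Time;

// Sketch: the lock object owns its own timing state, so FSNamesystem no
// longer needs ThreadLocal timers or counters of its own.
class FSNamesystemLockSketch {
  private static final Log LOG = LogFactory.getLog(FSNamesystemLockSketch.class);
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
  private final ThreadLocal<Long> writeLockHeldTimeStamp = new ThreadLocal<>();
  private final long writeLockReportingThresholdMs = 1000;  // assumed value

  void writeLock() {
    lock.writeLock().lock();
    writeLockHeldTimeStamp.set(Time.monotonicNow());
  }

  void writeUnlock() {
    final long heldMs = Time.monotonicNow() - writeLockHeldTimeStamp.get();
    lock.writeLock().unlock();
    if (heldMs > writeLockReportingThresholdMs) {
      LOG.info("FSNamesystem write lock held for " + heldMs + " ms");
    }
  }
}
{code}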






[jira] [Commented] (HDFS-10925) Cache symlinkString in INodeSymlink

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547376#comment-15547376
 ] 

Hadoop QA commented on HDFS-10925:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831667/HDFS-10925.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b366c6525765 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 31f8da2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17014/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17014/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17014/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Cache symlinkString in INodeSymlink
> ---
>
> Key: HDFS-10925
> URL: https://issues.apache.org/jira/browse/HDFS-10925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>  

[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547368#comment-15547368
 ] 

Hadoop QA commented on HDFS-10745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 522 unchanged - 8 fixed = 529 total (was 530) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 9079 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  3m 
39s{color} | {color:red} The patch has 117 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_111 Failed junit tests | 

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547365#comment-15547365
 ] 

Konstantin Shvachko commented on HDFS-10301:


The patch looks good and the tests pass for me. I will give people a day or so 
to review; let me know if you need more time.

I agree with Vinitha: as currently implemented for per-storage BRs, the race is 
only between the last BR RPCs. But because those RPCs are identical, it makes 
no difference which one is processed and which one is discarded. It is actually 
good that only one is processed.




[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-10-04 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547335#comment-15547335
 ] 

Konstantin Shvachko commented on HDFS-10745:


Sorry, I just noticed one more thing. In {{abandonBlock()}} the 
{{dir.resolvePath()}} resolution should be done under the lock, not outside it. 
I checked trunk: there the resolution happens inside {{FSDirWriteFileOp}} and 
is therefore under the lock automatically.
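In other words, something of this shape (a schematic of the locking pattern, 
not the actual {{FSNamesystem}} code; the {{resolvePath}} arguments are assumed):

{code}
writeLock();
try {
  // Resolve under the lock so the path cannot be renamed between
  // resolution and the actual abandonBlock() mutation.
  INodesInPath iip = dir.resolvePath(pc, src, fileId);
  // ... abandon the block against the resolved IIP ...
} finally {
  writeUnlock();
}
{code}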




[jira] [Commented] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547309#comment-15547309
 ] 

Hadoop QA commented on HDFS-10922:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| Timed out junit tests | org.apache.hadoop.fs.TestTrash |
|   | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831640/HDFS-10922.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 106a81be24fe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17011/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17011/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

2016-10-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547303#comment-15547303
 ] 

Yiqun Lin commented on HDFS-6532:
-

Thanks [~xiaochen] for the continuous work on this. Some comments from me:
1. {quote}
maybe we should just add the test timeout
{quote}
I am +0 on increasing the timeout, since I don't think it is the best approach 
(a sketch of what it would look like follows below).
2. As the comments say, the socket timeout happens when the test fails. Is the 
socket timeout expected here, or is some other exception triggering it?
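For reference, the timeout change under discussion would look something like 
this (illustrative value only):

{code}
import org.junit.Test;

public class TestCrcCorruptionSketch {
  // Illustrative only: a JUnit-level timeout makes a hang fail the test
  // cleanly instead of relying on an internal wait to give up.
  @Test(timeout = 50000)
  public void testCorruptionDuringWrt() throws Exception {
    // test body unchanged; elided here
  }
}
{code}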

> Intermittent test failure 
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> --
>
> Key: HDFS-6532
> URL: https://issues.apache.org/jira/browse/HDFS-6532
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Attachments: HDFS-6532.001.patch, HDFS-6532.002.patch, 
> PreCommit-HDFS-Build #16770 test - testCorruptionDuringWrt [Jenkins].pdf, 
> TEST-org.apache.hadoop.hdfs.TestCrcCorruption-select_timeout.xml, 
> TEST-org.apache.hadoop.hdfs.TestCrcCorruption.xml
>
>
> Per https://builds.apache.org/job/Hadoop-Hdfs-trunk/1774/testReport, we had 
> the following failure. Local rerun is successful
> {code}
> Regression
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> Failing for the past 1 build (Since Failed#1774 )
> Took 50 sec.
> Error Message
> test timed out after 50000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 50000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98)
>   at 
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133)
> {code}
> See relevant exceptions in log
> {code}
> 2014-06-14 11:56:15,283 WARN  datanode.DataNode 
> (BlockReceiver.java:verifyChunks(404)) - Checksum error in block 
> BP-1675558312-67.195.138.30-1402746971712:blk_1073741825_1001 from 
> /127.0.0.1:41708
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_-1139495951_8 at 64512 exp: 1379611785 got: -12163112
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:353)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:284)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:402)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:537)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
>   at java.lang.Thread.run(Thread.java:662)
> 2014-06-14 11:56:15,285 WARN  datanode.DataNode 
> (BlockReceiver.java:run(1207)) - IOException in BlockReceiver.run(): 
> java.io.IOException: Shutting down writer and responder due to a checksum 
> error in received data. The error response has been sent upstream.
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1278)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1199)
>   at java.lang.Thread.run(Thread.java:662)
> ...
> {code}






[jira] [Updated] (HDFS-10925) Cache symlinkString in INodeSymlink

2016-10-04 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10925:
-
Attachment: HDFS-10925.003.patch

Thanks [~kihwal] for the comment, and sorry for the misunderstanding. 
Attaching a new patch with the change.

> Cache symlinkString in INodeSymlink
> ---
>
> Key: HDFS-10925
> URL: https://issues.apache.org/jira/browse/HDFS-10925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10925.001.patch, HDFS-10925.002.patch, 
> HDFS-10925.003.patch
>
>
> In {{INodeSymlink}}'s constructor, the input symlink string is converted to a 
> byte array. Every call to {{INodeSymlink#getSymlinkString}} then converts the 
> byte array back into a string. Since symlinkString is not cached, 
> {{DFSUtil.bytes2String}} runs on every call, which is inefficient.
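One way to add the cache (field names and the lazy approach are assumptions, 
not the attached patch):

{code}
// Sketch: keep the canonical byte[] form and compute the String once.
private final byte[] symlink;     // existing on-disk representation
private String symlinkString;     // lazily cached text form

public String getSymlinkString() {
  if (symlinkString == null) {
    symlinkString = DFSUtil.bytes2String(symlink);
  }
  return symlinkString;
}
{code}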






[jira] [Commented] (HDFS-10819) BlockManager fails to store a good block for a datanode storage after it reported a corrupt block — block replication stuck

2016-10-04 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547214#comment-15547214
 ] 

Manoj Govindassamy commented on HDFS-10819:
---

[~andrew.wang],

{quote} Also curious, would invalidation eventually fix this case, or is it 
truly stuck? {quote}
* It is truly stuck in this test case, since we have only 3 DNs and the 
expected replication factor is also 3.
* Block invalidation was not going through, so the replication factor never 
caught up. Invalidation at the DataNode failed because the volume that held 
the block had already been closed when it was removed.

{noformat}
 730 2016-10-04 15:52:30,709 WARN  impl.FsDatasetImpl 
(FsDatasetImpl.java:invalidate(1990)) - Volume 
/Users/manoj/work/cdh-hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
 is closed, ignore the deletion task for block ReplicaBeingWritten, 
blk_1073741825_1001, RBW
 731   getNumBytes() = 512
 732   getBytesOnDisk()  = 512
 733   getVisibleLength()= 512
 734   getVolume()   = 
/Users/manoj/work/cdh-hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
 735   getBlockFile()= 
/Users/manoj/work/cdh-hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-473099417-172.16.3.66-1475621545787/current/rbw/blk_1073741825
 736   bytesAcked=512
 737   bytesOnDisk=512
{noformat}

The core fix here is to have {{BlockManager#addStoredBlockUnderConstruction}} 
invoke {{addStoredBlock}} for all FINALIZED blocks, and to let 
{{addStoredBlock}} decide on the follow-up actions (which it already does), 
such as invalidations and removal of corrupt replicas.
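Roughly (a schematic of the idea, not the attached patch; signatures simplified):

{code}
// Schematic: route every replica reported as FINALIZED through
// addStoredBlock(), which already handles the follow-up actions of
// invalidation and corrupt-replica removal.
if (ucBlock.reportedState == ReplicaState.FINALIZED) {
  addStoredBlock(block, ucBlock.reportedBlock, ucBlock.storageInfo,
      null /* delNodeHint */, true /* logEveryBlock */);
}
{code}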

[~andrew.wang], [~eddyxu], I would like to hear your further thoughts on this.



> BlockManager fails to store a good block for a datanode storage after it 
> reported a corrupt block — block replication stuck
> ---
>
> Key: HDFS-10819
> URL: https://issues.apache.org/jira/browse/HDFS-10819
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10819.001.patch
>
>
> TestDataNodeHotSwapVolumes occasionally fails in the unit test 
> testRemoveVolumeBeingWrittenForDatanode.  Data write pipeline can have issues 
> as there could be timeouts, data node not reachable etc, and in this test 
> case it was more of induced one as one of the volumes in a datanode is 
> removed while block write is in progress. Digging further in the logs, when 
> the problem happens in the write pipeline, the error recovery is not 
> happening as expected leading to block replication never catching up.
> Though this problem has same signature as in HDFS-10780, from the logs it 
> looks like the code paths taken are totally different and so the root cause 
> could be different as well.






[jira] [Created] (HDFS-10961) Flaky test TestSnapshotFileLength.testSnapshotfileLength

2016-10-04 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10961:


 Summary: Flaky test TestSnapshotFileLength.testSnapshotfileLength
 Key: HDFS-10961
 URL: https://issues.apache.org/jira/browse/HDFS-10961
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, hdfs-client
Affects Versions: 3.0
Reporter: Yongjun Zhang


Flaky test TestSnapshotFileLength.testSnapshotfileLength

{code}
Error Message
Unable to close file because the last block does not have enough number of 
replicas.
Stack Trace
java.io.IOException: Unable to close file because the last block does not have 
enough number of replicas.
at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2630)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2592)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2546)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength(TestSnapshotFileLength.java:130)
{code}
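One common mitigation for this kind of flake (an assumption about the cause, 
not a confirmed fix) is to give the client more attempts when completing the 
last block:

{code}
// The "not enough replicas" IOException is thrown by the completeFile()
// retry loop; raising the retry count gives slow DataNodes more time to
// report the last block before close() gives up.
conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
{code}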







[jira] [Commented] (HDFS-8028) TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed after patched HDFS-7704

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547183#comment-15547183
 ] 

Hadoop QA commented on HDFS-8028:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-8028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12708414/HDFS-8028-v0.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 66f459e6ab6b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17010/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17010/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17010/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed 
> after patched HDFS-7704
> --
>
> Key: HDFS-8028
> URL: https://issues.apache.org/jira/browse/HDFS-8028
> Project: Hadoop HDFS

[jira] [Commented] (HDFS-10826) The result of fsck should be CRITICAL when there are unrecoverable ec block groups.

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547167#comment-15547167
 ] 

Hadoop QA commented on HDFS-10826:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 514 unchanged - 1 fixed = 515 total (was 515) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 
37s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831055/HDFS-10826.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf8590116da0 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17009/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17009/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17009/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The result of fsck should be CRITICAL when there are unrecoverable ec block 
> groups.
> ---
>
> Key: HDFS-10826
> URL: https://issues.apache.org/jira/browse/HDFS-10826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>

[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice

2016-10-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547166#comment-15547166
 ] 

Xiao Chen commented on HDFS-10797:
--

Thanks Sean for the new revs! I think the synchronized block is OK, as I don't 
think it is on a critical path that would impact du performance. Not sure what 
[~jingzhao]'s take is on this one.

Some minor comments, all in {{ContentSummaryComputationContext}}: 
- In {{nodeIncluded}}, we safeguard {{includedNodes}} in a synchronized block, 
but we also provide a {{getIncludedNodes}} method, so the set could potentially 
be mutated by the caller. There is no real usage yet, but this feels a bit 
unsafe in general; maybe return a clone of it instead (see the sketch below)?
- {{resolveINodeReference}} can be private.
- Need a {{<p>}} to separate paragraphs in the javadoc.

Test looks great.
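For the first point, the defensive-copy version could look like this (a sketch; 
the element type of {{includedNodes}} is assumed):

{code}
import java.util.HashSet;
import java.util.Set;

// Return a copy so callers cannot mutate the internally tracked set.
public Set<INode> getIncludedNodes() {
  synchronized (includedNodes) {
    return new HashSet<>(includedNodes);
  }
}
{code}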

> Disk usage summary of snapshots causes renamed blocks to get counted twice
> --
>
> Key: HDFS-10797
> URL: https://issues.apache.org/jira/browse/HDFS-10797
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, 
> HDFS-10797.003.patch, HDFS-10797.004.patch, HDFS-10797.005.patch, 
> HDFS-10797.006.patch, HDFS-10797.007.patch, HDFS-10797.008.patch, 
> HDFS-10797.009.patch
>
>
> DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how 
> much disk usage is used by a snapshot by tallying up the files in the 
> snapshot that have since been deleted (that way it won't overlap with regular 
> files whose disk usage is computed separately). However that is determined 
> from a diff that shows moved (to Trash or otherwise) or renamed files as a 
> deletion and a creation operation that may overlap with the list of blocks. 
> Only the deletion operation is taken into consideration, and this causes 
> those blocks to get represented twice in the disk usage tallying.






[jira] [Commented] (HDFS-10953) Remove single-rpc block reports.

2016-10-04 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547125#comment-15547125
 ] 

Konstantin Shvachko commented on HDFS-10953:


Hey [~shahrs87], thanks for volunteering.
One thing though: this should depend on HDFS-10301. It wouldn't make sense for 
the two to collide. I'll re-link the issues to emphasize this.
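
For reference, the single-RPC-vs-split decision described below works roughly 
like this paraphrased sketch; the real logic lives in BPServiceActor and 
differs in detail:

{code}
public class BlockReportSplitSketch {
  // Paraphrased from the behavior described in the issue; not the real code.
  static boolean sendAsSingleRpc(long totalBlockCount, long splitThreshold) {
    // dfs.blockreport.split.threshold == 0 effectively means "always split"
    return splitThreshold > 0 && totalBlockCount < splitThreshold;
  }

  public static void main(String[] args) {
    System.out.println(sendAsSingleRpc(500_000, 1_000_000)); // true: one RPC
    System.out.println(sendAsSingleRpc(500_000, 0)); // false: per-storage RPCs
  }
}
{code}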

> Remove single-rpc block reports.
> 
>
> Key: HDFS-10953
> URL: https://issues.apache.org/jira/browse/HDFS-10953
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Konstantin Shvachko
>Assignee: Rushabh S Shah
>
> Currently a DataNode can send block reports as a single RPC for all storages 
> together or as multiple per storage RPCs. The choice is based on the size of 
> the report and is controlled by the configuration parameter 
> {{dfs.blockreport.split.threshold}}.
> As discussed in HDFS-10301, all block reports should be per-storage multiple 
> RPCs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock

2016-10-04 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10896:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Move lock logging logic from FSNamesystem into FSNamesystemLock
> ---
>
> Key: HDFS-10896
> URL: https://issues.apache.org/jira/browse/HDFS-10896
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>  Labels: logging, namenode
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10896-branch-2.7.004.patch, HDFS-10896.000.patch, 
> HDFS-10896.001.patch, HDFS-10896.002.patch, HDFS-10896.003.patch, 
> HDFS-10896.004.patch
>
>
> There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this 
> subtask's story HDFS-10475) which are adding/improving logging/metrics around 
> the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, 
> which is polluting the namesystem with ThreadLocal variables, timing 
> counters, etc. which are only relevant to the lock itself and the number of 
> these increases as the logging/metrics become more sophisticated. It would be 
> best to move these all into FSNamesystemLock to keep the metrics/logging tied 
> directly to the item of interest. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock

2016-10-04 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10896:
-
Hadoop Flags: Reviewed

Thanks Erik! +1 on the branch-2.7 patch. I just verified the reported test 
failures and committed to branch-2.7.

> Move lock logging logic from FSNamesystem into FSNamesystemLock
> ---
>
> Key: HDFS-10896
> URL: https://issues.apache.org/jira/browse/HDFS-10896
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>  Labels: logging, namenode
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10896-branch-2.7.004.patch, HDFS-10896.000.patch, 
> HDFS-10896.001.patch, HDFS-10896.002.patch, HDFS-10896.003.patch, 
> HDFS-10896.004.patch
>
>
> There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this 
> subtask's story HDFS-10475) which are adding/improving logging/metrics around 
> the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, 
> which is polluting the namesystem with ThreadLocal variables, timing 
> counters, etc. which are only relevant to the lock itself and the number of 
> these increases as the logging/metrics become more sophisticated. It would be 
> best to move these all into FSNamesystemLock to keep the metrics/logging tied 
> directly to the item of interest. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock

2016-10-04 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10896:
-
Fix Version/s: 2.7.4

> Move lock logging logic from FSNamesystem into FSNamesystemLock
> ---
>
> Key: HDFS-10896
> URL: https://issues.apache.org/jira/browse/HDFS-10896
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>  Labels: logging, namenode
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10896-branch-2.7.004.patch, HDFS-10896.000.patch, 
> HDFS-10896.001.patch, HDFS-10896.002.patch, HDFS-10896.003.patch, 
> HDFS-10896.004.patch
>
>
> There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this 
> subtask's story HDFS-10475) which are adding/improving logging/metrics around 
> the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, 
> which is polluting the namesystem with ThreadLocal variables, timing 
> counters, etc. which are only relevant to the lock itself and the number of 
> these increases as the logging/metrics become more sophisticated. It would be 
> best to move these all into FSNamesystemLock to keep the metrics/logging tied 
> directly to the item of interest. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10745) Directly resolve paths into INodesInPath

2016-10-04 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10745:
-
Attachment: HDFS-10745-branch-2.7.05.patch

Thanks [~shv] for taking a look! Attaching a v5 patch to add the 
{{checkAccess}} change. Will commit to branch-2.7 after Jenkins verification.

The {{setStoragePolicy}} change is missing from trunk; I'll create a separate 
JIRA to address it.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745-branch-2.7.01.patch, 
> HDFS-10745-branch-2.7.02.patch, HDFS-10745-branch-2.7.03.patch, 
> HDFS-10745-branch-2.7.04.patch, HDFS-10745-branch-2.7.05.patch, 
> HDFS-10745-branch-2.7.patch, HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][] can be eliminated by resolving directly 
> to an IIP.  The IIP will contain the resolved path if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

2016-10-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546919#comment-15546919
 ] 

Xiao Chen edited comment on HDFS-6532 at 10/4/16 11:29 PM:
---

Looked more into this. For failed cases, we see (copied from the 
'select-timeout' attachment):
{noformat}
2016-10-04 22:10:24,365 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1114)) - ==  
java.io.InterruptedIOException: Interruped while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 28459 millis timeout left.
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:352)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2247)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1015)
{noformat}

And for the success cases, we see:
{noformat}
2016-10-04 15:13:15,271 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1116)) - ==  
java.io.IOException: Bad response ERROR for block 
BP-1283991366-172.16.3.181-1475619192335:blk_1073741825_1001 from datanode 
DatanodeInfoWithStorage[127.0.0.1:61321,DS-720243dd-55b6-49ef-ae55-4462e20260d5,DISK]
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1053)
{noformat}
and 
{noformat}
2016-10-04 15:13:16,084 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1116)) - ==  
java.io.EOFException: Premature EOF: no length prefix available
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2249)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1017)
{noformat}
I printed the exception from [this 
line|https://github.com/apache/hadoop/blob/44f48ee96ee6b2a3909911c37bfddb0c963d5ffc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1149].

So in the failed cases, the responder is running in [this 
loop|https://github.com/apache/hadoop/blob/44f48ee96ee6b2a3909911c37bfddb0c963d5ffc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L708],
 until the following exception is thrown
{noformat}2016-10-04 22:36:40,403 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(941)) - Exception for 
BP-2046749708-172.17.0.1-1475620536833:blk_1073741826_1005
java.net.SocketTimeoutException: 6 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/127.0.0.1:42956 remote=/127.0.0.1:56324]
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:900)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
at java.lang.Thread.run(Thread.java:745)
2016-10-04 22:36:40,469 INFO  

[jira] [Commented] (HDFS-10826) The result of fsck should be CRITICAL when there are unrecoverable ec block groups.

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547037#comment-15547037
 ] 

Wei-Chiu Chuang commented on HDFS-10826:


The patch looks good to me too.
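
For reference, the recoverability condition at issue (quoted below) reduces to 
a simple threshold check; a hedged sketch, not the patch's code:

{code}
public class EcRecoverySketch {
  // Illustrative only: an RS(d, p) block group is unrecoverable once more
  // than p of its d+p internal blocks are missing.
  static boolean isUnrecoverable(int parityBlocks, int missingInternalBlocks) {
    return missingInternalBlocks > parityBlocks;
  }

  public static void main(String[] args) {
    // RS-6-3 (p = 3): 3 missing is still recoverable, 4 missing is not
    System.out.println(isUnrecoverable(3, 3)); // false
    System.out.println(isUnrecoverable(3, 4)); // true -> fsck should be CRITICAL
  }
}
{code}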

> The result of fsck should be CRITICAL when there are unrecoverable ec block 
> groups.
> ---
>
> Key: HDFS-10826
> URL: https://issues.apache.org/jira/browse/HDFS-10826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-10826.2.patch, HDFS-10826.3.patch, 
> HDFS-10826.4.patch, HDFS-10826.5.patch, HDFS-10826.WIP.1.patch
>
>
> For RS-6-3, when there is one ec block group and
> 1) 0~3 out of 9 internal blocks are missing, the result of fsck is HEALTHY.
> 2) 4~8 out of 9 internal blocks are missing, the result of fsck is HEALTHY.
> {noformat}
> Erasure Coded Block Groups:
>  Total size:536870912 B
>  Total files:   1
>  Total block groups (validated):1 (avg. block group size 536870912 B)
>   
>   UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
>   
>  Minimally erasure-coded block groups:  0 (0.0 %)
>  Over-erasure-coded block groups:   0 (0.0 %)
>  Under-erasure-coded block groups:  1 (100.0 %)
>  Unsatisfactory placement block groups: 0 (0.0 %)
>  Default ecPolicy:  RS-DEFAULT-6-3-64k
>  Average block group size:  5.0
>  Missing block groups:  0
>  Corrupt block groups:  0
>  Missing internal blocks:   4 (44.43 %)
> FSCK ended at Wed Aug 31 13:42:05 JST 2016 in 4 milliseconds
> The filesystem under path '/' is HEALTHY
> {noformat}
> 3) 9 out of 9 internal blocks are missing, the result of fsck is CRITICAL. 
> (Because it is regarded as a missing block group.)
> In case 2), the result should be CRITICAL since the ec block group is 
> unrecoverable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10960) TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error verification after volume remove

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547035#comment-15547035
 ] 

Hadoop QA commented on HDFS-10960:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 14 unchanged - 1 fixed = 14 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
57s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831625/HDFS-10960.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 04c963d09245 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17008/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17008/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error 
> verification after volume remove
> 
>
> Key: HDFS-10960
> URL: https://issues.apache.org/jira/browse/HDFS-10960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha2
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> 

[jira] [Commented] (HDFS-10826) The result of fsck should be CRITICAL when there are unrecoverable ec block groups.

2016-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546996#comment-15546996
 ] 

Jing Zhao commented on HDFS-10826:
--

The latest patch looks good to me. To do further code refactoring in HDFS-10933 
also sounds good to me. Do you have further comments, [~jojochuang]?

> The result of fsck should be CRITICAL when there are unrecoverable ec block 
> groups.
> ---
>
> Key: HDFS-10826
> URL: https://issues.apache.org/jira/browse/HDFS-10826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-10826.2.patch, HDFS-10826.3.patch, 
> HDFS-10826.4.patch, HDFS-10826.5.patch, HDFS-10826.WIP.1.patch
>
>
> For RS-6-3, when there is one ec block group and
> 1) 0~3 out of 9 internal blocks are missing, the result of fsck is HEALTHY.
> 2) 4~8 out of 9 internal blocks are missing, the result of fsck is HEALTHY.
> {noformat}
> Erasure Coded Block Groups:
>  Total size:536870912 B
>  Total files:   1
>  Total block groups (validated):1 (avg. block group size 536870912 B)
>   
>   UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
>   
>  Minimally erasure-coded block groups:  0 (0.0 %)
>  Over-erasure-coded block groups:   0 (0.0 %)
>  Under-erasure-coded block groups:  1 (100.0 %)
>  Unsatisfactory placement block groups: 0 (0.0 %)
>  Default ecPolicy:  RS-DEFAULT-6-3-64k
>  Average block group size:  5.0
>  Missing block groups:  0
>  Corrupt block groups:  0
>  Missing internal blocks:   4 (44.43 %)
> FSCK ended at Wed Aug 31 13:42:05 JST 2016 in 4 milliseconds
> The filesystem under path '/' is HEALTHY
> {noformat}
> 3) 9 out of 9 internal blocks are missing, the result of fsck is CRITICAL. 
> (Because it is regarded as a missing block group.)
> In case 2), the result should be CRITICAL since the ec block group is 
> unrecoverable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10922:
---
Attachment: HDFS-10922.05.patch

> Adding additional unit tests for Trash
> --
>
> Key: HDFS-10922
> URL: https://issues.apache.org/jira/browse/HDFS-10922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HDFS-10922.02.patch, HDFS-10922.03.patch, 
> HDFS-10922.04.patch, HDFS-10922.05.patch
>
>
> This ticket is opened to track adding unit tests for Trash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8028) TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed after patched HDFS-7704

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546938#comment-15546938
 ] 

Wei-Chiu Chuang commented on HDFS-8028:
---

The patch makes sense to me. However, it may be better to add this extra wait 
into sendBlockReports, because all tests in the file need it (a sketch follows 
below).
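
A minimal sketch of that shape, assuming a shared helper in 
BlockReportTestBase; names and the sleep duration are placeholders, not the 
actual patch:

{code}
// Sketch only: the real tests send reports via NameNode RPC handles; this
// abstract shape just shows where the shared wait would live.
abstract class BlockReportTestSketch {
  /** Existing per-test logic that actually transmits the block report. */
  abstract void doSendBlockReports() throws Exception;

  /** Shared helper: every test gets the post-report wait for free. */
  void sendBlockReports() throws Exception {
    doSendBlockReports();
    // HDFS-7704 made bad-block reporting asynchronous, so give the NameNode
    // time to process before tests assert on state. Duration is arbitrary.
    Thread.sleep(3000);
  }
}
{code}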

> TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed 
> after patched HDFS-7704
> --
>
> Key: HDFS-8028
> URL: https://issues.apache.org/jira/browse/HDFS-8028
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.0
>Reporter: hongyu bi
>Assignee: hongyu bi
>Priority: Minor
> Attachments: HDFS-8028-v0.patch
>
>
> HDFS-7704 makes BadBlockReport asynchronous; however, 
> BlockReportTestBase#blockreport_02 doesn't wait after the block report.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8028) TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed after patched HDFS-7704

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-8028:
--
Status: Patch Available  (was: Open)

Submitting the patch to see if it still applies.

> TestNNHandlesBlockReportPerStorage/TestNNHandlesCombinedBlockReport Failed 
> after patched HDFS-7704
> --
>
> Key: HDFS-8028
> URL: https://issues.apache.org/jira/browse/HDFS-8028
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.0
>Reporter: hongyu bi
>Assignee: hongyu bi
>Priority: Minor
> Attachments: HDFS-8028-v0.patch
>
>
> HDFS-7704 makes BadBlockReport asynchronous; however, 
> BlockReportTestBase#blockreport_02 doesn't wait after the block report.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

2016-10-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546919#comment-15546919
 ] 

Xiao Chen commented on HDFS-6532:
-

Looked more into this. For failed cases, we see (copied from the 
'select-timeout' attachment):
{noformat}
2016-10-04 22:10:24,365 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1114)) - ==  
java.io.InterruptedIOException: Interruped while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 28459 millis timeout left.
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:352)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2247)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1015)
{noformat}

And for the success cases, we see:
{noformat}
2016-10-04 15:13:15,271 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1116)) - ==  
java.io.IOException: Bad response ERROR for block 
BP-1283991366-172.16.3.181-1475619192335:blk_1073741825_1001 from datanode 
DatanodeInfoWithStorage[127.0.0.1:61321,DS-720243dd-55b6-49ef-ae55-4462e20260d5,DISK]
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1053)
{noformat}
and 
{noformat}
2016-10-04 15:13:16,084 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1116)) - ==  
java.io.EOFException: Premature EOF: no length prefix available
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2249)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1017)
{noformat}
I printed the exception from [this 
line|https://github.com/apache/hadoop/blob/44f48ee96ee6b2a3909911c37bfddb0c963d5ffc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1149].

So in the failed cases, the responder is running in [this 
loop|https://github.com/apache/hadoop/blob/44f48ee96ee6b2a3909911c37bfddb0c963d5ffc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L708],
 until the following exception is thrown
{noformat}2016-10-04 22:36:40,403 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(941)) - Exception for 
BP-2046749708-172.17.0.1-1475620536833:blk_1073741826_1005
java.net.SocketTimeoutException: 6 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/127.0.0.1:42956 remote=/127.0.0.1:56324]
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:900)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
at java.lang.Thread.run(Thread.java:745)
2016-10-04 22:36:40,469 INFO  hdfs.DFSOutputStream 
(DFSOutputStream.java:run(1116)) - ==  

[jira] [Commented] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546854#comment-15546854
 ] 

Hadoop QA commented on HDFS-6532:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-6532 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831614/HDFS-6532.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 44c223b96a4a 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17007/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17007/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17007/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Intermittent test failure 
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> --
>
> Key: HDFS-6532
> URL: 

[jira] [Commented] (HDFS-10925) Cache symlinkString in INodeSymlink

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546797#comment-15546797
 ] 

Kihwal Lee commented on HDFS-10925:
---

I think what Daryn meant by
bq.  The latter is the dominant use case so I'd lean towards retaining the 
string and extracting the byte[] as necessary.
is to get rid of {{byte[] symlink}} and whip one up from the string when 
{{getSymlink()}} is called.
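
A minimal sketch of that suggestion, with illustrative names; real HDFS code 
would use DFSUtil.string2Bytes rather than String#getBytes:

{code}
import java.nio.charset.StandardCharsets;

// Sketch of the suggestion above: keep only the String and materialize the
// byte[] on demand. Names are illustrative, not the actual INodeSymlink.
public class INodeSymlinkSketch {
  private final String symlinkString;

  INodeSymlinkSketch(String symlinkString) {
    this.symlinkString = symlinkString;
  }

  public String getSymlinkString() {
    return symlinkString; // no per-call byte[] -> String conversion
  }

  public byte[] getSymlink() {
    // build the byte[] from the cached string only when asked for
    return symlinkString.getBytes(StandardCharsets.UTF_8);
  }
}
{code}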

> Cache symlinkString in INodeSymlink
> ---
>
> Key: HDFS-10925
> URL: https://issues.apache.org/jira/browse/HDFS-10925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10925.001.patch, HDFS-10925.002.patch
>
>
> In {{INodeSymlink}}'s construct method, it will transfer the input symlink 
> string to a byte array. If we want to invoke 
> {{INodeSymlink#getSymlinkString}}, it will transfer the byte array to the 
> string again. Since we don't cache symlinkString  here, it will do the 
> {{DFSUtil.bytes2String}} method every time. It seems not efficient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10955) Pass IIP for FSDirAttr methods

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546781#comment-15546781
 ] 

Kihwal Lee edited comment on HDFS-10955 at 10/4/16 9:54 PM:


It won't compile with the patch. It looks like the {{setTimes()}} call in 
{{getBlockLocations()}} wasn't updated. 


was (Author: kihwal):
It won't compile with the patch. It looks like the {{setTime()}} call in 
{{getBlockLocations()}} wasn't updated. 

> Pass IIP for FSDirAttr methods
> --
>
> Key: HDFS-10955
> URL: https://issues.apache.org/jira/browse/HDFS-10955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10955.patch
>
>
> Methods should always use the resolved IIP instead of re-solving the path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10955) Pass IIP for FSDirAttr methods

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546781#comment-15546781
 ] 

Kihwal Lee commented on HDFS-10955:
---

It won't compile with the patch. It looks like the {{setTime()}} call in 
{{getBlockLocations()}} wasn't updated. 

> Pass IIP for FSDirAttr methods
> --
>
> Key: HDFS-10955
> URL: https://issues.apache.org/jira/browse/HDFS-10955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10955.patch
>
>
> Methods should always use the resolved IIP instead of re-solving the path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10960) TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error verification after volume remove

2016-10-04 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10960:
--
Attachment: HDFS-10960.01.patch

Attached a v01 patch which addresses the problem as proposed in the previous 
comment. [~eddyxu], can you please review the patch?

> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error 
> verification after volume remove
> 
>
> Key: HDFS-10960
> URL: https://issues.apache.org/jira/browse/HDFS-10960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha2
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10960.01.patch
>
>
> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails occasionally in 
> the following verification.
> {code}
>   700 // If an IOException thrown from BlockReceiver#run, it triggers
>   701 // DataNode#checkDiskError(). So we can test whether 
> checkDiskError() is called,
>   702 // to see whether there is IOException in BlockReceiver#run().
>   703 assertEquals(lastTimeDiskErrorCheck, dn.getLastDiskErrorCheck());
>   704 
> {code}
> {noformat}
> Error Message
> expected:<0> but was:<6498109>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<6498109>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:703)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:620)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10960) TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error verification after volume remove

2016-10-04 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10960:
--
Status: Patch Available  (was: Open)

> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error 
> verification after volume remove
> 
>
> Key: HDFS-10960
> URL: https://issues.apache.org/jira/browse/HDFS-10960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha2
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10960.01.patch
>
>
> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails occasionally in 
> the following verification.
> {code}
>   700 // If an IOException thrown from BlockReceiver#run, it triggers
>   701 // DataNode#checkDiskError(). So we can test whether 
> checkDiskError() is called,
>   702 // to see whether there is IOException in BlockReceiver#run().
>   703 assertEquals(lastTimeDiskErrorCheck, dn.getLastDiskErrorCheck());
>   704 
> {code}
> {noformat}
> Error Message
> expected:<0> but was:<6498109>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<6498109>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:703)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:620)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10939) Reduce performance penalty of encryption zones

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546748#comment-15546748
 ] 

Kihwal Lee commented on HDFS-10939:
---

In {{startFileInt()}}, {{unlock()}} is done inside {{getEncryptionKeyInfo()}} 
and the lock is reacquired in {{startFileInt()}}. Is this asymmetry necessary? 
It looks like {{getEncryptionKeyInfo()}} is supposed to be called with the 
write lock held anyway. Wouldn't it make more sense to have it reacquire the 
lock in its own finally block? Sure, that might incur an extra lock-unlock 
cycle if something fails: if {{generateEncryptedDataEncryptionKey()}} fails, 
the lock will be reacquired in {{getEncryptionKeyInfo()}} and then released 
immediately in the finally block of {{startFileInt()}}. Still, I think it is 
better to organize the locking this way, at the price of extra locking in rare 
failure cases (sketch below). Correctness-wise, the patch seems fine.
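
A sketch of the locking shape I mean; the structure is illustrative only, 
not FSNamesystem code:

{code}
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: method names mirror the comment above, everything else is
// illustrative, not the actual namesystem locking.
public class LockSymmetrySketch {
  private final ReentrantLock writeLock = new ReentrantLock();

  void getEncryptionKeyInfo() {
    // entered with the write lock held
    writeLock.unlock();
    try {
      generateEncryptedDataEncryptionKey(); // slow, must run unlocked
    } finally {
      writeLock.lock(); // reacquire here, even on failure, for symmetry
    }
  }

  void startFileInt() {
    writeLock.lock();
    try {
      getEncryptionKeyInfo(); // returns with the lock held again
      // ... create the file ...
    } finally {
      writeLock.unlock(); // single release point, even on failure
    }
  }

  private void generateEncryptedDataEncryptionKey() { /* placeholder */ }
}
{code}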

> Reduce performance penalty of encryption zones
> --
>
> Key: HDFS-10939
> URL: https://issues.apache.org/jira/browse/HDFS-10939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.7
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10939.1.patch, HDFS-10939.patch
>
>
> The encryption zone APIs should be optimized to extensively use IIPs to 
> eliminate path resolutions.  The performance penalties incurred by common 
> operations like creation of file statuses may be reduced by more extensive 
> short-circuiting of EZ lookups when no EZs exist.  All file creates should 
> not be subjected to the multi-stage locking performance penalty required only 
> for EDEK generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Status: In Progress  (was: Patch Available)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Status: Patch Available  (was: In Progress)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2016-10-04 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546681#comment-15546681
 ] 

Inigo Goiri commented on HDFS-10881:


As this is just a stub for the follow-up patches, unit tests don't make much 
sense here since there is no real functionality yet.

+1


> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546669#comment-15546669
 ] 

Hadoop QA commented on HDFS-10881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
42s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831602/HDFS-10881-HDFS-10467-006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5aa9ab256816 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 736d33c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17006/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17006/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> 

[jira] [Commented] (HDFS-10960) TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error verification after volume remove

2016-10-04 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546654#comment-15546654
 ] 

Manoj Govindassamy commented on HDFS-10960:
---


Looking at the code, removing volumes on a DataNode can potentially interrupt 
the BlockReceiver, and if the BlockReceiver happens to be in the middle of an 
IO operation, such as flushing or setting the channel position for the new 
checksum, it can throw an IOException. On getting the IOException, 
{{BlockReceiver}} starts a thread to check for disk errors.

TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails this 
verification whenever the DataNode ever started a disk error check thread. The 
verification doesn't seem fruitful, as we already have another check on the 
block replication factor. So the proposal here is to replace this not-so-useful 
verification with checks that the disk removal completed successfully and that 
the replication factor of the block caught up after the volume removal (a 
sketch follows below).
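
A hedged sketch of the replacement verification; helper and constant names are 
assumptions, not the actual patch:

{code}
// Assumes the test's existing handles: FileSystem fs, Path testFile and
// short REPL_FACTOR, plus a hypothetical helper volumeCount(dn).
assertEquals(initialVolumeCount - 1, volumeCount(dn));   // removal succeeded
DFSTestUtil.waitReplication(fs, testFile, REPL_FACTOR);  // replication caught up
{code}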

> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error 
> verification after volume remove
> 
>
> Key: HDFS-10960
> URL: https://issues.apache.org/jira/browse/HDFS-10960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha2
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
>
> TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails occasionally in 
> the following verification.
> {code}
>   700 // If an IOException thrown from BlockReceiver#run, it triggers
>   701 // DataNode#checkDiskError(). So we can test whether 
> checkDiskError() is called,
>   702 // to see whether there is IOException in BlockReceiver#run().
>   703 assertEquals(lastTimeDiskErrorCheck, dn.getLastDiskErrorCheck());
>   704 
> {code}
> {noformat}
> Error Message
> expected:<0> but was:<6498109>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<6498109>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:703)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:620)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546645#comment-15546645
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

The test failure seems unrelated. It passes locally.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.015.patch, HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, so 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.
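
For illustration, a rough paraphrase of the zombie-detection logic that the 
interleaving breaks; the names only approximate the NameNode code, so treat 
this as a hedged sketch, not the actual implementation:

{code}
// Each storage remembers the id of the last block report it appeared in.
void processStorageReport(DatanodeStorageInfo storage, long blockReportId) {
  storage.setLastBlockReportId(blockReportId);
}

// After a report, any storage not stamped with the current report id is
// treated as a zombie and its replicas are removed. If two reports with
// different ids interleave, a live storage can end up stamped with the
// older id and then be wrongly removed here.
void removeZombieStorages(DatanodeDescriptor node, long curBlockReportId) {
  for (DatanodeStorageInfo storage : node.getStorageInfos()) {
    if (storage.getLastBlockReportId() != curBlockReportId) {
      removeStorage(storage); // replicas gone => missing blocks
    }
  }
}
{code}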



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10960) TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails at disk error verification after volume remove

2016-10-04 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-10960:
-

 Summary: TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten 
fails at disk error verification after volume remove
 Key: HDFS-10960
 URL: https://issues.apache.org/jira/browse/HDFS-10960
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha2
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy
Priority: Minor


TestDataNodeHotSwapVolumes#testRemoveVolumeBeingWritten fails occasionally in 
the following verification.

{code}
  700 // If an IOException thrown from BlockReceiver#run, it triggers
  701 // DataNode#checkDiskError(). So we can test whether checkDiskError() 
is called,
  702 // to see whether there is IOException in BlockReceiver#run().
  703 assertEquals(lastTimeDiskErrorCheck, dn.getLastDiskErrorCheck());
  704 
{code}

{noformat}
Error Message

expected:<0> but was:<6498109>
Stacktrace

java.lang.AssertionError: expected:<0> but was:<6498109>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:703)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:620)

{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546626#comment-15546626
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 372 unchanged - 7 fixed = 372 total (was 379) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831599/HDFS-10301.015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1540124de803 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 88b9444 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17005/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17005/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17005/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: 

[jira] [Updated] (HDFS-6532) Intermittent test failure org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

2016-10-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-6532:

Attachment: HDFS-6532.002.patch
TEST-org.apache.hadoop.hdfs.TestCrcCorruption-select_timeout.xml

I have looked more into this, and believe I have found proof of what caused 
the test to fail.

Attaching a failure log with my self-added verbose logging. As you can see, the 
{{select}} is blocking the thread, and its timeout is 60 seconds. I'm not sure 
whether the 50-second test timeout is intentional, but after upping it to 180 
seconds, the test failure rate went from 3/100 to 0/1000.
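
For reference, the fix amounts to raising the JUnit timeout on the test so it 
comfortably exceeds the 60-second {{select}} timeout; a sketch of the change 
(test body elided):

{code}
// Before (can fire while a 60-second socket select() is still blocking):
//   @Test(timeout = 50000)
// After (observed failure rate went from 3/100 to 0/1000):
@Test(timeout = 180000)
public void testCorruptionDuringWrt() throws Exception {
  // ... unchanged test body ...
}
{code}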

> Intermittent test failure 
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> --
>
> Key: HDFS-6532
> URL: https://issues.apache.org/jira/browse/HDFS-6532
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Attachments: HDFS-6532.001.patch, HDFS-6532.002.patch, 
> PreCommit-HDFS-Build #16770 test - testCorruptionDuringWrt [Jenkins].pdf, 
> TEST-org.apache.hadoop.hdfs.TestCrcCorruption-select_timeout.xml, 
> TEST-org.apache.hadoop.hdfs.TestCrcCorruption.xml
>
>
> Per https://builds.apache.org/job/Hadoop-Hdfs-trunk/1774/testReport, we had 
> the following failure. Local rerun is successful
> {code}
> Regression
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> Failing for the past 1 build (Since Failed#1774 )
> Took 50 sec.
> Error Message
> test timed out after 50000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 50000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98)
>   at 
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133)
> {code}
> See relevant exceptions in log
> {code}
> 2014-06-14 11:56:15,283 WARN  datanode.DataNode 
> (BlockReceiver.java:verifyChunks(404)) - Checksum error in block 
> BP-1675558312-67.195.138.30-1402746971712:blk_1073741825_1001 from 
> /127.0.0.1:41708
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_-1139495951_8 at 64512 exp: 1379611785 got: -12163112
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:353)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:284)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:402)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:537)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
>   at java.lang.Thread.run(Thread.java:662)
> 2014-06-14 11:56:15,285 WARN  datanode.DataNode 
> (BlockReceiver.java:run(1207)) - IOException in BlockReceiver.run(): 
> java.io.IOException: Shutting down writer and responder due to a checksum 
> error in received data. The error response has been sent upstream.
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1278)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1199)
>   at java.lang.Thread.run(Thread.java:662)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10956:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8.

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree is scanned for 
> snapshottable directories. This scan may become very expensive for dense 
> trees. The snapshot manager knows whether snapshots are in use, so clusters 
> not using snapshots should not pay this penalty. 
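
For context, a hedged sketch of the kind of short-circuit the patch adds; the 
method shape is an approximation, and the attached patch is authoritative:

{code}
// If no snapshottable directories exist at all, skip the subtree scan.
static void checkSnapshot(FSDirectory fsd, INodesInPath iip,
    List<INodeDirectory> snapshottableDirs) throws SnapshotException {
  // The SnapshotManager can answer this cheaply, so clusters without
  // snapshots avoid walking the whole subtree on every rename/delete.
  if (fsd.getFSNamesystem().getSnapshotManager()
      .getNumSnapshottableDirs() == 0) {
    return;
  }
  // ... existing recursive scan for snapshottable directories ...
}
{code}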



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10948) HDFS setOwner no-op should not generate an edit log transaction

2016-10-04 Thread Karthik Palaniappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palaniappan updated HDFS-10948:
---
Description: 
The HDFS setOwner RPC takes a path, an optional username, and an optional 
groupname (see ClientNamenodeProtocol.proto). If neither username nor groupname 
is set, the RPC is a no-op. However, an entry in the edit log is still 
committed, and when retrieved through getEditsFromTxid, both username and 
groupname are set to the empty string.

This does not match the behavior of the setTimes RPC. There, atime and mtime 
are both required, and you can set "-1" to indicate "don't change the time". If 
both are set to "-1", the RPC is a no-op, and appropriately no edit log 
transaction is created.

Here's an example of an (untested) patch to fix the issue:

{code}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 3dc1c30..0728bc5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -77,6 +77,7 @@ static HdfsFileStatus setOwner(
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
+boolean changed = false;
 FSPermissionChecker pc = fsd.getPermissionChecker();
 INodesInPath iip;
 fsd.writeLock();
@@ -92,11 +93,13 @@ static HdfsFileStatus setOwner(
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  changed = unprotectedSetOwner(fsd, src, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+if (changed) {
+  fsd.getEditLog().logSetOwner(src, username, group);
+}
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -287,22 +290,27 @@ static void unprotectedSetPermission(
 inode.setPermission(permissions, snapshotId);
   }
 
-  static void unprotectedSetOwner(
+  static boolean unprotectedSetOwner(
   FSDirectory fsd, String src, String username, String groupname)
   throws FileNotFoundException, UnresolvedLinkException,
   QuotaExceededException, SnapshotAccessControlException {
 assert fsd.hasWriteLock();
+boolean status = false;
 final INodesInPath inodesInPath = fsd.getINodesInPath4Write(src, true);
 INode inode = inodesInPath.getLastINode();
 if (inode == null) {
   throw new FileNotFoundException("File does not exist: " + src);
 }
 if (username != null) {
+  status = true;
   inode = inode.setUser(username, inodesInPath.getLatestSnapshotId());
 }
 if (groupname != null) {
+  status = true;
   inode.setGroup(groupname, inodesInPath.getLatestSnapshotId());
 }
+
+return status;
   }
 
   static boolean setTimes(
{code}

  was:
The HDFS setOwner RPC takes a path, an optional username, and an optional 
groupname (see ClientNamenodeProtocol.proto). If neither username nor groupname 
is set, the RPC is a no-op. However, an entry in the edit log is still 
committed, and when retrieved through getEditsFromTxid, both username and 
groupname are set to the empty string.

This does not match the behavior of the setTimes RPC. There, atime and mtime 
are both required, and you can set "-1" to indicate "don't change the time". If 
both are set to "-1", the RPC is a no-op, and appropriately no edit log 
transaction is created.

Here's an example of an (untested) patch to fix the issue:

```
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 3dc1c30..0728bc5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -77,6 +77,7 @@ static HdfsFileStatus setOwner(
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
+boolean changed = false;
 FSPermissionChecker pc = fsd.getPermissionChecker();
 INodesInPath iip;
 fsd.writeLock();
@@ -92,11 +93,13 @@ static HdfsFileStatus setOwner(
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  changed = unprotectedSetOwner(fsd, src, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+if (changed) {
+  

[jira] [Commented] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546553#comment-15546553
 ] 

Hudson commented on HDFS-10956:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10541 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10541/])
HDFS-10956. Remove rename/delete performance penalty when not using (kihwal: 
rev 44f48ee96ee6b2a3909911c37bfddb0c963d5ffc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java


> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree is scanned for 
> snapshottable directories. This scan may become very expensive for dense 
> trees. The snapshot manager knows whether snapshots are in use, so clusters 
> not using snapshots should not pay this penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546493#comment-15546493
 ] 

Kihwal Lee commented on HDFS-10956:
---

+1, looks straightforward.

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree is scanned for 
> snapshottable directories. This scan may become very expensive for dense 
> trees. The snapshot manager knows whether snapshots are in use, so clusters 
> not using snapshots should not pay this penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10957) Retire BKJM from trunk

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546475#comment-15546475
 ] 

Kihwal Lee commented on HDFS-10957:
---

{{hadoop-hdfs-project/pom.xml}} needs to be updated.

> Retire BKJM from trunk
> --
>
> Key: HDFS-10957
> URL: https://issues.apache.org/jira/browse/HDFS-10957
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10957-01.patch
>
>
> Based on discussions on the mailing list 
> [here|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3c1288296033.5942327.1469727453010.javamail.ya...@mail.yahoo.com%3E],
>  BKJM is no longer required, as QJM supports the required functionality with 
> minimal deployment overhead.
> This Jira is for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Attachment: (was: HDFS-10881-HDFS-10467-006.patch)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10629) Federation Router

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Comment: was deleted

(was: Updated patch to reflect feedback.  A few minor documentation changes and 
improvements.)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10881) Federation State Store Driver API

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10881:
--
Attachment: HDFS-10881-HDFS-10467-006.patch

Updating patch to reflect feedback.  Minor documentation changes.

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi updated HDFS-10301:
-
Attachment: HDFS-10301.015.patch

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.015.patch, HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, so 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-10-04 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546390#comment-15546390
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

Patch 15 has the changes mentioned in 
https://issues.apache.org/jira/browse/HDFS-10301?focusedCommentId=15536676=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15536676.
 Kindly review.

??It does not solve the race between a timed out BR and the repeating BR in 
multi-RPC BR case.??
When there is a race, the per-storage BRs that arrive after the removal of the 
node lease would not be processed. I think that is okay. BR retransmissions are 
handled by the underlying RPC layer: the same RPC request is retried as per the 
specified retry policy. Since these retransmitted BRs are identical, it is 
sufficient if we process all the per-storage BRs once. It seems okay to ignore 
the subsequent retransmitted BRs from the same node once {{curRpc + 1 == 
totalRpcs}} is satisfied. Does that sound reasonable?
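
To make the proposal concrete, a hedged sketch of the intended handling; the 
names only approximate the lease-based logic in the NameNode, and the attached 
patch is authoritative:

{code}
// Process each per-storage report once; release the node's block-report
// lease after the last RPC of a multi-RPC report. Retransmitted RPCs that
// arrive after the lease is gone are simply ignored.
void processReport(DatanodeDescriptor node, StorageBlockReport report,
    int curRpc, int totalRpcs, long leaseId) {
  if (!leaseManager.checkLease(node, leaseId)) {
    return; // lease already removed: a retransmitted, identical BR
  }
  applyStorageReport(node, report);
  if (curRpc + 1 == totalRpcs) {
    leaseManager.removeLease(node); // all storages of this BR processed
  }
}
{code}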

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, so 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-4169) Add per-disk latency metrics to DataNode

2016-10-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-4169:


Assignee: Xiaoyu Yao

> Add per-disk latency metrics to DataNode
> 
>
> Key: HDFS-4169
> URL: https://issues.apache.org/jira/browse/HDFS-4169
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Xiaoyu Yao
>
> Currently, if one of the drives on the DataNode is slow, it's hard to 
> determine what the issue is. This can happen due to a failing disk, bad 
> controller, etc. It would be preferable to expose per-drive MXBeans (or 
> tagged metrics) with latency statistics about how long reads/writes are 
> taking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4169) Add per-disk latency metrics to DataNode

2016-10-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-4169:
-
Description: Currently, if one of the drives on the DataNode is slow, it's 
hard to determine what the issue is. This can happen due to a failing disk, bad 
controller, etc. It would be preferable to expose per-drive metrics/jmx with 
latency statistics about how long reads/writes are taking.  (was: Currently, if 
one of the drives on the DataNode is slow, it's hard to determine what the 
issue is. This can happen due to a failing disk, bad controller, etc. It would 
be preferable to expose per-drive MXBeans (or tagged metrics) with latency 
statistics about how long reads/writes are taking.)
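
For context, a minimal sketch of what per-drive latency metrics could look like 
with the metrics2 library; the class and metric names here are illustrative 
assumptions, not the eventual patch:

{code}
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.Interns;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

// One instance per volume; the volume path is attached as a metrics tag so
// a slow drive stands out in JMX.
@Metrics(about = "Per-volume DataNode IO latency", context = "dfs")
class VolumeIOMetrics {
  private final MetricsRegistry registry = new MetricsRegistry("VolumeIO");
  private final MutableRate readLatency;
  private final MutableRate writeLatency;

  VolumeIOMetrics(String volumePath) {
    registry.tag(Interns.info("volume", "Volume base path"), volumePath);
    readLatency = registry.newRate("ReadLatency");
    writeLatency = registry.newRate("WriteLatency");
  }

  void addRead(long elapsedMillis)  { readLatency.add(elapsedMillis); }
  void addWrite(long elapsedMillis) { writeLatency.add(elapsedMillis); }
}
{code}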

> Add per-disk latency metrics to DataNode
> 
>
> Key: HDFS-4169
> URL: https://issues.apache.org/jira/browse/HDFS-4169
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Xiaoyu Yao
>
> Currently, if one of the drives on the DataNode is slow, it's hard to 
> determine what the issue is. This can happen due to a failing disk, bad 
> controller, etc. It would be preferable to expose per-drive metrics/jmx with 
> latency statistics about how long reads/writes are taking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-10-04 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Attachment: HDFS-10881-HDFS-10467-006.patch

Updated patch to reflect feedback.  A few minor documentation changes and 
improvements.

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10881-HDFS-10467-006.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10959) Adding per disk IO statistics in DataNode.

2016-10-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10959:
--
Description: This ticket is opened to support per disk IO statistics in 
DataNode based on HDFS-10930. The statistics added will make it easier to 
implement HDFS-4169 "Add per-disk latency metrics to DataNode".  (was: This 
ticket is opened to support per disk IO statistics in DataNode based on 
HDFS-10930. The metrics added should cover the open ticket from HDFS-4169 "Add 
per-disk latency metrics to DataNode".)

> Adding per disk IO statistics in DataNode.
> --
>
> Key: HDFS-10959
> URL: https://issues.apache.org/jira/browse/HDFS-10959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This ticket is opened to support per disk IO statistics in DataNode based on 
> HDFS-10930. The statistics added will make it easier to implement HDFS-4169 
> "Add per-disk latency metrics to DataNode".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10959) Adding per disk IO statistics in DataNode.

2016-10-04 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10959:
-

 Summary: Adding per disk IO statistics in DataNode.
 Key: HDFS-10959
 URL: https://issues.apache.org/jira/browse/HDFS-10959
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to support per disk IO statistics in DataNode based on 
HDFS-10930. The metrics added should cover the open ticket from HDFS-4169 "Add 
per-disk latency metrics to DataNode".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-10-04 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10958:
-

 Summary: Add instrumentation hooks around Datanode disk IO
 Key: HDFS-10958
 URL: https://issues.apache.org/jira/browse/HDFS-10958
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to add instrumentation hooks around Datanode disk IO 
based on refactor work from HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-10-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Summary: Refactor: Wrap Datanode IO related operations  (was: Refactor 
Datanode IO operations into a single class)

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, 
> BlockReceiver.java, BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO-related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor Datanode IO operations into a single class

2016-10-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HDFS-10930.01.patch

Attaching an initial patch that refactors IO-related operations into a 
DatanodeIO.java class. I will open follow-up JIRAs on instrumentation, 
per-volume IO-related metrics collection, etc. based on that.
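
A rough sketch of the wrapping idea; the class name and the metrics hook are 
assumptions for illustration, and the attached patch is authoritative:

{code}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Funnel DataNode file writes through one wrapper so timing hooks and
// metrics can live in a single place instead of being spread across classes.
class TimedOutputStream extends FilterOutputStream {
  private final VolumeIOMetrics metrics; // hypothetical, see HDFS-4169 sketch

  TimedOutputStream(OutputStream out, VolumeIOMetrics metrics) {
    super(out);
    this.metrics = metrics;
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    long begin = System.nanoTime();
    out.write(b, off, len); // delegate directly; avoid byte-by-byte default
    metrics.addWrite((System.nanoTime() - begin) / 1_000_000);
  }
}
{code}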

> Refactor Datanode IO operations into a single class
> ---
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, 
> BlockReceiver.java, BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO-related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10957) Retire BKJM from trunk

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546238#comment-15546238
 ] 

Hadoop QA commented on HDFS-10957:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
44s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m  
8s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10957 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831564/HDFS-10957-01.patch |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  checkstyle  cc  |
| uname | Linux 4fc607249805 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 382307c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17002/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17002/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retire BKJM from trunk
> --
>
> Key: HDFS-10957
> URL: https://issues.apache.org/jira/browse/HDFS-10957
> Project: Hadoop HDFS
>  Issue Type: Task
>

[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546235#comment-15546235
 ] 

Wei-Chiu Chuang commented on HDFS-10429:


Hello [~aplusplus]
Thanks for submitting a new patch. I have a few questions regarding it. 

* First of all, should closeThreads(false); be called before 
try...completeFile? Pre-HDFS-9812 code had the method call before that.
* Would you also like to contribute a unit test? Or is there a way to reproduce 
it? I honestly can't reproduce the warning, either in a unit test or on a real 
cluster.
* Does DFSStripedOutputStream suffer from the same annoying warning? It seems 
the logic is mostly the same. [~jingzhao], would you mind commenting on this?

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch
>
>
> Every time I use 'hdfs dfs -put' upload file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes 
> InterruptedException.
> A simple fix is to log at debug level instead of warn level.
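
A hedged paraphrase of the proposed one-line change in 
DataStreamer#closeResponder; the surrounding code is approximate:

{code}
private void closeResponder() {
  if (response != null) {
    try {
      response.close();
      response.join();
    } catch (InterruptedException e) {
      // was LOG.warn("Caught exception", e); the interrupt is expected
      // after HDFS-9812, so log it at debug level instead.
      LOG.debug("Caught exception", e);
    } finally {
      response = null;
    }
  }
}
{code}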



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10948) HDFS setOwner no-op should not generate an edit log transaction

2016-10-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546230#comment-15546230
 ] 

John Zhuge commented on HDFS-10948:
---

We typically encourage third-party client libraries to model DFSClient 
behavior unless a crashing case is discovered. It seems pretty simple to add 
the check to Snakebite.

Would you still like a patch on the NameNode? I will be happy to code review, 
though it may be a little tricky to add a unit test.

> HDFS setOwner no-op should not generate an edit log transaction
> ---
>
> Key: HDFS-10948
> URL: https://issues.apache.org/jira/browse/HDFS-10948
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Karthik Palaniappan
>Assignee: John Zhuge
>Priority: Minor
> Attachments: testNullUserNullGroup.patch
>
>
> The HDFS setOwner RPC takes a path, an optional username, and an optional 
> groupname (see ClientNamenodeProtocol.proto). If neither username nor 
> groupname is set, the RPC is a no-op. However, an entry in the edit log is 
> still committed, and when retrieved through getEditsFromTxid, both username 
> and groupname are set to the empty string.
> This does not match the behavior of the setTimes RPC. There, atime and mtime 
> are both required, and you can set "-1" to indicate "don't change the time". 
> If both are set to "-1", the RPC is a no-op, and appropriately no edit log 
> transaction is created.
> Here's an example of an (untested) patch to fix the issue:
> ```
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> index 3dc1c30..0728bc5 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> @@ -77,6 +77,7 @@ static HdfsFileStatus setOwner(
>  if (FSDirectory.isExactReservedName(src)) {
>throw new InvalidPathException(src);
>  }
> +boolean changed = false;
>  FSPermissionChecker pc = fsd.getPermissionChecker();
>  INodesInPath iip;
>  fsd.writeLock();
> @@ -92,11 +93,13 @@ static HdfsFileStatus setOwner(
>throw new AccessControlException("User does not belong to " + 
> group);
>  }
>}
> -  unprotectedSetOwner(fsd, src, username, group);
> +  changed = unprotectedSetOwner(fsd, src, username, group);
>  } finally {
>fsd.writeUnlock();
>  }
> -fsd.getEditLog().logSetOwner(src, username, group);
> +if (changed) {
> +  fsd.getEditLog().logSetOwner(src, username, group);
> +}
>  return fsd.getAuditFileInfo(iip);
>}
>  
> @@ -287,22 +290,27 @@ static void unprotectedSetPermission(
>  inode.setPermission(permissions, snapshotId);
>}
>  
> -  static void unprotectedSetOwner(
> +  static boolean unprotectedSetOwner(
>FSDirectory fsd, String src, String username, String groupname)
>throws FileNotFoundException, UnresolvedLinkException,
>QuotaExceededException, SnapshotAccessControlException {
>  assert fsd.hasWriteLock();
> +boolean status = false;
>  final INodesInPath inodesInPath = fsd.getINodesInPath4Write(src, true);
>  INode inode = inodesInPath.getLastINode();
>  if (inode == null) {
>throw new FileNotFoundException("File does not exist: " + src);
>  }
>  if (username != null) {
> +  status = true;
>inode = inode.setUser(username, inodesInPath.getLatestSnapshotId());
>  }
>  if (groupname != null) {
> +  status = true;
>inode.setGroup(groupname, inodesInPath.getLatestSnapshotId());
>  }
> +
> +return status;
>}
>  
>static boolean setTimes(
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546170#comment-15546170
 ] 

Hadoop QA commented on HDFS-10956:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 40 unchanged - 1 fixed = 40 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831563/HDFS-10956.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b1f180823663 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 382307c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17001/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17001/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17001/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue 

[jira] [Commented] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546148#comment-15546148
 ] 

Hadoop QA commented on HDFS-10922:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 27s{color} | {color:orange} root: The patch generated 2 new + 132 unchanged 
- 0 fixed = 134 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 
39s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831558/HDFS-10922.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 673204f3189d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ef7f06f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17000/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17000/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17000/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |

[jira] [Commented] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546144#comment-15546144
 ] 

Hadoop QA commented on HDFS-10609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
52s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 583 unchanged - 5 fixed = 586 total (was 588) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3597 line(s) that end in whitespace. Use 
git apply --whitespace=fix <patch_file>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  1m 
27s{color} | {color:red} The patch has 113 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | 

[jira] [Commented] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546139#comment-15546139
 ] 

Hadoop QA commented on HDFS-10922:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 26s{color} | {color:orange} root: The patch generated 2 new + 132 unchanged 
- 0 fixed = 134 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 52s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestBlockStoragePolicy |
| Timed out junit tests | org.apache.hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831557/HDFS-10922.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e963df38326e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ef7f06f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16999/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16999/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HDFS-10955) Pass IIP for FSDirAttr methods

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546115#comment-15546115
 ] 

Hadoop QA commented on HDFS-10955:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 91 unchanged - 10 fixed = 94 total (was 101) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831545/HDFS-10955.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c017127681f8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 382307c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17004/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17004/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17004/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17004/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17004/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546055#comment-15546055
 ] 

Hadoop QA commented on HDFS-10429:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10429 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831572/HDFS-10429.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 02dbeb38c677 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 382307c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17003/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17003/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DataStreamer interrupted warning always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, 

[jira] [Commented] (HDFS-10948) HDFS setOwner no-op should not generate an edit log transaction

2016-10-04 Thread Karthik Palaniappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546041#comment-15546041
 ] 

Karthik Palaniappan commented on HDFS-10948:


[~jzhuge] Thanks for the quick response! I wasn't using the Java HDFS client; I 
was using Snakebite 
(https://github.com/spotify/snakebite/blob/master/snakebite/client.py), so I 
was using the protobufs directly. I was doing this:

{code}
>>> from snakebite.client import Client
>>> client = Client('172.20.0.2', 8020, use_trash=False)
>>> client.chown(['/'], ":")
<generator object chown at 0x...>
>>> print [x for x in _]
[{'path': '/', 'result': True}]
{code}

Snakebite does not set either owner or group (so they're both null).
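
For reference, here is a minimal Java sketch (my own illustration, not 
Snakebite code) of the wire request this produces: a {{SetOwnerRequestProto}} 
with neither optional field populated, which is exactly the no-op case this 
issue is about.

{code:java}
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.SetOwnerRequestProto;

public class SetOwnerNoOpExample {
  public static void main(String[] args) {
    // Neither setUsername() nor setGroupname() is called, so both optional
    // fields stay unset -- yet the server still logs an edit today.
    SetOwnerRequestProto req = SetOwnerRequestProto.newBuilder()
        .setSrc("/")
        .build();
    System.out.println("hasUsername=" + req.hasUsername());   // false
    System.out.println("hasGroupname=" + req.hasGroupname()); // false
  }
}
{code}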

> HDFS setOwner no-op should not generate an edit log transaction
> ---
>
> Key: HDFS-10948
> URL: https://issues.apache.org/jira/browse/HDFS-10948
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Karthik Palaniappan
>Assignee: John Zhuge
>Priority: Minor
> Attachments: testNullUserNullGroup.patch
>
>
> The HDFS setOwner RPC takes a path, optional username, and optional groupname 
> (see ClientNamenodeProtocol.proto). If neither username nor groupname are 
> set, the RPC is a no-op. However, an entry in the edit log is still 
> committed, and when retrieved through getEditsFromTxid, both username and 
> groupname are set to the empty string.
> This does not match the behavior of the setTimes RPC. There, atime and mtime 
> are both required, and you can set "-1" to indicate "don't change the time". 
> If both are set to "-1", the RPC is a no-op, and appropriately no edit log 
> transaction is created.
> Here's an example of an (untested) patch to fix the issue:
> ```
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> index 3dc1c30..0728bc5 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
> @@ -77,6 +77,7 @@ static HdfsFileStatus setOwner(
>  if (FSDirectory.isExactReservedName(src)) {
>throw new InvalidPathException(src);
>  }
> +boolean changed = false;
>  FSPermissionChecker pc = fsd.getPermissionChecker();
>  INodesInPath iip;
>  fsd.writeLock();
> @@ -92,11 +93,13 @@ static HdfsFileStatus setOwner(
>throw new AccessControlException("User does not belong to " + 
> group);
>  }
>}
> -  unprotectedSetOwner(fsd, src, username, group);
> +  changed = unprotectedSetOwner(fsd, src, username, group);
>  } finally {
>fsd.writeUnlock();
>  }
> -fsd.getEditLog().logSetOwner(src, username, group);
> +if (changed) {
> +  fsd.getEditLog().logSetOwner(src, username, group);
> +}
>  return fsd.getAuditFileInfo(iip);
>}
>  
> @@ -287,22 +290,27 @@ static void unprotectedSetPermission(
>  inode.setPermission(permissions, snapshotId);
>}
>  
> -  static void unprotectedSetOwner(
> +  static boolean unprotectedSetOwner(
>FSDirectory fsd, String src, String username, String groupname)
>throws FileNotFoundException, UnresolvedLinkException,
>QuotaExceededException, SnapshotAccessControlException {
>  assert fsd.hasWriteLock();
> +boolean status = false;
>  final INodesInPath inodesInPath = fsd.getINodesInPath4Write(src, true);
>  INode inode = inodesInPath.getLastINode();
>  if (inode == null) {
>throw new FileNotFoundException("File does not exist: " + src);
>  }
>  if (username != null) {
> +  status = true;
>inode = inode.setUser(username, inodesInPath.getLatestSnapshotId());
>  }
>  if (groupname != null) {
> +  status = true;
>inode.setGroup(groupname, inodesInPath.getLatestSnapshotId());
>  }
> +
> +return status;
>}
>  
>static boolean setTimes(
> ```
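
To complement the patch above, here is a hedged sketch of a regression test 
for the fixed behavior (class and test names assumed; this is an illustration, 
not the attached testNullUserNullGroup.patch):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.namenode.FSEditLog;
import org.junit.Test;

public class TestSetOwnerNoOp {
  @Test
  public void testNullUserNullGroupSkipsEditLog() throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(new Configuration()).build();
    try {
      cluster.waitActive();
      FSEditLog editLog = cluster.getNamesystem().getEditLog();
      long before = editLog.getLastWrittenTxId();
      // Both username and groupname are null, so the RPC must be a no-op.
      cluster.getNameNodeRpc().setOwner("/", null, null);
      assertEquals("a no-op setOwner should not log a transaction",
          before, editLog.getLastWrittenTxId());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}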



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10955) Pass IIP for FSDirAttr methods

2016-10-04 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10955:
--
Status: Patch Available  (was: Open)

> Pass IIP for FSDirAttr methods
> --
>
> Key: HDFS-10955
> URL: https://issues.apache.org/jira/browse/HDFS-10955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10955.patch
>
>
> Methods should always use the resolved IIP instead of re-resolving the path.
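
As a hedged, self-contained illustration of the pattern (toy code, not the 
actual patch): resolve the path once and pass the resolved object to every 
helper, so nothing re-resolves the string.

{code:java}
final class Resolved {
  final String path;
  Resolved(String p) { path = p; }
}

public class PassResolvedExample {
  // Stand-in for the expensive path resolution done once by the caller.
  static Resolved resolve(String path) {
    return new Resolved(path);
  }

  // Helpers take the resolved object instead of the raw path string.
  static void setOwner(Resolved r, String user) {
    System.out.println("setOwner on " + r.path + " -> " + user);
  }

  public static void main(String[] args) {
    Resolved r = resolve("/dir/file"); // resolve once
    setOwner(r, "alice");              // reuse it; no re-resolution
  }
}
{code}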



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2016-10-04 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546018#comment-15546018
 ] 

Inigo Goiri commented on HDFS-10629:


[~jakace], a couple of small comments:
* I'm not sure, but I think the configuration keys shouldn't be in 
{{HdfsClientConfigKeys}} but in {{DFSConfigKeys}}, as they are server settings. 
Can anybody confirm? In addition, we need to add them to hdfs-default.xml (see 
the sketch at the end of this comment).
* {{Router#initAndStartRouter()}} needs to use the {{configuration}} parameter 
and not the internal one.

[~mingma], I would eventually like to add the statistics to 
{{NamenodeStatusReport}} so they can be reported in the UI as an aggregate 
view, but let's leave that for later.
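
To illustrate the first point, a hypothetical sketch (the key names are 
invented for the example and are not from the patch):

{code:java}
// Hypothetical server-side settings declared in DFSConfigKeys rather than
// HdfsClientConfigKeys; each would also get a matching <property> entry
// with its default value and a description in hdfs-default.xml.
public class DFSConfigKeysSketch {
  public static final String DFS_ROUTER_HANDLER_COUNT_KEY =
      "dfs.federation.router.handler.count";
  public static final int DFS_ROUTER_HANDLER_COUNT_DEFAULT = 10;
}
{code}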

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546016#comment-15546016
 ] 

Kihwal Lee edited comment on HDFS-10956 at 10/4/16 5:17 PM:


Actually I've been thinking about doing this myself. We are getting closer to 
fulfilling the initial promise of "don't worry, snapshot won't affect you if 
you don't use it".


was (Author: kihwal):
Actually I've been thinking about doing this myself. Finally we are getting 
close to fulfilling the initial promise of "don't worry, snapshot won't affect 
you if you don't use it".

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree(s) is scanned for 
> snapshottable directories.  The performance penalty may become very expensive 
> for dense trees.  The snapshot manager knows if snapshots are in use, so 
> clusters not using snapshots should not take the performance penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10683) Make class Token$PrivateToken private

2016-10-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10683:
--
Status: Patch Available  (was: In Progress)

Re-submitting 002 to trigger the tests again.

> Make class Token$PrivateToken private
> -
>
> Key: HDFS-10683
> URL: https://issues.apache.org/jira/browse/HDFS-10683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: fs, ha, security, security_token
> Attachments: HDFS-10683.001.patch, HDFS-10683.002.patch
>
>
> Avoid {{instanceof}} or typecasting of {{Token.PrivateToken}} by introducing 
> an interface method in {{Token}}. Make class {{Token.PrivateToken}} private. 
> Use a factory method instead.
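
A hedged sketch of the shape this could take (standalone toy class; the method 
and constructor names are assumed, not necessarily the committed API):

{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.TokenIdentifier;

// Standalone sketch, not the real org.apache.hadoop.security.token.Token.
public class Token<T extends TokenIdentifier> {
  private Text service = new Text();

  public Token() {}
  public Token(Token<T> other) { this.service = new Text(other.service); }

  public Text getService() { return service; }
  public void setService(Text s) { this.service = s; }

  // Interface method: private clones override it, so callers never need an
  // instanceof check or a cast.
  public boolean isPrivate() { return false; }

  // Factory method: the only way to obtain a private clone now that the
  // PrivateToken class itself is private.
  public Token<T> privateClone(Text newService) {
    return new PrivateToken<>(this, newService);
  }

  private static class PrivateToken<T extends TokenIdentifier>
      extends Token<T> {
    PrivateToken(Token<T> publicToken, Text newService) {
      super(publicToken);
      setService(newService);
    }
    @Override
    public boolean isPrivate() { return true; }
  }
}
{code}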



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546016#comment-15546016
 ] 

Kihwal Lee commented on HDFS-10956:
---

Actually I've been thinking about doing this myself. Finally we are getting 
close to fulfilling the initial promise of "don't worry, snapshot won't affect 
you if you don't use it".

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree(s) is scanned for 
> snapshottable directories.  The performance penalty may become very expensive 
> for dense trees.  The snapshot manager knows if snapshots are in use, so 
> clusters not using snapshots should not take the performance penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-10-04 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated HDFS-10429:

Attachment: HDFS-10429.2.patch

New patch according to [~jingzhao]'s comments. The log level hasn't been 
changed, so that we won't hide problems.

> DataStreamer interrupted warning always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always printed a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes an 
> InterruptedException.
> A simple fix is to log at debug level instead of warning level.
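
For reference, a hedged sketch of the debug-level approach the description 
proposes (simplified from {{DataStreamer#closeResponder}}; per the comment 
above, the latest patch reportedly keeps the original log level, and the 
surrounding {{response}} and {{LOG}} fields are assumed):

{code:java}
private void closeResponder() {
  if (response != null) {
    try {
      response.close();
      response.join();
    } catch (InterruptedException e) {
      // Expected since HDFS-9812 forces threads closed during closeImpl(),
      // so log at debug instead of warn to avoid alarming CLI users.
      LOG.debug("Caught exception", e);
      Thread.currentThread().interrupt(); // preserve the interrupt status
    } finally {
      response = null;
    }
  }
}
{code}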



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10957) Retire BKJM from trunk

2016-10-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10957:
-
Hadoop Flags: Incompatible change
Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Retire BKJM from trunk
> --
>
> Key: HDFS-10957
> URL: https://issues.apache.org/jira/browse/HDFS-10957
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10957-01.patch
>
>
> Based on discussions in the mailing list 
> [here|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3c1288296033.5942327.1469727453010.javamail.ya...@mail.yahoo.com%3E],
>  BKJM is no longer required, as QJM supports the required functionality with 
> minimal deployment overhead.
> This Jira is for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10957) Retire BKJM from trunk

2016-10-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10957:
-
Attachment: HDFS-10957-01.patch

Attaching patch for Jenkins.

> Retire BKJM from trunk
> --
>
> Key: HDFS-10957
> URL: https://issues.apache.org/jira/browse/HDFS-10957
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10957-01.patch
>
>
> Based on discussions in the mailing list 
> [here|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3c1288296033.5942327.1469727453010.javamail.ya...@mail.yahoo.com%3E],
>  BKJM is no longer required, as QJM supports the required functionality with 
> minimal deployment overhead.
> This Jira is for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws ConcurrentModificationException

2016-10-04 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545903#comment-15545903
 ] 

Rushabh S Shah commented on HDFS-10878:
---

Thanks [~kihwal] for the review and for committing.

> TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws 
> ConcurrentModificationException
> ---
>
> Key: HDFS-10878
> URL: https://issues.apache.org/jira/browse/HDFS-10878
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10878-1.patch, HDFS-10878.patch
>
>
> This failed in our internal build
> {noformat}
> java.util.ConcurrentModificationException: null
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
>   at java.util.ArrayList$Itr.next(ArrayList.java:851)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
>   at java.lang.String.valueOf(String.java:2982)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at 
> org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
>   at com.google.common.base.Joiner.toString(Joiner.java:533)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:124)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:181)
>   at com.google.common.base.Joiner.join(Joiner.java:237)
>   at com.google.common.base.Joiner.join(Joiner.java:226)
>   at com.google.common.base.Joiner.join(Joiner.java:245)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
> {noformat}
> It's getting an NPE in the following log message:
> {code:title=TestDFSClientRetries.java|borderStyle=solid}
>  @Test
>   public void testIdempotentAllocateBlockAndClose() throws Exception {
> ...
> public Boolean answer(InvocationOnMock invocation) throws 
> Throwable {
>   // complete() may return false a few times before it returns
>   // true. We want to wait until it returns true, and then
>   // make it retry one more time after that.
>   LOG.info("Called complete(: " +
>   Joiner.on(",").join(invocation.getArguments()) + ")");
>...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10957) Retire BKJM from trunk

2016-10-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-10957:


 Summary: Retire BKJM from trunk
 Key: HDFS-10957
 URL: https://issues.apache.org/jira/browse/HDFS-10957
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Based on discussions in the mailing list 
[here|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3c1288296033.5942327.1469727453010.javamail.ya...@mail.yahoo.com%3E],
 BKJM is no longer required, as QJM supports the required functionality with 
minimal deployment overhead.

This Jira is for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10956:
---
Attachment: HDFS-10956.patch

Simple breakout patch from the other IIP changes. It queries the snapshot 
manager for snapshottable dirs; if none exist, the subtree isn't scanned.
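
A minimal sketch of the guard, assuming the {{SnapshotManager}} accessor names 
(the actual patch may wire this differently):

{code:java}
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager;

class SnapshotScanGuardSketch {
  static void checkSubtreeForSnapshots(SnapshotManager sm, INode subtreeRoot)
      throws SnapshotException {
    if (sm.getNumSnapshottableDirs() == 0) {
      // No snapshottable dirs exist anywhere in the namesystem, so a
      // rename/delete cannot touch one: skip the expensive subtree scan.
      return;
    }
    // otherwise fall through to the existing recursive subtree scan
  }
}
{code}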

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree(s) is scanned for 
> snapshottable directories.  The performance penalty may become very expensive 
> for dense trees.  The snapshot manager knows if snapshots are in use, so 
> clusters not using snapshots should not take the performance penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10956:
---
Status: Patch Available  (was: Open)

> Remove rename/delete performance penalty when not using snapshots
> -
>
> Key: HDFS-10956
> URL: https://issues.apache.org/jira/browse/HDFS-10956
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10956.patch
>
>
> When deleting or renaming directories, the entire subtree(s) is scanned for 
> snapshottable directories.  The performance penalty may become very expensive 
> for dense trees.  The snapshot manager knows if snapshots are in use, so 
> clusters not using snapshots should not take the performance penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-10-04 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
Status: In Progress  (was: Patch Available)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, 
> HDFS-10629-HDFS-10467-003.patch, HDFS-10629-HDFS-10467-004.patch, 
> HDFS-10629-HDFS-10467-005.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10956) Remove rename/delete performance penalty when not using snapshots

2016-10-04 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-10956:
--

 Summary: Remove rename/delete performance penalty when not using 
snapshots
 Key: HDFS-10956
 URL: https://issues.apache.org/jira/browse/HDFS-10956
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Daryn Sharp
Assignee: Daryn Sharp


When deleting or renaming directories, the entire subtree(s) is scanned for 
snapshottable directories.  The performance penalty may become very expensive 
for dense trees.  The snapshot manager knows if snapshots are in use, so 
clusters not using snapshots should not take the performance penalty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws ConcurrentModificationException

2016-10-04 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10878:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   Status: Resolved  (was: Patch Available)

Committed from trunk all the way to branch-2.7. Thanks for fixing this, 
[~shahrs87].

> TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws 
> ConcurrentModificationException
> ---
>
> Key: HDFS-10878
> URL: https://issues.apache.org/jira/browse/HDFS-10878
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10878-1.patch, HDFS-10878.patch
>
>
> This failed in our internal build
> {noformat}
> java.util.ConcurrentModificationException: null
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
>   at java.util.ArrayList$Itr.next(ArrayList.java:851)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
>   at java.lang.String.valueOf(String.java:2982)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at 
> org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
>   at com.google.common.base.Joiner.toString(Joiner.java:533)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:124)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:181)
>   at com.google.common.base.Joiner.join(Joiner.java:237)
>   at com.google.common.base.Joiner.join(Joiner.java:226)
>   at com.google.common.base.Joiner.join(Joiner.java:245)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
> {noformat}
> It's getting an NPE in the following log message:
> {code:title=TestDFSClientRetries.java|borderStyle=solid}
>  @Test
>   public void testIdempotentAllocateBlockAndClose() throws Exception {
> ...
> public Boolean answer(InvocationOnMock invocation) throws 
> Throwable {
>   // complete() may return false a few times before it returns
>   // true. We want to wait until it returns true, and then
>   // make it retry one more time after that.
>   LOG.info("Called complete(: " +
>   Joiner.on(",").join(invocation.getArguments()) + ")");
>...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10789) Route webhdfs through the RPC call queue

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545881#comment-15545881
 ] 

Hadoop QA commented on HDFS-10789:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 917 unchanged - 5 fixed = 925 total (was 922) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831389/HDFS-10789-1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f561fce8f501 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ef7f06f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16997/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16997/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16997/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16997/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws ConcurrentModificationException

2016-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545869#comment-15545869
 ] 

Hudson commented on HDFS-10878:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10539 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10539/])
HDFS-10878. TestDFSClientRetries#testIdempotentAllocateBlockAndClose (kihwal: 
rev 382307cbdd94107350fe6fad1acf87d63c9be9d6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java


> TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws 
> ConcurrentModificationException
> ---
>
> Key: HDFS-10878
> URL: https://issues.apache.org/jira/browse/HDFS-10878
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10878-1.patch, HDFS-10878.patch
>
>
> This failed in our internal build
> {noformat}
> java.util.ConcurrentModificationException: null
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
>   at java.util.ArrayList$Itr.next(ArrayList.java:851)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
>   at java.lang.String.valueOf(String.java:2982)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at 
> org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
>   at com.google.common.base.Joiner.toString(Joiner.java:533)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:124)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:181)
>   at com.google.common.base.Joiner.join(Joiner.java:237)
>   at com.google.common.base.Joiner.join(Joiner.java:226)
>   at com.google.common.base.Joiner.join(Joiner.java:245)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
> {noformat}
> It hits a ConcurrentModificationException in the following log message:
> {code:title=TestDFSClientRetries.java|borderStyle=solid}
>  @Test
>   public void testIdempotentAllocateBlockAndClose() throws Exception {
> ...
> public Boolean answer(InvocationOnMock invocation) throws 
> Throwable {
>   // complete() may return false a few times before it returns
>   // true. We want to wait until it returns true, and then
>   // make it retry one more time after that.
>   LOG.info("Called complete(: " +
>   Joiner.on(",").join(invocation.getArguments()) + ")");
>...
> }
> {code}
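
The race above is worth spelling out: {{Joiner}} stringifies each mock
argument, and {{ExtendedBlock#toString}} walks the under-construction block's
replica list while the NameNode may still be mutating it. Below is a minimal
sketch of a mock answer that logs without touching mutable NameNode state; the
class and names are illustrative only, and this is not necessarily what the
attached patch does:

{code:title=Illustrative sketch, not the attached patch|borderStyle=solid}
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

public class SafeCompleteAnswer implements Answer<Boolean> {
  @Override
  public Boolean answer(InvocationOnMock invocation) throws Throwable {
    // Log only immutable facts about the call (the argument count) so that
    // no live NameNode object is stringified while another thread mutates it.
    System.out.println("Called complete() with "
        + invocation.getArguments().length + " argument(s)");
    return true;
  }
}
{code}

Any fix along these lines trades log detail for thread safety; if the actual
argument values are needed in the log, they would have to be copied under the
namesystem lock first.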



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws ConcurrentModificationException

2016-10-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545827#comment-15545827
 ] 

Kihwal Lee commented on HDFS-10878:
---

+1 looks good to me.

> TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws 
> ConcurrentModificationException
> ---
>
> Key: HDFS-10878
> URL: https://issues.apache.org/jira/browse/HDFS-10878
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10878-1.patch, HDFS-10878.patch
>
>
> This failed in our internal build
> {noformat}
> java.util.ConcurrentModificationException: null
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
>   at java.util.ArrayList$Itr.next(ArrayList.java:851)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
>   at java.lang.String.valueOf(String.java:2982)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at 
> org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
>   at com.google.common.base.Joiner.toString(Joiner.java:533)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:124)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:181)
>   at com.google.common.base.Joiner.join(Joiner.java:237)
>   at com.google.common.base.Joiner.join(Joiner.java:226)
>   at com.google.common.base.Joiner.join(Joiner.java:245)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
> {noformat}
> It hits a ConcurrentModificationException in the following log message:
> {code:title=TestDFSClientRetries.java|borderStyle=solid}
>  @Test
>   public void testIdempotentAllocateBlockAndClose() throws Exception {
> ...
> public Boolean answer(InvocationOnMock invocation) throws 
> Throwable {
>   // complete() may return false a few times before it returns
>   // true. We want to wait until it returns true, and then
>   // make it retry one more time after that.
>   LOG.info("Called complete(: " +
>   Joiner.on(",").join(invocation.getArguments()) + ")");
>...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10922:
---
Attachment: HDFS-10922.03.patch

> Adding additional unit tests for Trash
> --
>
> Key: HDFS-10922
> URL: https://issues.apache.org/jira/browse/HDFS-10922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HDFS-10922.02.patch, HDFS-10922.03.patch
>
>
> This ticket is opened to track adding unit tests for Trash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10922:
---
Attachment: HDFS-10922.04.patch

> Adding additional unit tests for Trash
> --
>
> Key: HDFS-10922
> URL: https://issues.apache.org/jira/browse/HDFS-10922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HDFS-10922.02.patch, HDFS-10922.03.patch, 
> HDFS-10922.04.patch
>
>
> This ticket is opened to track adding unit tests for Trash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10922) Adding additional unit tests for Trash

2016-10-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10922:
---
Status: Patch Available  (was: In Progress)

> Adding additional unit tests for Trash
> --
>
> Key: HDFS-10922
> URL: https://issues.apache.org/jira/browse/HDFS-10922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HDFS-10922.02.patch, HDFS-10922.03.patch
>
>
> This ticket is opened to track adding unit tests for Trash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws ConcurrentModificationException

2016-10-04 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545654#comment-15545654
 ] 

Rushabh S Shah commented on HDFS-10878:
---

{noformat}
Failed junit tests:
   hadoop.hdfs.server.namenode.TestDecommissioningStatus
   hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation 
{noformat}
Both of these tests pass fine on my local machine.

> TestDFSClientRetries#testIdempotentAllocateBlockAndClose throws 
> ConcurrentModificationException
> ---
>
> Key: HDFS-10878
> URL: https://issues.apache.org/jira/browse/HDFS-10878
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10878-1.patch, HDFS-10878.patch
>
>
> This failed in our internal build
> {noformat}
> java.util.ConcurrentModificationException: null
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
>   at java.util.ArrayList$Itr.next(ArrayList.java:851)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
>   at java.lang.String.valueOf(String.java:2982)
>   at java.lang.StringBuilder.append(StringBuilder.java:131)
>   at 
> org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
>   at com.google.common.base.Joiner.toString(Joiner.java:533)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:124)
>   at com.google.common.base.Joiner.appendTo(Joiner.java:181)
>   at com.google.common.base.Joiner.join(Joiner.java:237)
>   at com.google.common.base.Joiner.join(Joiner.java:226)
>   at com.google.common.base.Joiner.join(Joiner.java:245)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
>   at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
> {noformat}
> It hits a ConcurrentModificationException in the following log message:
> {code:title=TestDFSClientRetries.java|borderStyle=solid}
>  @Test
>   public void testIdempotentAllocateBlockAndClose() throws Exception {
> ...
> public Boolean answer(InvocationOnMock invocation) throws 
> Throwable {
>   // complete() may return false a few times before it returns
>   // true. We want to wait until it returns true, and then
>   // make it retry one more time after that.
>   LOG.info("Called complete(: " +
>   Joiner.on(",").join(invocation.getArguments()) + ")");
>...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10609:
---
Attachment: HDFS-10609.branch-2.7.patch

The 2.7 patch passed in my local Yetus run. Maybe it has something to do with 
the patch file name? Uploading a new patch file with a more "normalized" file 
name.

> Uncaught InvalidEncryptionKeyException during pipeline recovery may abort 
> downstream applications
> -
>
> Key: HDFS-10609
> URL: https://issues.apache.org/jira/browse/HDFS-10609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
> Environment: CDH5.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10609.001.patch, HDFS-10609.002.patch, 
> HDFS-10609.003.patch, HDFS-10609.004.patch, HDFS-10609.005.patch, 
> HDFS-10609.branch-2.7.patch
>
>
> In normal operations, if SASL negotiation fails due to 
> {{InvalidEncryptionKeyException}}, it is typically a benign exception, which 
> is caught and retried:
> {code:title=SaslDataTransferServer#doSaslHandshake}
>   if (ioe instanceof SaslException &&
>   ioe.getCause() != null &&
>   ioe.getCause() instanceof InvalidEncryptionKeyException) {
> // This could just be because the client is long-lived and hasn't gotten
> // a new encryption key from the NN in a while. Upon receiving this
> // error, the client will get a new encryption key from the NN and retry
> // connecting to this DN.
> sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
>   } 
> {code}
> {code:title=DFSOutputStream.DataStreamer#createBlockOutputStream}
> if (ie instanceof InvalidEncryptionKeyException && refetchEncryptionKey > 0) {
> DFSClient.LOG.info("Will fetch a new encryption key and retry, " 
> + "encryption key was invalid when connecting to "
> + nodes[0] + " : " + ie);
> {code}
> However, if the exception is thrown during pipeline recovery, the 
> corresponding code does not handle it properly, and the exception spills out 
> to downstream applications such as SOLR, aborting their operation:
> {quote}
> 2016-07-06 12:12:51,992 ERROR org.apache.solr.update.HdfsTransactionLog: 
> Exception closing tlog.
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:1308)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1272)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1433)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1147)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:632)
> 2016-07-06 12:12:51,997 ERROR org.apache.solr.update.CommitTracker: auto 
> commit error...:org.apache.solr.common.SolrException: 
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.solr.update.HdfsTransactionLog.close(HdfsTransactionLog.java:316)
> at 
> org.apache.solr.update.TransactionLog.decref(TransactionLog.java:505)
> at org.apache.solr.update.UpdateLog.addOldLog(UpdateLog.java:380)
> at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:676)
> at 
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:623)
> at 
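
The benign-path snippets quoted in the description boil down to a bounded
refetch-and-retry loop on the client. As a rough sketch of that pattern, with
illustrative placeholders ({{connect}}, {{fetchEncryptionKey}}) rather than
the actual {{DataStreamer}} code:

{code:title=Sketch of the bounded retry pattern, illustrative names|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;

public class KeyRetrySketch {
  // connect() and fetchEncryptionKey() are hypothetical placeholders,
  // not actual HDFS client APIs.
  void connect() throws IOException { /* dial the DataNode */ }
  void fetchEncryptionKey() { /* ask the NN for a fresh key */ }

  void writeWithRetry() throws IOException {
    // One refetch allowed, mirroring the refetchEncryptionKey counter in
    // the quoted DFSOutputStream snippet.
    int refetchEncryptionKey = 1;
    while (true) {
      try {
        connect();   // may throw InvalidEncryptionKeyException
        return;      // success
      } catch (InvalidEncryptionKeyException e) {
        if (refetchEncryptionKey-- <= 0) {
          throw e;   // out of retries: surface the failure
        }
        fetchEncryptionKey(); // get a fresh key, then loop and retry
      }
    }
  }
}
{code}

The bug described here is that the pipeline-recovery path lacks an equivalent
catch, so the exception propagates straight up to the caller.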

[jira] [Updated] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-10-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10609:
---
Attachment: (was: HDFS-10609.branch-2.7.01.patch)

> Uncaught InvalidEncryptionKeyException during pipeline recovery may abort 
> downstream applications
> -
>
> Key: HDFS-10609
> URL: https://issues.apache.org/jira/browse/HDFS-10609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
> Environment: CDH5.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10609.001.patch, HDFS-10609.002.patch, 
> HDFS-10609.003.patch, HDFS-10609.004.patch, HDFS-10609.005.patch
>
>
> In normal operations, if SASL negotiation fails due to 
> {{InvalidEncryptionKeyException}}, it is typically a benign exception, which 
> is caught and retried:
> {code:title=SaslDataTransferServer#doSaslHandshake}
>   if (ioe instanceof SaslException &&
>   ioe.getCause() != null &&
>   ioe.getCause() instanceof InvalidEncryptionKeyException) {
> // This could just be because the client is long-lived and hasn't gotten
> // a new encryption key from the NN in a while. Upon receiving this
> // error, the client will get a new encryption key from the NN and retry
> // connecting to this DN.
> sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
>   } 
> {code}
> {code:title=DFSOutputStream.DataStreamer#createBlockOutputStream}
> if (ie instanceof InvalidEncryptionKeyException && refetchEncryptionKey > 0) {
> DFSClient.LOG.info("Will fetch a new encryption key and retry, " 
> + "encryption key was invalid when connecting to "
> + nodes[0] + " : " + ie);
> {code}
> However, if the exception is thrown during pipeline recovery, the 
> corresponding code does not handle it properly, and the exception spills out 
> to downstream applications such as SOLR, aborting their operation:
> {quote}
> 2016-07-06 12:12:51,992 ERROR org.apache.solr.update.HdfsTransactionLog: 
> Exception closing tlog.
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:1308)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1272)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1433)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1147)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:632)
> 2016-07-06 12:12:51,997 ERROR org.apache.solr.update.CommitTracker: auto 
> commit error...:org.apache.solr.common.SolrException: 
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.solr.update.HdfsTransactionLog.close(HdfsTransactionLog.java:316)
> at 
> org.apache.solr.update.TransactionLog.decref(TransactionLog.java:505)
> at org.apache.solr.update.UpdateLog.addOldLog(UpdateLog.java:380)
> at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:676)
> at 
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:623)
> at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at 
