[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948221#comment-14948221
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2408 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2408/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see these code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN registers with the NN, but it seems the 
> issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called while holding the DN lock, and at the end it 
> also triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
>  calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call is not really needed inside the DN lock.
> Thoughts?
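
This is the classic two-lock inversion. A minimal, self-contained sketch of the 
two call paths (the lock names here are illustrative stand-ins for the real 
DataNode monitor and the BPOfferService read/write lock):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: thread A runs refreshVolumes(), thread B runs
// registrationSucceeded(); the opposite acquisition orders can deadlock.
class LockOrderSketch {
  private final Object dnLock = new Object();  // stands in for the DN monitor
  private final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();

  void refreshVolumes() {                      // DN lock, then bpos lock
    synchronized (dnLock) {
      bposLock.readLock().lock();              // blocks if B holds the writeLock
      try { /* triggerBlockReport -> bpos.toString() */ }
      finally { bposLock.readLock().unlock(); }
    }
  }

  void registrationSucceeded() {               // bpos lock, then DN lock
    bposLock.writeLock().lock();
    try {
      synchronized (dnLock) { /* dn.bpRegistrationSucceeded() */ }
    } finally { bposLock.writeLock().unlock(); }
  }
}
{code}

Moving the block-report trigger out of the synchronized section, as proposed, 
means the refreshVolumes path touches the bpos lock only after releasing the 
DN lock, so a single global lock order is restored.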



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9129:

Attachment: HDFS-9129.002.patch

Thank you [~jingzhao]. The v2 patch addresses the comments.

Thank you [~daryn] for your input. Let me briefly illustrate the current design 
as follows. The patch is not yet complete, and further refactoring may be 
necessary.

Basically, the patch splits the name node safe mode into two levels. The first 
one is in {{FSNamesystem}} and the second one is {{BlockManagerSafeMode}}. The 
main code change has two parts:
# The first-level safe mode code is kept in {{FSNamesystem}}
# The second-level safe mode is moved to the {{blockmanagement}} package

At the beginning, the name node is in *STARTUP* safe mode, where the block 
manager is tracking blocks and data nodes. The name node will leave *STARTUP* 
mode for:
* *OFF*: if either of two conditions is met
*# the second-level safe mode is *OFF*, i.e. the block manager leaves safe 
mode automatically once the threshold and extension are met
*# the administrator leaves safe mode manually
* *MANUALLY*: the administrator enters safe mode manually
* *RESOURCE_LOW*: low resources are detected by the monitor

The first-level safe mode is a simple state machine. Other transitions, like 
*MANUALLY* to *OFF*, are straightforward.

As inferred from the above, the second level is meaningful and valid if and 
only if the first-level safe mode is in *STARTUP*. At the beginning, the block 
manager is in *INITIALIZED* mode, and it will leave this mode when:
* thresholds are met: to *OFF* mode, as no extension is needed
* thresholds are not met: to *THRESHOLD* mode

The *THRESHOLD* mode waits on the block and data node thresholds. Once the 
thresholds are met, the block manager will leave this mode and change to:
* *OFF* if no extension is needed (e.g. the {{extension}} config value is 0)
* *EXTENSION* if an extension is needed

The *EXTENSION* mode waits on the extension period. The block manager will 
leave this mode for *OFF* once both conditions are met:
* the extension period has elapsed (checked by a monitor thread)
* the thresholds are met

The main design motivation is that {{FSNamesystem}} and {{BlockManager}} each 
maintain their own state.
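
A rough sketch of the second-level state machine described above (the names 
are illustrative, not necessarily the patch's actual classes):

{code}
// Hypothetical sketch of the block manager's safe mode transitions.
enum BMSafeMode { INITIALIZED, THRESHOLD, EXTENSION, OFF }

class BlockManagerSafeModeSketch {
  // Only meaningful while the first-level (FSNamesystem) mode is STARTUP.
  static BMSafeMode next(BMSafeMode cur, boolean thresholdsMet,
      boolean needExtension, boolean extensionOver) {
    switch (cur) {
      case INITIALIZED:
        // Straight to OFF when thresholds are already met at startup.
        return thresholdsMet ? BMSafeMode.OFF : BMSafeMode.THRESHOLD;
      case THRESHOLD:
        if (!thresholdsMet) return cur;
        return needExtension ? BMSafeMode.EXTENSION : BMSafeMode.OFF;
      case EXTENSION:
        // A monitor thread re-checks both conditions periodically.
        return (extensionOver && thresholdsMet) ? BMSafeMode.OFF : cur;
      default:
        return cur;
    }
  }
}
{code}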

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948247#comment-14948247
 ] 

Hadoop QA commented on HDFS-9167:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  23m 37s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 27s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 47s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m 14s | Post-patch findbugs 
hadoop-client compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 28s | Post-patch findbugs hadoop-dist 
compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 42s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs-nfs compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 55s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal compilation is broken. |
| {color:red}-1{color} | findbugs |   1m  9s | Post-patch findbugs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 23s | Post-patch findbugs 
hadoop-mapreduce-project/hadoop-mapreduce-examples compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 37s | Post-patch findbugs 
hadoop-tools/hadoop-archives compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 50s | Post-patch findbugs 
hadoop-tools/hadoop-datajoin compilation is broken. |
| {color:red}-1{color} | findbugs |   2m  4s | Post-patch findbugs 
hadoop-tools/hadoop-distcp compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 18s | Post-patch findbugs 
hadoop-tools/hadoop-extras compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 31s | Post-patch findbugs 
hadoop-tools/hadoop-gridmix compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 45s | Post-patch findbugs 
hadoop-tools/hadoop-rumen compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 58s | Post-patch findbugs 
hadoop-tools/hadoop-streaming compilation is broken. |
| {color:green}+1{color} | findbugs |   2m 58s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   0m 11s | Pre-build of native portion |
| {color:green}+1{color} | client tests |   0m 12s | Tests passed in 
hadoop-client. |
| {color:green}+1{color} | dist tests |   0m 12s | Tests passed in hadoop-dist. 
|
| {color:green}+1{color} | mapreduce tests |   5m 56s | Tests passed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | mapreduce tests |   0m 37s | Tests passed in 
hadoop-mapreduce-examples. |
| {color:green}+1{color} | tools/hadoop tests |   0m 56s | Tests passed in 
hadoop-archives. |
| {color:green}+1{color} | tools/hadoop tests |   0m 23s | Tests passed in 
hadoop-datajoin. |
| {color:green}+1{color} | tools/hadoop tests |   6m 31s | Tests passed in 
hadoop-distcp. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in 
hadoop-extras. |
| {color:green}+1{color} | tools/hadoop tests |  14m 46s | Tests passed in 
hadoop-gridmix. |
| {color:green}+1{color} | tools/hadoop tests |   0m 21s | Tests passed in 
hadoop-rumen. |
| {color:green}+1{color} | tools/hadoop tests |   6m 21s | Tests passed in 
hadoop-streaming. |
| {color:green}+1{color} | hdfs tests |   1m 48s | Tests passed in 
hadoop-hdfs-nfs. |
| {color:green}+1{color} | hdfs tests |   2m 49s | Tests passed in bkjournal. |
| | |  94m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765529/HDFS-9167.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12855/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-nfs.html
 |
| Pre-patch Findbugs warnings | 

[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948248#comment-14948248
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #507 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/507/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948249#comment-14948249
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1235 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1235/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948288#comment-14948288
 ] 

Hadoop QA commented on HDFS-8777:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   9m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  0s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 43s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 185m 45s | Tests failed in hadoop-hdfs. |
| | | 212m 33s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| Timed out tests | org.apache.hadoop.hdfs.TestFileAppend2 |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765435/HDFS-8777-02.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12852/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12852/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12852/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12852/console |


This message was automatically generated.

> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify that 
> fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948159#comment-14948159
 ] 

Yongjun Zhang commented on HDFS-8164:
-

Committed to trunk and branch-2. Thanks Xiao for the contribution.

Thanks [~cnauroth] for reporting the issue and [~vinayrpet] for the review.





> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-8164:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948164#comment-14948164
 ] 

Hadoop QA commented on HDFS-8941:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  30m 14s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |  11m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  15m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 36s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   4m 21s | The applied patch generated  1 
new checkstyle issues (total was 21, now 21). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 48s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   6m 56s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 112m 53s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 42s | Tests passed in 
hadoop-hdfs-client. |
| | | 190m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestClusterId |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | org.apache.hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765509/HDFS-8941-04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/console |


This message was automatically generated.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This jira is to discuss and provide an 
> implementation of the same.
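
For reference, {{FileSystem}} already provides the usual pattern for this via 
{{fixRelativePart}}, which qualifies a relative path against the working 
directory before the RPC; a minimal sketch of that pattern (not necessarily 
the patch's exact change):

{code}
import org.apache.hadoop.fs.Path;

class RelativePathSketch {
  // Mirrors FileSystem#fixRelativePart: a path whose URI path component is
  // not absolute is resolved against the working directory.
  static Path fixRelativePart(Path p, Path workingDir) {
    return p.isUriPathAbsolute() ? p : new Path(workingDir, p);
  }
}
{code}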



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948225#comment-14948225
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  19m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 14 new or modified test files. |
| {color:green}+1{color} | javac |   8m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 31s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 22s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 33s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m 11s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 44s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 47s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  66m 41s | Tests failed in hadoop-hdfs. |
| | | 118m  6s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.web.TestWebHDFS |
| Timed out tests | org.apache.hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762600/HDFS-9139.03.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch
>
>
> Forked from HADOOP-11984.
> Building on the initial and significant work from [~cnauroth], this Jira is to 
> track and support parallel test runs for the HDFS pre-commit.
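
In Maven terms, the usual way to run JUnit tests in parallel forked JVMs is a 
surefire setting along these lines (an illustrative snippet, not necessarily 
this patch's exact configuration; Hadoop wires such settings behind a 
{{parallel-tests}} profile):

{code}
<!-- Illustrative maven-surefire-plugin settings for parallel test runs -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>4</forkCount>          <!-- number of JVMs to fork -->
    <reuseForks>false</reuseForks>    <!-- fresh JVM per fork -->
  </configuration>
</plugin>
{code}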



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9071) chooseTargets in ReplicationWork may pass incomplete srcPath

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948205#comment-14948205
 ] 

Hadoop QA commented on HDFS-9071:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  9s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 19s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 20s | The applied patch generated  3 
new checkstyle issues (total was 177, now 179). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 188m 47s | Tests passed in hadoop-hdfs. 
|
| | | 234m 35s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765510/HDFS-9071.0003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12848/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12848/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12848/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12848/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12848/console |


This message was automatically generated.

> chooseTargets in ReplicationWork may pass incomplete srcPath
> 
>
> Key: HDFS-9071
> URL: https://issues.apache.org/jira/browse/HDFS-9071
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9071.0001.patch, HDFS-9071.0001.patch, 
> HDFS-9071.0003.patch
>
>
> I've observed that chooseTargets in ReplicationWork may pass an incomplete 
> srcPath (one not starting with '/') to the block placement policy.
> It is possible that srcPath is used extensively in a custom placement policy. 
> In that case, the incomplete srcPath may further cause an AssertionError if one 
> tries to get the INode with it inside the placement policy.
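
For instance, a custom policy that resolves the INode from srcPath would trip 
exactly this way (an illustrative guard only, not code from the patch):

{code}
import org.apache.hadoop.fs.Path;

class SrcPathGuardSketch {
  // A srcPath missing the leading '/' cannot be resolved to an INode; an
  // assertion like this inside a custom placement policy would fire.
  static void checkSrcPath(String srcPath) {
    assert srcPath.startsWith(Path.SEPARATOR)
        : "srcPath must be absolute: " + srcPath;
  }
}
{code}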



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948286#comment-14948286
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch
>
>
> Forked from HADOOP-11984.
> Building on the initial and significant work from [~cnauroth], this Jira is to 
> track and support parallel test runs for the HDFS pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948350#comment-14948350
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #470 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/470/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see these code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN registers with the NN, but it seems the 
> issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called while holding the DN lock, and at the end it 
> also triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
>  calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call is not really needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948156#comment-14948156
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8593 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8593/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: HDFS-9167.001.patch

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch, HDFS-9167.001.patch
>
>
> Now that the implementation of the client has been moved to 
> hadoop-hdfs-client, we should update the POMs of the other modules in Hadoop to 
> depend on hdfs-client instead of hdfs.
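
The per-module change is presumably the usual Maven dependency swap, along 
these lines (an illustrative snippet; versions are managed by the parent POM):

{code}
<!-- Before: depends on the full server-side hdfs module -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
</dependency>

<!-- After: the client-only artifact -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
</dependency>
{code}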



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-08 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-1172:
---
Attachment: HDFS-1172.011.patch

I attached an updated patch as 011.

* pendingReplications is updated only before file completion.
* Refactored the test code in TestReplication, using Mockito rather than adding 
test code to BPOfferService.

> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, replicateBlocksFUC.patch, 
> replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948168#comment-14948168
 ] 

Vinayakumar B commented on HDFS-8630:
-

Thanks [~surendrasingh] for working on this.

I have a few comments:
1. Need to rebase the patch with the latest trunk code.
2. In some places the formatting is not proper, so please format the code on 
the changed lines.
3. In WebHdfsFileSystem.java, {{XAttrEncodingParam}} is not required to be 
passed.
{code}
+return new FsPathResponseRunner(
+op, null, new XAttrEncodingParam(XAttrCodec.HEX))
{code}

4. In the tests, in {{testGetAllStoragePolicy}}, I think you are just doing 
duplicate work. WebHDFS also uses HTTP to get the storage policies, and you are 
explicitly querying via a PUT request. I think it would be better to get the 
expected values via DFS, and the actual values via both WebHDFS and a direct 
PUT query.

5. Same as #4: in {{testGetandSetStoragePolicy}}, you can cross-verify each 
operation again by using DFS.
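
Concretely, the cross-check could look something like this (a hypothetical 
sketch: it assumes the patch exposes get/setStoragePolicy on 
{{WebHdfsFileSystem}} so that both sides are plain {{FileSystem}} calls; names 
and setup are illustrative):

{code}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class StoragePolicyCrossCheckSketch {
  // Expected values come from DFS, actual values from WebHDFS; passing both
  // in as plain FileSystem instances lets the same calls exercise each path.
  static void assertSamePolicy(FileSystem dfs, FileSystem webHdfs, Path p)
      throws Exception {
    dfs.setStoragePolicy(p, "HOT");
    assertEquals(dfs.getStoragePolicy(p).getName(),
        webHdfs.getStoragePolicy(p).getName());
  }
}
{code}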

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948375#comment-14948375
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #498 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/498/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9034) "StorageTypeStats" Metric should not count failed storage.

2015-10-08 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948125#comment-14948125
 ] 

Surendra Singh Lilhore commented on HDFS-9034:
--

The failed tests and the release audit warning are unrelated. I will fix the 
checkstyle issues.
Please review.

> "StorageTypeStats" Metric should not count failed storage.
> --
>
> Key: HDFS-9034
> URL: https://issues.apache.org/jira/browse/HDFS-9034
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9034.01.patch, HDFS-9034.02.patch, 
> dfsStorage_NN_UI2.png
>
>
> When we remove one storage type from all the DNs, the NN UI still shows an 
> entry for that storage type --
> Example: for ARCHIVE
> Steps:
> 1. The ARCHIVE storage type was added for all DNs
> 2. Stopped DNs
> 3. Removed ARCHIVE storages from all DNs
> 4. Restarted DNs
> The NN UI shows the following --
> DFS Storage Types
> Storage Type | Configured Capacity | Capacity Used | Capacity Remaining
> ARCHIVE | 57.18 GB | 64 KB (0%) | 39.82 GB (69.64%) | 64 KB | 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948186#comment-14948186
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2441 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2441/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see these code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN registers with the NN, but it seems the 
> issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called while holding the DN lock, and at the end it 
> also triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
>  calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call is not really needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9071) chooseTargets in ReplicationWork may pass incomplete srcPath

2015-10-08 Thread He Tianyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Tianyi updated HDFS-9071:

Attachment: HDFS-9071.0004.patch

Fix Jenkins issues.

> chooseTargets in ReplicationWork may pass incomplete srcPath
> 
>
> Key: HDFS-9071
> URL: https://issues.apache.org/jira/browse/HDFS-9071
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9071.0001.patch, HDFS-9071.0001.patch, 
> HDFS-9071.0003.patch, HDFS-9071.0004.patch
>
>
> I've observed that chooseTargets in ReplicationWork may pass an incomplete 
> srcPath (one not starting with '/') to the block placement policy.
> It is possible that srcPath is used extensively in a custom placement policy. 
> In that case, the incomplete srcPath may further cause an AssertionError if one 
> tries to get the INode with it inside the placement policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948508#comment-14948508
 ] 

Hudson commented on HDFS-8164:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2409 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2409/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948576#comment-14948576
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #471 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/471/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to the current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948430#comment-14948430
 ] 

Hadoop QA commented on HDFS-4167:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 55s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 26s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 37s | The applied patch generated  2 
new checkstyle issues (total was 157, now 158). |
| {color:red}-1{color} | checkstyle |   3m 39s | The applied patch generated  2 
new checkstyle issues (total was 98, now 98). |
| {color:red}-1{color} | whitespace |   0m 36s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 51s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   7m 50s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 189m  7s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 251m 32s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765523/HDFS-4167.08.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/diffcheckstylehadoop-common.txt
 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12854/console |


This message was automatically generated.

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch, HDFS-4167.07.patch, HDFS-4167.08.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948482#comment-14948482
 ] 

Hadoop QA commented on HDFS-9129:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 50s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 34s | The applied patch generated  
32 new checkstyle issues (total was 626, now 604). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 48s | The patch appears to introduce 3 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 24s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 105m 37s | Tests failed in hadoop-hdfs. |
| | | 155m 12s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestGetBlockLocations |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.namenode.TestSecurityTokenEditLog |
|   | hadoop.hdfs.server.namenode.TestCommitBlockSynchronization |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRpcServer |
|   | org.apache.hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765536/HDFS-9129.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12858/console |


This message was automatically generated.

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948434#comment-14948434
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  18m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 14 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 25s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |  49m 42s | Tests passed in hadoop-hdfs. 
|
| | |  96m  2s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762600/HDFS-9139.03.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12859/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8373) EC files can't be deleted into Trash because Trash isn't an EC zone.

2015-10-08 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948449#comment-14948449
 ] 

GAO Rui commented on HDFS-8373:
---

[~zhz], [~brahmareddy] Great! Thank you for working on this!

> EC files can't be deleted into Trash because Trash isn't an EC zone.
> -
>
> Key: HDFS-8373
> URL: https://issues.apache.org/jira/browse/HDFS-8373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>Assignee: Brahma Reddy Battula
>  Labels: EC
>
> When EC files are deleted, they should be moved into the {{Trash}} directory. 
> But EC files can only be placed under an EC zone, so deleted EC files can 
> not be moved to the {{Trash}} directory.
> The problem could be solved by creating an EC zone (folder) inside {{Trash}} 
> to contain deleted EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-9211:
-
Component/s: build

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0
>
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948487#comment-14948487
 ] 

Rakesh R commented on HDFS-8941:


Please ignore the test case failure and checkstyle warning; they are not 
related to the patch.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This jira is to discuss and provide 
> an implementation of that resolution.
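
For illustration, a minimal, self-contained sketch of the resolution the fix 
needs: qualify a relative Path against the working directory before use. The 
standalone resolve() helper below is hypothetical; it only mirrors what a 
FileSystem-style relative-path fixup does, not the actual patch.
{code}
import org.apache.hadoop.fs.Path;

public class RelativePathSketch {
  // Absolute paths pass through; relative ones are resolved against the
  // working directory (hypothetical helper, for illustration only).
  static Path resolve(Path p, Path workingDir) {
    return p.isUriPathAbsolute() ? p : new Path(workingDir, p);
  }

  public static void main(String[] args) {
    Path wd = new Path("/user/rakesh");
    System.out.println(resolve(new Path("corruptDir"), wd)); // /user/rakesh/corruptDir
    System.out.println(resolve(new Path("/tmp/x"), wd));     // /tmp/x
  }
}
{code}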



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-08 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8630:
-
Attachment: HDFS-8630.002.patch

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The 
> same operations can be allowed through the REST API.
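
As a hedged illustration only: assuming the op names proposed in this patch 
are adopted, the REST calls might look like the following (hypothetical until 
the patch is committed).
{noformat}
# set the storage policy of a path
curl -i -X PUT "http://<NN_HOST>:<PORT>/webhdfs/v1/tmp/f?op=SETSTORAGEPOLICY&storagepolicy=COLD"
# get the storage policy of a path
curl -i "http://<NN_HOST>:<PORT>/webhdfs/v1/tmp/f?op=GETSTORAGEPOLICY"
{noformat}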



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948404#comment-14948404
 ] 

Hadoop QA commented on HDFS-8941:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 12s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  9s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 22s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 57s | The applied patch generated  1 
new checkstyle issues (total was 21, now 21). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  90m 10s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 31s | Tests passed in 
hadoop-hdfs-client. |
| | | 142m 59s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765509/HDFS-8941-04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12856/console |


This message was automatically generated.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This jira is to discuss and provide 
> an implementation of that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948405#comment-14948405
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2442 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2442/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-08 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948506#comment-14948506
 ] 

Surendra Singh Lilhore commented on HDFS-8630:
--

Thanks [~vinayrpet] for the review.
Attached an updated patch.
Please review.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The 
> same operations can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_1.patch

Attached the changes.
Please review.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, they throw an error 
> saying input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> Code can be added to return after an initial check, for example:
> {code}
> // argv[0] holds the only argument; compare strings with equals(), not ==
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Status: Patch Available  (was: Open)

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, they throw an error 
> saying input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> Code can be added to return after an initial check, for example:
> {code}
> // argv[0] holds the only argument; compare strings with equals(), not ==
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9053) Support large directories efficiently using B-Tree

2015-10-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948629#comment-14948629
 ] 

Yi Liu commented on HDFS-9053:
--

Hi [~szetszwo], do you have further comments about it? Thanks. 

> Support large directories efficiently using B-Tree
> --
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053 
> (BTree).patch, HDFS-9053.001.patch, HDFS-9053.002.patch, HDFS-9053.003.patch, 
> HDFS-9053.004.patch
>
>
> This is a long-standing issue that we have tried to improve in the past.  
> Currently we use an ArrayList for the children under a directory, and the 
> children are kept ordered in the list. For insert/delete the time complexity 
> is O\(n): the search is O(log n), but inserting/deleting causes 
> re-allocations and copies of arrays, so for a large directory these 
> operations are expensive.  If the children grow to 1M entries, the ArrayList 
> will resize to > 1M capacity and thus needs > 1M * 8 bytes = 8 MB (the 
> reference size is 8 on a 64-bit system/JVM) of contiguous heap memory, which 
> easily causes full GC in an HDFS cluster where namenode heap memory is 
> already highly used.  I recap the 3 main issues:
> # Insertion/deletion operations in large directories are expensive because 
> of re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous, long-lived heap memory 
> can easily cause full GC problems.
> # Even if most children are removed later, the directory INode still 
> occupies the same amount of heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree to 
> solve the problem, as suggested by [~shv]. 
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and 
> use it to replace the ArrayList. 
> If the element count is not large (less than the maximum degree of a B-Tree 
> node), the B-Tree has only one root node which contains an array for the 
> elements. If the size grows large enough, it will split automatically, and 
> if elements are removed, B-Tree nodes can merge automatically (see 
> more: https://en.wikipedia.org/wiki/B-tree).  This solves the above 3 
> issues.
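
For context, a minimal sketch of the insertion pattern criticized above: a 
sorted ArrayList located by binary search and updated by shifting, which is 
what makes each insert O\(n). This standalone snippet is illustrative only, 
not the INodeDirectory code.
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedChildrenSketch {
  public static void main(String[] args) {
    List<String> children = new ArrayList<>();
    for (String name : new String[] {"b", "a", "c"}) {
      int pos = Collections.binarySearch(children, name); // O(log n) search
      if (pos < 0) {
        children.add(-pos - 1, name); // O(n) shift/copy on every insert
      }
    }
    System.out.println(children); // [a, b, c]
  }
}
{code}
A B-Tree keeps the same sorted order but bounds each node's array by the node 
degree, so an insert shifts only one small array and the heap is allocated in 
many small chunks instead of one huge contiguous one.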



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9139:

Attachment: HDFS-9139.04.patch

Uploading patch.
Removed an unnecessary change in DFSConfigKeys.java.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9208) Disabling atime may fail clients like distCp

2015-10-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948658#comment-14948658
 ] 

Kihwal Lee commented on HDFS-9208:
--

Really, #3 is the only option. I was wrong about atime being 0: when an 
{{INode}} is created, the atime and mtime are set to identical values, so 
options 1) and 2) are not possible.

> Disabling atime may fail clients like distCp
> 
>
> Key: HDFS-9208
> URL: https://issues.apache.org/jira/browse/HDFS-9208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Mingliang Liu
>
> When atime is disabled, {{setTimes()}} throws an exception if the passed-in 
> atime is not -1.  But since atime is not -1, distCp fails when it tries to 
> set the mtime and atime. 
> There are several options:
> 1) make distCp check for 0 atime and call {{setTimes()}} with -1. I am not 
> very enthusiastic about it.
> 2) make NN also accept 0 atime in addition to -1, when the atime support is 
> disabled.
> 3) support setting mtime & atime regardless of the atime support.  The main 
> reason why atime is disabled is to avoid edit logging/syncing during 
> {{getBlockLocations()}} read calls. Explicit setting can be allowed.
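
To make option 3 concrete, here is a hedged, standalone sketch of the intended 
semantics; the class, field, and method names below are illustrative, not the 
actual FSNamesystem code.
{code}
// Option 3 sketch: implicit atime updates from reads stay gated by the
// config, but an explicit setTimes() request is always honored.
public class NameNodeTimesSketch {
  // models dfs.namenode.accesstime.precision > 0
  private final boolean accessTimeEnabled;

  NameNodeTimesSketch(boolean accessTimeEnabled) {
    this.accessTimeEnabled = accessTimeEnabled;
  }

  // Read path (e.g. getBlockLocations): update atime only when enabled,
  // avoiding edit logging/syncing on every read.
  void onRead(InodeTimes inode, long now) {
    if (accessTimeEnabled) {
      inode.atime = now;
    }
  }

  // Explicit client request: always allowed; -1 means "leave unchanged".
  void setTimes(InodeTimes inode, long mtime, long atime) {
    if (mtime != -1) inode.mtime = mtime;
    if (atime != -1) inode.atime = atime;
  }

  static class InodeTimes { long mtime; long atime; }

  public static void main(String[] args) {
    NameNodeTimesSketch nn = new NameNodeTimesSketch(false);
    InodeTimes t = new InodeTimes();
    nn.onRead(t, System.currentTimeMillis());    // no-op: atime disabled
    nn.setTimes(t, 1000L, 2000L);                // still honored explicitly
    System.out.println(t.mtime + " " + t.atime); // 1000 2000
  }
}
{code}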



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9208) Disabling atime may fail clients like distCp

2015-10-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9208:
-
Description: 
When atime is disabled, {{setTimes()}} throws an exception if the passed-in 
atime is not -1.  But since atime is not -1, distCp fails when it tries to set 
the mtime and atime. 

There are several options:

1) make distCp check for 0 atime and call {{setTimes()}} with -1. I am not very 
enthusiastic about it.
2) make NN also accept 0 atime in addition to -1, when the atime support is 
disabled.
3) support setting mtime & atime regardless of the atime support.  The main 
reason why atime is disabled is to avoid edit logging/syncing during 
{{getBlockLocations()}} read calls. Explicit setting can be allowed.

  was:
When atime is disabled, {{setTimes()}} throws an exception if the passed-in 
atime is not -1.  But since atime is 0, distCp fails when it tries to set the 
mtime and atime. 

There are several options:

1) make distCp check for 0 atime and call {{setTimes()}} with -1. I am not very 
enthusiastic about it.
2) make NN also accept 0 atime in addition to -1, when the atime support is 
disabled.
3) support setting mtime & atime regardless of the atime support.  The main 
reason why atime is disabled is to avoid edit logging/syncing during 
{{getBlockLocations()}} read calls. Explicit setting can be allowed.


> Disabling atime may fail clients like distCp
> 
>
> Key: HDFS-9208
> URL: https://issues.apache.org/jira/browse/HDFS-9208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Mingliang Liu
>
> When atime is disabled, {{setTimes()}} throws an exception if the passed-in 
> atime is not -1.  But since atime is not -1, distCp fails when it tries to 
> set the mtime and atime. 
> There are several options:
> 1) make distCp check for 0 atime and call {{setTimes()}} with -1. I am not 
> very enthusiastic about it.
> 2) make NN also accept 0 atime in addition to -1, when the atime support is 
> disabled.
> 3) support setting mtime & atime regardless of the atime support.  The main 
> reason why atime is disabled is to avoid edit logging/syncing during 
> {{getBlockLocations()}} read calls. Explicit setting can be allowed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9173) Erasure Coding: Lease recovery for striped file

2015-10-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9173:

Attachment: HDFS-9173.00.wip.patch

> Erasure Coding: Lease recovery for striped file
> ---
>
> Key: HDFS-9173
> URL: https://issues.apache.org/jira/browse/HDFS-9173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9173.00.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948716#comment-14948716
 ] 

Hadoop QA commented on HDFS-1172:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 22s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 34s | The applied patch generated  8 
new checkstyle issues (total was 438, now 443). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 48s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 40s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 235m 45s | Tests failed in hadoop-hdfs. |
| | | 286m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765552/HDFS-1172.011.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12860/console |


This message was automatically generated.

> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, replicateBlocksFUC.patch, 
> replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: HDFS-9123.004.patch

Removed a line of comment per Yongjun's comment.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.004.patch, HDFS-9181.002.patch, 
> HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message; there may be other 
> ways. This jira is created to discuss how to handle this case better.
> Thanks to [~templedf] for bringing this up.
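
As a minimal sketch of the warning approach (the logger setup is standard 
commons-logging, but stopWorkerThreads() is a hypothetical stand-in for the 
shutdown work whose exceptions HDFS-7533 suppressed):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class UpgradeShutdownSketch {
  private static final Log LOG =
      LogFactory.getLog(UpgradeShutdownSketch.class);

  // Hypothetical stand-in for the real shutdown work.
  static void stopWorkerThreads() throws InterruptedException {
    throw new InterruptedException("worker did not stop in time");
  }

  public static void main(String[] args) {
    try {
      stopWorkerThreads();
    } catch (Exception e) {
      // Warn instead of silently swallowing, so operators can see it.
      LOG.warn("Exception while shutting down for upgrade; continuing", e);
    }
  }
}
{code}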



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-10-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen reassigned HDFS-9144:


Assignee: Bob Hansen

> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9071) chooseTargets in ReplicationWork may pass incomplete srcPath

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948600#comment-14948600
 ] 

Hadoop QA commented on HDFS-9071:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 14s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 31s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  1 
new checkstyle issues (total was 176, now 176). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 220m 10s | Tests failed in hadoop-hdfs. |
| | | 266m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765532/HDFS-9071.0004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12857/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12857/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12857/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12857/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12857/console |


This message was automatically generated.

> chooseTargets in ReplicationWork may pass incomplete srcPath
> 
>
> Key: HDFS-9071
> URL: https://issues.apache.org/jira/browse/HDFS-9071
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9071.0001.patch, HDFS-9071.0001.patch, 
> HDFS-9071.0003.patch, HDFS-9071.0004.patch
>
>
> I've observed that chooseTargets in ReplicationWork may pass an incomplete 
> srcPath (not starting with '/') to the block placement policy.
> It is possible that srcPath is used extensively in a custom placement policy. 
> In this case, the incomplete srcPath may cause an AssertionError if the 
> policy tries to look up the INode with it.
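
A hedged illustration of the failure mode: the method below is a simplified 
stand-in for a custom policy, not the real BlockPlacementPolicy signature, and 
the guard fails fast with a clear message instead of tripping an 
AssertionError deeper in the namesystem.
{code}
public class SrcPathGuardSketch {
  // Simplified stand-in for a custom policy that keys decisions off srcPath.
  static void chooseTargets(String srcPath) {
    if (srcPath == null || !srcPath.startsWith("/")) {
      throw new IllegalArgumentException(
          "expected absolute srcPath, got: " + srcPath);
    }
    // ... resolve the INode by srcPath and pick target nodes ...
  }

  public static void main(String[] args) {
    chooseTargets("/user/data/file"); // OK
    try {
      chooseTargets("file");          // incomplete path, rejected early
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}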



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948657#comment-14948657
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12863/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-10-08 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948661#comment-14948661
 ] 

Bob Hansen commented on HDFS-9144:
--

A proposed structure to start:

__C-API__
libhdfs-compatible layer that is a thin wrapper around the C++-posixish-API

__C++-posixish-API__
Stateful quasi-posix API that will be familiar and easy to consume
Embodies sane default policies and strategies for common operations
implements all asynchronous operations
has synchronous helpers for all asynchronous operations
Wrapper around functional-API, below

FileSystem:
constructs with config object
open() returns a FileHandle
common NN operations
holds state for dead DNs
shared state and thread-safe (implement single lock for FS?)
owns and is wrapper around NameNodeConnection

FileHandle:
supports implicit position and streaming reads (posixy)
stateful and single-threaded with the exception of cancellation method
thread-safe cancel() method will cancel any outstanding I/Os and 
deliver a cancellation error to its continuation
implements reliable reads and error recovery
Maintains a pointer to the posixy-FileSystem for operations on the dead 
DN list
Owns block map
Read operation: will pick appropriate DNConnections and kick off reads
Will eventually cache DNConnections

__functional-API__
Low-level implementation of composable asynchronous blocks

NameNodeConnection: 
Has all configuration params explicitly passed in/set
Owns TCP connection to NN
Encapsulates method call to Message construction
Refactoring of the current FileSystemImpl object
Thread-safe methods
May be connected or not

DataNodeConnection:
Owns TCP connection to the DN
Owns RpcEngine
Encapsulates method call to Message construction
Encapsulates connecting and handshaking to the DN
Thread-safe methods
May be connected or not

AsyncReadBlockOperation: 
Ephemeral object; performs operation once and is done
Takes a DataNodeConnection, block extents as input
Connects DataNodeConnection if necessary and makes RPC calls to read 
data
Single-threaded (although will have callbacks from asio and will call 
into the consumer handler from the asio thread) outside of cancel() method
Encapsulation of current InputStreamImpl::AsyncReadBlock method and its 
associated state

PositionalReadOperation:
Ephemeral object; performs operation once and is done
Owns BlockReadOperation
Owns DNConnection
Given block map and snapshot of dead DN list, creates a new DN 
connection and kicks off BlockReadOperation
Refactoring of current InputStreamImpl::PositionRead
Single-threaded outside of cancel() method
Cannot do DNConnection caching
Functional convenience object for those not using FileHandle
Some retry logic here?
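
To make the layering concrete, a rough interface sketch follows. libhdfs++ 
itself is C++; Java is used here only to match the other snippets in this 
digest, and every name below is illustrative of the proposal above, not an 
actual API:
{code}
// C++-posixish layer: stateful, familiar objects.
interface FileSystemSketch {
  FileHandleSketch open(String path); // common NN operation
}

interface FileHandleSketch {
  int read(byte[] buf, int off, int len); // implicit-position, streaming read
  int pread(long position, byte[] buf);   // positional read
  void cancel();                          // thread-safe cancellation
}

// Functional layer: ephemeral, single-shot operations built on explicit
// connections; each performs its read once and is done.
interface PositionalReadOperationSketch {
  void run(long offset, int length, ReadCallback cb);
  void cancel();
}

interface ReadCallback {
  void done(int bytesRead, Exception error);
}
{code}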


> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948739#comment-14948739
 ] 

Hadoop QA commented on HDFS-9181:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765601/HDFS-9123.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12864/console |


This message was automatically generated.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.004.patch, HDFS-9181.002.patch, 
> HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message; there may be other 
> ways. This jira is created to discuss how to handle this case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: (was: HDFS-9123.004.patch)

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch, 
> HDFS-9181.004.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message; there may be other 
> ways. This jira is created to discuss how to handle this case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Attachment: HDFS-9110.06.patch

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more about efficiency; to be perfectly honest I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and after that processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, without 
> creating an intermediary collection of filenames, by using a FileWalker.
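
For reference, a minimal, self-contained sketch of the single-pass approach 
with Files.walkFileTree; processFile() is a hypothetical stand-in for the real 
per-file work, and no intermediate collection of names is built:
{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WalkExample {
  static void processFile(Path file) {
    System.out.println("processing " + file);
  }

  public static void main(String[] args) throws IOException {
    Files.walkFileTree(Paths.get(args[0]), new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
        processFile(file); // handle each file as it is encountered
        return FileVisitResult.CONTINUE;
      }
    });
  }
}
{code}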



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: Patch Available  (was: In Progress)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more about efficiency; to be perfectly honest I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and after that processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, without 
> creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948811#comment-14948811
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:red}-1{color} | pre-patch |  16m 25s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 14 new or modified test files. |
| {color:green}+1{color} | javac |   8m 19s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  9s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  51m  8s | Tests failed in hadoop-hdfs. |
| | |  95m 15s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765591/HDFS-9139.04.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12863/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12863/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12863/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12863/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948824#comment-14948824
 ] 

Hadoop QA commented on HDFS-9110:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765615/HDFS-9110.06.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12866/console |


This message was automatically generated.

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more about efficiency; to be perfectly honest I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and after that processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, without 
> creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-08 Thread Vinayakumar B
I think the recent 2 builds look more stable.
On Oct 8, 2015 8:45 PM, "Hadoop QA (JIRA)"  wrote:

>
> [
> https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948811#comment-14948811
> ]
>
> Hadoop QA commented on HDFS-9139:
> -
>
> \\
> \\
> | (x) *{color:red}-1 overall{color}* |
> \\
> \\
> || Vote || Subsystem || Runtime || Comment ||
> | {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
> | {color:red}-1{color} | pre-patch |  16m 25s | Findbugs (version )
> appears to be broken on trunk. |
> | {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as
> test-patch has been patched. |
> | {color:green}+1{color} | tests included |   0m  0s | The patch appears
> to include 14 new or modified test files. |
> | {color:green}+1{color} | javac |   8m 19s | There were no new javac
> warning messages. |
> | {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc
> warning messages. |
> | {color:red}-1{color} | release audit |   0m 19s | The applied patch
> generated 1 release audit warnings. |
> | {color:green}+1{color} | checkstyle |   0m 40s | There were no new
> checkstyle issues. |
> | {color:green}+1{color} | shellcheck |   0m  9s | There were no new
> shellcheck (v0.3.3) issues. |
> | {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines
> that end in whitespace. |
> | {color:green}+1{color} | install |   1m 38s | mvn install still works. |
> | {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built
> with eclipse:eclipse. |
> | {color:green}+1{color} | findbugs |   2m 28s | The patch does not
> introduce any new Findbugs (version 3.0.0) warnings. |
> | {color:green}+1{color} | native |   3m 11s | Pre-build of native portion
> |
> | {color:red}-1{color} | hdfs tests |  51m  8s | Tests failed in
> hadoop-hdfs. |
> | | |  95m 15s | |
> \\
> \\
> || Reason || Tests ||
> | Failed unit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
> \\
> \\
> || Subsystem || Report/Notes ||
> | Patch URL |
> http://issues.apache.org/jira/secure/attachment/12765591/HDFS-9139.04.patch
> |
> | Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
> | git revision | trunk / 1107bd3 |
> | Release Audit |
> https://builds.apache.org/job/PreCommit-HDFS-Build/12863/artifact/patchprocess/patchReleaseAuditProblems.txt
> |
> | hadoop-hdfs test log |
> https://builds.apache.org/job/PreCommit-HDFS-Build/12863/artifact/patchprocess/testrun_hadoop-hdfs.txt
> |
> | Test Results |
> https://builds.apache.org/job/PreCommit-HDFS-Build/12863/testReport/ |
> | Java | 1.7.0_55 |
> | uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu
> SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
> | Console output |
> https://builds.apache.org/job/PreCommit-HDFS-Build/12863/console |
>
>
> This message was automatically generated.
>
> > Enable parallel JUnit tests for HDFS Pre-commit
> > 
> >
> > Key: HDFS-9139
> > URL: https://issues.apache.org/jira/browse/HDFS-9139
> > Project: Hadoop HDFS
> >  Issue Type: Improvement
> >Reporter: Vinayakumar B
> >Assignee: Vinayakumar B
> > Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch,
> HDFS-9139.03.patch, HDFS-9139.04.patch
> >
> >
> > Forked from HADOOP-11984.
> > With the initial and significant work from [~cnauroth], this Jira is to
> track and support parallel test runs for HDFS Precommit.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948886#comment-14948886
 ] 

Hadoop QA commented on HDFS-9157:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 11s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 23s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 136m  2s | Tests failed in hadoop-hdfs. |
| | | 182m 11s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestCheckpoint |
|   | org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765581/HDFS-9157_1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12862/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12862/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12862/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12862/console |


This message was automatically generated.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch
>
>
> In both tools, if "-h" is specified as the only option, the tool throws an 
> error because the mandatory input and output options are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, option parsing happens before the "-h" option is checked.
> We can return right after an initial check. Note the index must be
> {{argv[0]}}, and strings must be compared with {{equals}}, not {{==}}:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}
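> For completeness, a self-contained sketch of the early-return pattern (the 
> class and helper names are illustrative, not the actual tool code):
> {code}
> // Minimal sketch: handle -h before any option parsing, so the mandatory
> // -i/-o options are never enforced for a pure help request.
> public class HelpFirstTool {
>   static void printHelp() {
>     System.out.println(
>         "Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE");
>   }
>
>   public static int run(String[] argv) {
>     // Note argv[0] (not argv[1]) and String.equals (not ==).
>     if (argv.length == 1 && "-h".equals(argv[0])) {
>       printHelp();
>       return 0;
>     }
>     // ...only now hand argv to the real parser that enforces -i and -o...
>     return 0;
>   }
>
>   public static void main(String[] args) {
>     System.exit(run(args));
>   }
> }
> {code}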



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: HDFS-9181.004.patch

Submitted the wrong patch. This one is correct.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch, 
> HDFS-9181.004.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There may be 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948805#comment-14948805
 ] 

Charlie Helin commented on HDFS-9110:
-

Thanks [~andrew.wang]!

Yes, the import changes were not intended; I will address that. Setting a depth 
of 1 is also a good idea. I will incorporate both suggestions.

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.
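> For illustration, a minimal single-pass sketch with {{Files.walkFileTree}} 
> (the directory argument and per-file action are placeholders, not the patch):
> {code}
> import java.io.IOException;
> import java.nio.file.*;
> import java.nio.file.attribute.BasicFileAttributes;
> import java.util.EnumSet;
>
> public class SinglePassWalk {
>   public static void main(String[] args) throws IOException {
>     // maxDepth = 1 restricts the walk to the directory's direct children.
>     Files.walkFileTree(Paths.get(args[0]),
>         EnumSet.noneOf(FileVisitOption.class), 1,
>         new SimpleFileVisitor<Path>() {
>           @Override
>           public FileVisitResult visitFile(Path file,
>               BasicFileAttributes attrs) throws IOException {
>             // Process each file as it is visited; no intermediate list.
>             System.out.println("visiting " + file);
>             return FileVisitResult.CONTINUE;
>           }
>         });
>   }
> }
> {code}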



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: In Progress  (was: Patch Available)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Attachment: HDFS-9110.07.patch

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: Patch Available  (was: In Progress)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: In Progress  (was: Patch Available)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2015-10-08 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HDFS-9213:
--
Status: Patch Available  (was: Open)

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-9213.v1.patch
>
>
> When using the minicluster with Kerberos, the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-08 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-1172:
---
Attachment: HDFS-1172.012.patch

I updated the patch.
* addressed the failure of TestRecoverStripedFile: fixed to avoid updating 
pendingReplications if the file is striped.
* added a call to {{DataNodeTestUtils#triggerHeartbeat}} to make sure 
{{TestReplication#testNoExtraReplicationWhenBlockReceivedIsLate}} fails without 
the BlockManager fix.
* fixed the checkstyle warnings except for the file-length one.
* fixed the whitespace error.
* the release audit warning is not related to the fix.
* the failures of TestBlockReport and TestCheckpoint are not related to the 
code path of the patch. I could not reproduce them in my environment.
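
As a side note, the heartbeat trigger mentioned above can be forced from a test 
roughly like this (a hedged sketch assuming a MiniDFSCluster setup, not the 
actual test code):
{code}
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;

public class HeartbeatSketch {
  // Push each DataNode's pending state to the NameNode immediately, so the
  // test can assert on replication decisions without waiting for the
  // regular heartbeat interval.
  static void flushHeartbeats(MiniDFSCluster cluster) throws Exception {
    for (DataNode dn : cluster.getDataNodes()) {
      DataNodeTestUtils.triggerHeartbeat(dn);
    }
  }
}
{code}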

> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948944#comment-14948944
 ] 

Hadoop QA commented on HDFS-8630:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 41s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   9m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 44s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 56s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  3s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 22s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 42s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 229m 55s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 32s | Tests passed in 
hadoop-hdfs-client. |
| | | 289m 10s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765573/HDFS-8630.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12861/console |


This message was automatically generated.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operations can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2015-10-08 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HDFS-9213:
--
Attachment: hdfs-9213.v1.patch

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-9213.v1.patch
>
>
> When using the minicluster with Kerberos, the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948949#comment-14948949
 ] 

Xiao Chen commented on HDFS-8164:
-

Thanks Yongjun for the review and commit.
Thanks Chris for reporting this issue and Vinayakumar for the review.

> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2015-10-08 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HDFS-9213:
-

 Summary: Minicluster with Kerberos generates some stacks when 
checking the ports
 Key: HDFS-9213
 URL: https://issues.apache.org/jira/browse/HDFS-9213
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 3.0.0


When using the minicluster with Kerberos, the various checks in 
SecureDataNodeStarter fail because the ports are not fixed.

Stacks like this one:
{quote}
java.lang.RuntimeException: Unable to bind on specified streaming port in 
secure context. Needed 0, got 49670
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
{quote}

There is already a setting to deactivate this type of check for testing; it 
could be used here as well.
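
For illustration, the check-with-escape-hatch pattern could look roughly like 
this (a hedged sketch; the config key mirrors the existing testing setting and 
should be treated as illustrative):
{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;

public class PortCheckSketch {
  static void checkStreamingPort(Configuration conf,
      InetSocketAddress configured, int boundPort) {
    // Skip the strict equality check when the testing flag is set, so a
    // minicluster using ephemeral ports (configured port 0) does not throw.
    boolean ignoreForTesting =
        conf.getBoolean("ignore.secure.ports.for.testing", false);
    if (!ignoreForTesting && configured.getPort() != boundPort) {
      throw new RuntimeException("Unable to bind on specified streaming port"
          + " in secure context. Needed " + configured.getPort() + ", got "
          + boundPort);
    }
  }
}
{code}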




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9140) Discover conf parameters that need no DataNode restart to make changes effective

2015-10-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9140:

Summary: Discover conf parameters that need no DataNode restart to make 
changes effective  (was: Discover conf parameters that need no NN/DN restart to 
make changes effective)

> Discover conf parameters that need no DataNode restart to make changes 
> effective
> 
>
> Key: HDFS-9140
> URL: https://issues.apache.org/jira/browse/HDFS-9140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This JIRA is to find those parameters that need an NN/DN restart in order to 
> make changes effective; the others can be reconfigured via the admin facility 
> API. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9140) Discover conf parameters that need no DataNode restart to make changes effective

2015-10-08 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949042#comment-14949042
 ] 

Xiaobing Zhou commented on HDFS-9140:
-

After evaluation, these DN parameters can be reconfigured on the fly with low 
risk.
{code}
dfs.datanode.balance.max.concurrent.moves
dfs.datanode.lazywriter.interval.sec
dfs.datanode.ram.disk.low.watermark.percent
dfs.datanode.ram.disk.low.watermark.bytes
dfs.datanode.duplicate.replica.deletion
{code}
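
For reference, a hedged sketch of how one of these keys could be wired into the 
existing reconfiguration facility (assuming the {{ReconfigurableBase}} hooks 
keep their current shape; the class and field names here are illustrative):
{code}
import java.util.Arrays;
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.ReconfigurableBase;
import org.apache.hadoop.conf.ReconfigurationException;

public class ReconfigSketch extends ReconfigurableBase {
  private volatile int maxConcurrentMoves = 5;

  public ReconfigSketch(Configuration conf) { super(conf); }

  @Override
  public Collection<String> getReconfigurableProperties() {
    // Only whitelisted keys may be changed without a restart.
    return Arrays.asList("dfs.datanode.balance.max.concurrent.moves");
  }

  @Override
  protected void reconfigurePropertyImpl(String property, String newVal)
      throws ReconfigurationException {
    if ("dfs.datanode.balance.max.concurrent.moves".equals(property)) {
      maxConcurrentMoves = Integer.parseInt(newVal); // applied on the fly
    } else {
      throw new ReconfigurationException(property, newVal, null);
    }
  }
}
{code}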

> Discover conf parameters that need no DataNode restart to make changes 
> effective
> 
>
> Key: HDFS-9140
> URL: https://issues.apache.org/jira/browse/HDFS-9140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This JIRA is to find those parameters that need an NN/DN restart in order to 
> make changes effective; the others can be reconfigured via the admin facility 
> API. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-10-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9085:

Hadoop Flags: Incompatible change,Reviewed
Target Version/s: 3.0.0
 Component/s: security

Hello [~zxu].  Sorry for the delayed response.  I am +1 for a commit to trunk 
only, flagging it as backwards-incompatible, and entering a release note that 
describes the change in output for {{hdfs fetchdt --print}}.

Before we commit, do you think you can add a unit test to check the new 
{{toString}} output?
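
Something along these lines would do (a hedged sketch assuming the 
three-argument (owner, renewer, realUser) constructor; names are illustrative):
{code}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.io.Text;
import org.junit.Test;

public class TestDelegationTokenIdentifierToString {
  @Test
  public void testToStringContainsRenewer() {
    // Build an identifier with a known renewer and check it is visible.
    DelegationTokenIdentifier id = new DelegationTokenIdentifier(
        new Text("owner"), new Text("renewerUser"), new Text("realUser"));
    assertTrue("renewer should appear in toString()",
        id.toString().contains("renewerUser"));
  }
}
{code}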

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}}
>  doesn't show the renewer information. It will be very useful to have renewer 
> information to debug security-related issues. Because the renewer will be 
> filtered by "hadoop.security.auth_to_local", it will be helpful to show the 
> real renewer info after applying the rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8704) Erasure Coding: client fails to write large file when one datanode fails

2015-10-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949071#comment-14949071
 ] 

Zhe Zhang commented on HDFS-8704:
-

I think we can close this JIRA since the problem is addressed by HDFS-9040?

> Erasure Coding: client fails to write large file when one datanode fails
> 
>
> Key: HDFS-8704
> URL: https://issues.apache.org/jira/browse/HDFS-8704
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8704-000.patch, HDFS-8704-HDFS-7285-002.patch, 
> HDFS-8704-HDFS-7285-003.patch, HDFS-8704-HDFS-7285-004.patch, 
> HDFS-8704-HDFS-7285-005.patch, HDFS-8704-HDFS-7285-006.patch, 
> HDFS-8704-HDFS-7285-007.patch, HDFS-8704-HDFS-7285-008.patch
>
>
> I tested the current code on a 5-node cluster using RS(3,2). When a datanode 
> is corrupt, the client succeeds in writing a file smaller than a block group 
> but fails to write a large one. {{TestDFSStripeOutputStreamWithFailure}} only 
> tests files smaller than a block group; this jira will add more test 
> situations.
> A streamer may encounter some bad datanodes when writing blocks allocated to 
> it. When it fails to connect to a datanode or to send a packet, the streamer 
> needs to prepare for the next block. First it removes the packets of the 
> current block from its data queue. If the first packet of the next block is 
> already in the data queue, the streamer will reset its state and start to 
> wait for the next block allocated to it; otherwise it will just wait for the 
> first packet of the next block. The streamer checks periodically whether it 
> has been asked to terminate while waiting.
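> A rough sketch of that wait loop (illustrative names only, not the actual 
> DataStreamer internals):
> {code}
> import java.util.Iterator;
> import java.util.concurrent.LinkedBlockingDeque;
>
> public class StreamerRecoverySketch {
>   static final class Packet {
>     final long blockIndex;
>     Packet(long blockIndex) { this.blockIndex = blockIndex; }
>   }
>
>   private final LinkedBlockingDeque<Packet> dataQueue =
>       new LinkedBlockingDeque<Packet>();
>   private volatile boolean terminated = false;
>   private long currentBlockIndex = 0;
>
>   void prepareForNextBlock() throws InterruptedException {
>     // 1) Drop the failed block's packets from the data queue.
>     Iterator<Packet> it = dataQueue.iterator();
>     while (it.hasNext()) {
>       if (it.next().blockIndex == currentBlockIndex) {
>         it.remove();
>       }
>     }
>     currentBlockIndex++;
>     // 2) Wait until the next block's first packet arrives, checking
>     //    periodically whether the streamer was asked to terminate.
>     while (!terminated) {
>       Packet head = dataQueue.peek();
>       if (head != null && head.blockIndex == currentBlockIndex) {
>         return; // state reset; ready to stream the next block
>       }
>       Thread.sleep(100);
>     }
>   }
> }
> {code}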



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949180#comment-14949180
 ] 

Hadoop QA commented on HDFS-9181:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m  1s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  8s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 42s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 42s | The applied patch generated  2 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 37s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 197m 56s | Tests failed in hadoop-hdfs. |
| | | 249m 36s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765606/HDFS-9181.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12865/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12865/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12865/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12865/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12865/console |


This message was automatically generated.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch, 
> HDFS-9181.004.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There may be 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9184) Logging HDFS operation's caller context into audit logs

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9184:

Attachment: HDFS-9184.001.patch

Thanks all for the input.

Until we have a perfect solution, we consider this approach a feasible option 
for a heavily needed goal. In terms of security it is admittedly imperfect; 
there is a signature field when building the caller context, which may be 
useful for offline analysis and validation.

The v1 patch aims to address the incompatibility concern. We don't think there 
is a "significant compatibility" issue here. Specifically,
* We won't record the caller context unless its config key is explicitly turned 
on by users
* No existing API is changed to implement this feature
* The current layout of the audit log is not changed, as there will only be an 
*optional* kvp at the end of the line.
Just for the record: it would be good to give the audit log itself a 
well-defined structure and format in the future. 

Using {{htrace}}, which depends on 100% sampling across many spans, is totally 
different from this approach, so this patch does not adopt it. If performance 
is really a concern, I don't expect {{htrace}} to do better.
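
To make the proposal concrete, here is a minimal sketch of the threadlocal 
holder (the {{CallerContext}} class below is illustrative, not the code in the 
patch):
{code}
public final class CallerContext {
  private static final ThreadLocal<CallerContext> CURRENT =
      new ThreadLocal<CallerContext>();

  private final String context;   // e.g. a Hive query id or an MR job id
  private final byte[] signature; // optional; validated offline, if at all

  public CallerContext(String context, byte[] signature) {
    this.context = context;
    this.signature = signature;
  }

  public String getContext() { return context; }
  public byte[] getSignature() { return signature; }

  // Client side: set before issuing RPCs so it rides in the RPC header.
  public static void setCurrent(CallerContext ctx) { CURRENT.set(ctx); }

  // Server side: the RPC handler restores it; the audit logger reads it.
  public static CallerContext getCurrent() { return CURRENT.get(); }
}
{code}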

> Logging HDFS operation's caller context into audit logs
> ---
>
> Key: HDFS-9184
> URL: https://issues.apache.org/jira/browse/HDFS-9184
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9184.000.patch, HDFS-9184.001.patch
>
>
> For a given HDFS operation (e.g. delete file), it's very helpful to track 
> which upper-level job issued it. The upper-level callers may be specific 
> Oozie tasks, MR jobs, and Hive queries. One scenario is that the namenode 
> (NN) is abused/spammed; the operator may want to know immediately which MR 
> job is to blame so that she can kill it. To this end, the caller context 
> contains at least the application-dependent "tracking id".
> There are several existing techniques that may be related to this problem.
> 1. Currently the HDFS audit log tracks the user of the operation, which is 
> obviously not enough. It's common for the same user to issue multiple jobs 
> at the same time. Even for a single top-level task, tracking back to a 
> specific caller in a chain of operations of the whole workflow (e.g. Oozie -> 
> Hive -> Yarn) is hard, if not impossible.
> 2. HDFS integrated {{htrace}} support for providing tracing information 
> across multiple layers. Spans are created in many places, interconnected 
> like a tree structure, which relies on offline analysis across RPC 
> boundaries. 
> For this use case, {{htrace}} has to be enabled at 100% sampling rate which 
> introduces significant overhead. Moreover, passing additional information 
> (via annotations) other than span id from root of the tree to leaf is a 
> significant additional work.
> 3. In [HDFS-4680 | https://issues.apache.org/jira/browse/HDFS-4680], there 
> is some related discussion on this topic. The final patch implemented the 
> tracking id as a part of delegation token. This protects the tracking 
> information from being changed or impersonated. However, 
> Kerberos-authenticated connections and insecure connections don't have tokens. 
> [HADOOP-8779] proposes to use tokens in all the scenarios, but that might 
> mean changes to several upstream projects and is a major change in their 
> security implementation.
> We propose another approach to address this problem. We also treat HDFS audit 
> log as a good place for after-the-fact root cause analysis. We propose to put 
> the caller id (e.g. Hive query id) in threadlocals. Specifically, on the 
> client side the threadlocal object is passed to the NN as part of the RPC 
> header (optional), while on the server side the NN retrieves it from the 
> header and puts it into the {{Handler}}'s threadlocals. Finally, in 
> {{FSNamesystem}}, the HDFS audit logger will record the 
> caller context for each operation. In this way, the existing code is not 
> affected.
> It is still challenging to keep a "lying" client from abusing the caller 
> context. Our proposal is to add a {{signature}} field to the caller context. 
> The client may choose to provide its signature along with the caller id. The 
> operator may need to validate the signature at the time of offline analysis. 
> The NN is not responsible for validating the signature online.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-10-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9079:

Attachment: HDFS-9079.01.patch

Rebased the patch on top of HDFS-9040 and made it more complete. It's still a 
WIP. The main logic is:
# As described [above | 
https://issues.apache.org/jira/browse/HDFS-9079?focusedCommentId=14905503=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14905503],
 redesigned the coordinator to be an event-processing daemon.
# Limited the lifespan of {{StripedDataStreamer}} to a single block, to 
simplify the logic.
# Preallocated GS by groups, so GS bumping can be processed locally by the 
coordinator.

I also modified {{TestWriteStripedFileWithFailure}} to be a minimal 
error-handling test -- it writes a small file (< 1 block), and there is 1 
failure during the write. The patch passes the test, and the sequence of events 
is as expected. I'm now working on more complex tests, including 
{{TestDFSStripedOutputStreamWithFailure}}.
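
The preallocation itself boils down to something like this (a hedged sketch; 
{{GenerationStampPool}} and its fields are illustrative, not the patch):
{code}
public class GenerationStampPool {
  private final long baseGS;    // GS assigned when the block group is created
  private final int reserved;   // e.g. NUM_PARITY_BLOCKS
  private int used = 0;

  GenerationStampPool(long baseGS, int reserved) {
    this.baseGS = baseGS;
    this.reserved = reserved;
  }

  /**
   * Returns the next preallocated GS without a NameNode round trip, or -1
   * once the budget is exhausted (more than `reserved` failures means the
   * write should not try to recover further anyway).
   */
  synchronized long nextGS() {
    return used < reserved ? baseGS + (++used) : -1;
  }
}
{code}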

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch, HDFS-9079.01.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above, we can preallocate GSs when the NN creates a new 
> striped block group ({{FSN#createNewBlock}}). For each new striped block group 
> we can reserve {{NUM_PARITY_BLOCKS}} GSs. Then steps 1~3 in the above sequence 
> can be skipped. If more than {{NUM_PARITY_BLOCKS}} errors have happened, we 
> shouldn't try to recover further anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9216) Fix RAT licensing issues

2015-10-08 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9216:
-
Description: 
The following files in HDFS have license issues:
{{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h}}

  was:
The following files in HDFS have license issues:
{{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h}}
{{hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java}}


> Fix RAT licensing issues
> 
>
> Key: HDFS-9216
> URL: https://issues.apache.org/jira/browse/HDFS-9216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
>
> The following files in HDFS have license issues:
> {{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9216) Fix RAT licensing issues

2015-10-08 Thread Eric Payne (JIRA)
Eric Payne created HDFS-9216:


 Summary: Fix RAT licensing issues
 Key: HDFS-9216
 URL: https://issues.apache.org/jira/browse/HDFS-9216
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Eric Payne
Assignee: Eric Payne
Priority: Minor


The following files in HDFS have license issues:
{{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h}}
{{hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9215) Suppress the RAT warnings in hdfs-native-client module

2015-10-08 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949511#comment-14949511
 ] 

Eric Payne commented on HDFS-9215:
--

[~wheat9] and [~andrew.wang], I'm pretty sure we also have to change 
LICENSE.txt.

> Suppress the RAT warnings in hdfs-native-client module
> --
>
> Key: HDFS-9215
> URL: https://issues.apache.org/jira/browse/HDFS-9215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-9215.000.patch
>
>
> HDFS-9170 moved the native client implementation to the hdfs-native-client 
> module. This is a follow-up jira to carry over the RAT warning suppression 
> from the original hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9215) Suppress the RAT warnings in hdfs-native-client module

2015-10-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9215:
-
Attachment: HDFS-9215.001.patch

> Suppress the RAT warnings in hdfs-native-client module
> --
>
> Key: HDFS-9215
> URL: https://issues.apache.org/jira/browse/HDFS-9215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-9215.000.patch, HDFS-9215.001.patch
>
>
> HDFS-9170 moved the native client implementation to the hdfs-native-client 
> module. This is a follow-up jira to carry over the RAT warning suppression 
> from the original hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949569#comment-14949569
 ] 

Hudson commented on HDFS-9204:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #509 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/509/])
HDFS-9204. DatanodeDescriptor#PendingReplicationWithoutTargets is (jing9: rev 
118a35bc2eabe3918b4797a1b626e9a39d77754b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949618#comment-14949618
 ] 

Hudson commented on HDFS-9204:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1237 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1237/])
HDFS-9204. DatanodeDescriptor#PendingReplicationWithoutTargets is (jing9: rev 
118a35bc2eabe3918b4797a1b626e9a39d77754b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949316#comment-14949316
 ] 

Hadoop QA commented on HDFS-9110:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 48s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 45s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 22s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 40s | The applied patch generated  
12 new checkstyle issues (total was 2, now 13). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 39s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 54s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 42s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 230m 37s | Tests failed in hadoop-hdfs. |
| | | 283m 22s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestFSNamesystem |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765618/HDFS-9110.07.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12867/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12867/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12867/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12867/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12867/console |


This message was automatically generated.

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, this 
> number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and then processed. HDFS-8480 could be 
> further enhanced by employing a single iteration, without an intermediary 
> collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9216) Fix RAT licensing issues

2015-10-08 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne resolved HDFS-9216.
--
Resolution: Duplicate

> Fix RAT licensing issues
> 
>
> Key: HDFS-9216
> URL: https://issues.apache.org/jira/browse/HDFS-9216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
>
> The following files in HDFS have license issues:
> {{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949516#comment-14949516
 ] 

Mingliang Liu commented on HDFS-9207:
-

Hi [~wheat9], I'm not working on this now, so don't get blocked; I'll assign it 
to you. Thank you.

> Move the implementation to the hdfs-native-client module
> 
>
> Key: HDFS-9207
> URL: https://issues.apache.org/jira/browse/HDFS-9207
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> The implementation of libhdfspp should be moved to the new hdfs-native-client 
> module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9145) Tracking methods that hold FSNamesytemLock for too long

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9145:

Status: Open  (was: Patch Available)

> Tracking methods that hold FSNamesytemLock for too long
> ---
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch
>
>
> It would be helpful if we had a way to track (or at least log a message) when 
> some operation holds the FSNamesystem lock for a long time.
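> A minimal sketch of the idea (illustrative only; the threshold, logger, and 
> names are placeholders, not the patch):
> {code}
> import java.util.concurrent.locks.ReentrantReadWriteLock;
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
>
> public class TimedFSLock {
>   private static final Log LOG = LogFactory.getLog(TimedFSLock.class);
>   private static final long THRESHOLD_MS = 1000;
>
>   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>   private long writeLockHeldAt;
>
>   public void writeLock() {
>     lock.writeLock().lock();
>     writeLockHeldAt = System.currentTimeMillis();
>   }
>
>   public void writeUnlock(String opName) {
>     // Measure before releasing so the timestamp cannot be overwritten.
>     long heldMs = System.currentTimeMillis() - writeLockHeldAt;
>     lock.writeLock().unlock();
>     if (heldMs > THRESHOLD_MS) {
>       LOG.warn("FSNamesystem write lock held by " + opName
>           + " for " + heldMs + " ms");
>     }
>   }
> }
> {code}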



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9145) Tracking methods that hold FSNamesytemLock for too long

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9145:

Status: Patch Available  (was: Open)

> Tracking methods that hold FSNamesytemLock for too long
> ---
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch
>
>
> It would be helpful if we had a way to track (or at least log a message) when 
> some operation holds the FSNamesystem lock for a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949415#comment-14949415
 ] 

Hadoop QA commented on HDFS-9213:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 46s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 39s | The applied patch generated  3 
new checkstyle issues (total was 7, now 10). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 47s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 34s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 238m 17s | Tests failed in hadoop-hdfs. |
| | | 290m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestDFSClientRetries |
| Timed out tests | org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765623/hdfs-9213.v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1107bd3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12868/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12868/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12868/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12868/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12868/console |


This message was automatically generated.

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-9213.v1.patch
>
>
> When using the minicluster with Kerberos, the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one appear:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well.
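
To illustrate the suggested direction, a hedged sketch of such a guard; the 
method shape and the testing flag are assumptions for illustration, not the 
actual SecureDataNodeStarter code:

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class SecurePortCheck {
  /**
   * Hypothetical version of the check: skip it when an ephemeral port (0)
   * was requested, or when a testing flag disables it.
   */
  static void checkStreamingPort(InetSocketAddress requested,
      ServerSocket bound, boolean ignoreForTesting) {
    if (ignoreForTesting || requested.getPort() == 0) {
      return; // an ephemeral port was requested; nothing to verify
    }
    if (bound.getLocalPort() != requested.getPort()) {
      throw new RuntimeException("Unable to bind on specified streaming port"
          + " in secure context. Needed " + requested.getPort() + ", got "
          + bound.getLocalPort());
    }
  }
}
{code}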



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-10-08 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949472#comment-14949472
 ] 

zhihai xu commented on HDFS-9085:
-

Thanks for the good suggestion, [~cnauroth]! Yes, I uploaded a new patch, 
HDFS-9085.002.patch, which adds a unit test to verify {{toString}}.

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch, HDFS-9085.002.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}}
>  does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer is filtered by "hadoop.security.auth_to_local", it is helpful to show 
> the real renewer info after the rules have been applied.
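
For illustration, a stand-alone sketch of the shape of such a {{toString}}; 
the field names and output format below are assumptions, not the contents of 
the attached patches:

{code}
/** Hypothetical stand-in for DelegationTokenIdentifier; illustrative only. */
class TokenIdentSketch {
  private final String owner;
  private final String renewer; // renewer after auth_to_local mapping
  private final int sequenceNumber;

  TokenIdentSketch(String owner, String renewer, int sequenceNumber) {
    this.owner = owner;
    this.renewer = renewer;
    this.sequenceNumber = sequenceNumber;
  }

  @Override
  public String toString() {
    // Include the real (post-rule) renewer alongside the owner.
    return "HDFS_DELEGATION_TOKEN token " + sequenceNumber
        + " for " + owner + " with renewer " + renewer;
  }
}
{code}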



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949481#comment-14949481
 ] 

Haohui Mai commented on HDFS-9211:
--

Thanks for reporting! I just opened HDFS-9215 and submitted a patch.

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0
>
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was left incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9215) Suppress the RAT warnings in hdfs-native-client module

2015-10-08 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949514#comment-14949514
 ] 

Eric Payne commented on HDFS-9215:
--

For example, something like the following should be added to LICENSE.txt:
{noformat}
For 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h

/*  $NetBSD: tree.h,v 1.8 2004/03/28 19:38:30 provos Exp $  */
/*  $OpenBSD: tree.h,v 1.7 2002/10/17 21:51:54 art Exp $*/
/* $FreeBSD: src/sys/sys/tree.h,v 1.9.4.1 2011/09/23 00:51:37 kensmith Exp $ */

/*-
 * Copyright 2002 Niels Provos 
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
...
{noformat}

> Suppress the RAT warnings in hdfs-native-client module
> --
>
> Key: HDFS-9215
> URL: https://issues.apache.org/jira/browse/HDFS-9215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-9215.000.patch
>
>
> HDFS-9170 moved the native client implementation to the hdfs-native-client 
> module. This is a follow-up jira to suppress the RAT warnings that were 
> previously suppressed in the original hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9142) Namenode Http address is not configured correctly for federated cluster in MiniDFSCluster

2015-10-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949559#comment-14949559
 ] 

Hadoop QA commented on HDFS-9142:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   7m 57s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 28s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 189m 49s | Tests failed in hadoop-hdfs. |
| | | 213m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765652/HDFS-9142.v7.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 0841940 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12870/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12870/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12870/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12870/console |


This message was automatically generated.

> Namenode Http address is not configured correctly for federated cluster in 
> MiniDFSCluster
> -
>
> Key: HDFS-9142
> URL: https://issues.apache.org/jira/browse/HDFS-9142
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
> Attachments: HDFS-9142.v1.patch, HDFS-9142.v2.patch, 
> HDFS-9142.v3.patch, HDFS-9142.v4.patch, HDFS-9142.v5.patch, 
> HDFS-9142.v6.patch, HDFS-9142.v7.patch
>
>
> When setting up simpleHAFederatedTopology in MiniDFSCluster, each Namenode 
> should have its own configuration object, and the configuration should have 
> "dfs.namenode.http-address.<nameservice>.<namenode>" set up correctly for 
> every <nameservice, namenode> pair.
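
As a hedged illustration of the keys involved, using made-up nameservice and 
namenode ids ("ns1", "nn1", "nn2") and addresses:

{code}
import org.apache.hadoop.conf.Configuration;

public class FederatedConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // One key per <nameservice, namenode> pair; ids are hypothetical.
    conf.set("dfs.nameservices", "ns1");
    conf.set("dfs.ha.namenodes.ns1", "nn1,nn2");
    conf.set("dfs.namenode.http-address.ns1.nn1", "127.0.0.1:50070");
    conf.set("dfs.namenode.http-address.ns1.nn2", "127.0.0.1:50071");
    System.out.println(conf.get("dfs.namenode.http-address.ns1.nn1"));
  }
}
{code}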



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9217) Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9217:

Status: Patch Available  (was: Open)

> Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module
> 
>
> Key: HDFS-9217
> URL: https://issues.apache.org/jira/browse/HDFS-9217
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-9217.000.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{findbugsExcludeFile.xml}} file was broken and findbugs complains as 
> follows:
> {code}
> [INFO] 
> 
> [INFO] Building Apache Hadoop HDFS Client 3.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO]
> [INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
> hadoop-hdfs-client ---
> [INFO] Fork Value is true
>  [java] The following errors occurred during analysis:
>  [java]   Unable to read filter: 
> /Users/mliu/Workspace/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/findbugsExcludeFile.xml
>  : The value of attribute "name" associated with an element type "Class" must 
> not contain the '<' character.
>  [java] java.io.IOException: The value of attribute "name" associated 
> with an element type "Class" must not contain the '<' character.
>  [java]   At edu.umd.cs.findbugs.filter.Filter.<init>(Filter.java:134)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.configureFilter(FindBugs.java:516)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.addFilter(FindBugs2.java:374)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.configureFilters(FindBugs2.java:521)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.setUserPreferences(FindBugs2.java:475)
>  [java]   At 
> edu.umd.cs.findbugs.TextUICommandLine.configureEngine(TextUICommandLine.java:685)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.processCommandLine(FindBugs.java:361)
>  [java]   At edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1188)
>  [java] Warnings generated: 748
> [INFO] Done FindBugs Analysis
> {code}
> The reason is that the lines newly added in [HDFS-9182] contain a “ (curly 
> quote) character, which should be a straight ".
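
As a small, hedged sketch of how this class of problem can be caught (a 
generic scanner for curly quotes; not part of the attached patch or of the 
Hadoop build):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SmartQuoteCheck {
  public static void main(String[] args) throws IOException {
    String text = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");
    for (char c : new char[] {'\u201C', '\u201D'}) { // left/right curly quotes
      int idx = text.indexOf(c);
      if (idx >= 0) {
        System.out.println("Found curly quote at offset " + idx
            + "; replace it with a straight \" character.");
      }
    }
  }
}
{code}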



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9217) Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9217:

Component/s: build

> Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module
> 
>
> Key: HDFS-9217
> URL: https://issues.apache.org/jira/browse/HDFS-9217
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-9217.000.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{findbugsExcludeFile.xml}} file was broken and findbugs complains as 
> follows:
> {code}
> [INFO] 
> 
> [INFO] Building Apache Hadoop HDFS Client 3.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO]
> [INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
> hadoop-hdfs-client ---
> [INFO] Fork Value is true
>  [java] The following errors occurred during analysis:
>  [java]   Unable to read filter: 
> /Users/mliu/Workspace/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/findbugsExcludeFile.xml
>  : The value of attribute "name" associated with an element type "Class" must 
> not contain the '<' character.
>  [java] java.io.IOException: The value of attribute "name" associated 
> with an element type "Class" must not contain the '<' character.
>  [java]   At edu.umd.cs.findbugs.filter.Filter.<init>(Filter.java:134)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.configureFilter(FindBugs.java:516)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.addFilter(FindBugs2.java:374)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.configureFilters(FindBugs2.java:521)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.setUserPreferences(FindBugs2.java:475)
>  [java]   At 
> edu.umd.cs.findbugs.TextUICommandLine.configureEngine(TextUICommandLine.java:685)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.processCommandLine(FindBugs.java:361)
>  [java]   At edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1188)
>  [java] Warnings generated: 748
> [INFO] Done FindBugs Analysis
> {code}
> The reason is that the lines newly added in [HDFS-9182] contain a “ (curly 
> quote) character, which should be a straight ".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949602#comment-14949602
 ] 

Hudson commented on HDFS-9204:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #500 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/500/])
HDFS-9204. DatanodeDescriptor#PendingReplicationWithoutTargets is (jing9: rev 
118a35bc2eabe3918b4797a1b626e9a39d77754b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java


> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Fix For: 3.0.0
>
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Attachment: HDFS-9110.08.patch

Fixed checkstyle issues.

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch, HDFS-9110.08.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and processed after that. 
> HDFS-8480 could be further enhanced by employing a single iteration, without 
> creating an intermediary collection of filenames, by using Files.walkFileTree.
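
For reference, a minimal sketch of the single-pass traversal the description 
suggests, using {{Files.walkFileTree}}; the per-file action is a placeholder:

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WalkExample {
  public static void main(String[] args) throws IOException {
    // Process each file as it is visited instead of first collecting
    // every name into an intermediate list.
    Files.walkFileTree(Paths.get(args[0]), new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
        System.out.println(file); // placeholder for the real per-file work
        return FileVisitResult.CONTINUE;
      }
    });
  }
}
{code}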



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-10-08 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HDFS-9085:

Attachment: HDFS-9085.002.patch

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch, HDFS-9085.002.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}}
>  does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer is filtered by "hadoop.security.auth_to_local", it is helpful to show 
> the real renewer info after the rules have been applied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949485#comment-14949485
 ] 

Haohui Mai commented on HDFS-9207:
--

[~liuml07], are you actively working on this? I'd like to get this landed ASAP 
as it blocks development on the branch.

Can you please assign it to me if you don't have time?

> Move the implementation to the hdfs-native-client module
> 
>
> Key: HDFS-9207
> URL: https://issues.apache.org/jira/browse/HDFS-9207
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>
> The implementation of libhdfspp should be moved to the new hdfs-native-client 
> module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9217) Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module

2015-10-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9217:

Attachment: HDFS-9217.000.patch

> Fix broken findbugsExcludeFile.xml for hadoop-hdfs-client module
> 
>
> Key: HDFS-9217
> URL: https://issues.apache.org/jira/browse/HDFS-9217
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-9217.000.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{findbugsExcludeFile.xml}} file was broken and findbugs complains as 
> follows:
> {code}
> [INFO] 
> 
> [INFO] Building Apache Hadoop HDFS Client 3.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO]
> [INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
> hadoop-hdfs-client ---
> [INFO] Fork Value is true
>  [java] The following errors occurred during analysis:
>  [java]   Unable to read filter: 
> /Users/mliu/Workspace/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/findbugsExcludeFile.xml
>  : The value of attribute "name" associated with an element type "Class" must 
> not contain the '<' character.
>  [java] java.io.IOException: The value of attribute "name" associated 
> with an element type "Class" must not contain the '<' character.
>  [java]   At edu.umd.cs.findbugs.filter.Filter.<init>(Filter.java:134)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.configureFilter(FindBugs.java:516)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.addFilter(FindBugs2.java:374)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.configureFilters(FindBugs2.java:521)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs2.setUserPreferences(FindBugs2.java:475)
>  [java]   At 
> edu.umd.cs.findbugs.TextUICommandLine.configureEngine(TextUICommandLine.java:685)
>  [java]   At 
> edu.umd.cs.findbugs.FindBugs.processCommandLine(FindBugs.java:361)
>  [java]   At edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1188)
>  [java] Warnings generated: 748
> [INFO] Done FindBugs Analysis
> {code}
> The reason is that the lines newly added in [HDFS-9182] contain a “ (curly 
> quote) character, which should be a straight ".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949302#comment-14949302
 ] 

Jing Zhao commented on HDFS-9070:
-

Thanks for updating the patch, [~demongaorui]. The new patch looks good to me, 
but I think we currently do not need to support showRacks or showReplicaDetails 
for an under-construction (UC) block. +1 after addressing the comments.

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the file being written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-08 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: Open  (was: Patch Available)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, 
> HDFS-9110.05.patch, HDFS-9110.06.patch, HDFS-9110.07.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined, put into a collection, and processed after that. 
> HDFS-8480 could be further enhanced by employing a single iteration, without 
> creating an intermediary collection of filenames, by using Files.walkFileTree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9214) Reconfigure DN concurrent move on the fly

2015-10-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9214:

Affects Version/s: 2.7.0

> Reconfigure DN concurrent move on the fly
> -
>
> Key: HDFS-9214
> URL: https://issues.apache.org/jira/browse/HDFS-9214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is to reconfigure
> {code}
> dfs.datanode.balance.max.concurrent.moves
> {code} without restarting the DN.
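
A hedged, stand-alone sketch of the reconfiguration pattern this implies; it 
deliberately stubs out the DataNode and is not the actual reconfiguration API:

{code}
/** Minimal stand-in for a reconfigurable property; illustrative only. */
class ReconfigurableMoverSketch {
  static final String KEY = "dfs.datanode.balance.max.concurrent.moves";
  private volatile int maxConcurrentMoves = 5;

  /** Apply a new value at runtime instead of requiring a DN restart. */
  void reconfigureProperty(String property, String newVal) {
    if (KEY.equals(property)) {
      // In the DN this would also resize the block-mover thread pool.
      maxConcurrentMoves = Integer.parseInt(newVal);
    }
  }

  int getMaxConcurrentMoves() {
    return maxConcurrentMoves;
  }
}
{code}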



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9215) Suppress the RAT warnings in hdfs-native-client module

2015-10-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949505#comment-14949505
 ] 

Andrew Wang commented on HDFS-9215:
---

+1 LGTM thanks Haohui

> Suppress the RAT warnings in hdfs-native-client module
> --
>
> Key: HDFS-9215
> URL: https://issues.apache.org/jira/browse/HDFS-9215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-9215.000.patch
>
>
> HDFS-9170 moved the native client implementation to the hdfs-native-client 
> module. This is a follow-up jira to suppress the RAT warnings that were 
> previously suppressed in the original hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

