[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268138#comment-15268138
 ] 

Hadoop QA commented on HDFS-10324:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s 
{color} | {color:red} root: patch generated 3 new + 11 unchanged - 3 fixed = 14 
total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 10s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 15s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-02 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268124#comment-15268124
 ] 

Li Bo commented on HDFS-8449:
-

Updated the patch (007) to fix some minor problems.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.
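
To illustrate the kind of counters involved, here is a minimal sketch using 
Hadoop's metrics2 annotations; the class name, metric names, and registration 
wiring are illustrative assumptions, not what the attached patches actually do.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative sketch only; names here are hypothetical, not from the patch.
@Metrics(about = "EC recovery task metrics", context = "dfs")
public class ECWorkerMetricsSketch {
  @Metric("Total EC recovery tasks") MutableCounterLong totalTasks;
  @Metric("Failed EC recovery tasks") MutableCounterLong failedTasks;
  @Metric("Successful EC recovery tasks") MutableCounterLong successfulTasks;

  public static ECWorkerMetricsSketch create() {
    // Registering the source lets the metrics system inject the counters.
    return DefaultMetricsSystem.instance()
        .register("ECWorkerMetricsSketch", null, new ECWorkerMetricsSketch());
  }

  void taskFinished(boolean succeeded) {
    totalTasks.incr();
    if (succeeded) {
      successfulTasks.incr();
    } else {
      failedTasks.incr();
    }
  }
}
{code}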



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-9758) libhdfs++: Implement Python bindings

2016-05-02 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss reassigned HDFS-9758:


Assignee: Tibor Kiss

> libhdfs++: Implement Python bindings
> 
>
> Key: HDFS-9758
> URL: https://issues.apache.org/jira/browse/HDFS-9758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Tibor Kiss
> Attachments: hdfs_posix.py
>
>
> It'd be really useful to have bindings for various scripting languages. 
> Python would be a good start because of its popularity and how easy it is to 
> interact with shared libraries using the ctypes module. I think bindings for 
> the V8 engine that nodeJS uses would be a close second in terms of expanding 
> the potential user base.
> It is probably worth starting with just a synchronous API and building from 
> there, to avoid interactions with Python's garbage collector until the 
> bindings prove to be solid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10358) Refactor EncryptionZone and EncryptionZoneInt

2016-05-02 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-10358:


 Summary: Refactor EncryptionZone and EncryptionZoneInt
 Key: HDFS-10358
 URL: https://issues.apache.org/jira/browse/HDFS-10358
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.7.1
Reporter: Lin Yiqun
Assignee: Lin Yiqun


Class {{EncryptionZone}} can reuse most of the fields in 
{{EncryptionZoneInt}}. We can refactor the two classes to take advantage of 
that.
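
A hedged sketch of one possible direction, composition instead of field 
duplication; these are stand-in classes with assumed fields, not the actual 
HDFS code:

{code:java}
// Stand-in classes with assumed fields; not the actual HDFS-10358 change.
class EncryptionZoneInt {
  private final long inodeId;
  private final String keyName;

  EncryptionZoneInt(long inodeId, String keyName) {
    this.inodeId = inodeId;
    this.keyName = keyName;
  }

  long getINodeId() { return inodeId; }
  String getKeyName() { return keyName; }
}

// The public class reuses the internal one's fields instead of copying them.
class EncryptionZone {
  private final EncryptionZoneInt ezi; // shared fields live in one place
  private final String path;           // only the public-facing extra

  EncryptionZone(EncryptionZoneInt ezi, String path) {
    this.ezi = ezi;
    this.path = path;
  }

  long getId() { return ezi.getINodeId(); }
  String getKeyName() { return ezi.getKeyName(); }
  String getPath() { return path; }
}
{code}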



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268073#comment-15268073
 ] 

Hadoop QA commented on HDFS-9902:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 18s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 201m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | 
hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7

2016-05-02 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268067#comment-15268067
 ] 

He Xiaoqiao commented on HDFS-8718:
---

I met the same problem after upgrading the cluster to 2.7.1, and I have never 
configured an ARCHIVE storage policy.

{code:xml}
2016-04-19 10:20:48,083 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more 
information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
...
2016-04-19 10:21:17,184 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
place enough replicas, still in need of 7 to reach 10 
(unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
newBlock=false) For more information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-04-19 10:21:17,184 WARN 
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
replicas: expected size is 7 but only 0 storage types can be selected 
(replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK, 
DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-04-19 10:21:17,184 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
place enough replicas, still in need of 7 to reach 10 
(unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
newBlock=false) All required storage types are unavailable:  
unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
{code}

The NN log above covers ReplicationMonitor processing one 
{{ReplicationWork}}. It shows that {{ReplicationWork}} cannot choose any 
proper target whose StorageType is DISK, even after traversing all DNs in the 
cluster. {{DISK}} is then added to {{unavailableStorages}}; in the next loop 
{{ARCHIVE}} is added to {{unavailableStorages}} as well, because there is no 
ARCHIVE storage, and finally a NotEnoughReplicasException is thrown.

The core question is *WHY it can NOT choose any proper datanode as a target 
in {{ReplicationWork}}, even though there are thousands of DNs in the 
cluster*.
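
To make the fallback sequence easier to follow, here is a simplified, 
self-contained model of the retry behavior visible in the log; it is not the 
real {{BlockPlacementPolicy}} code, just its control flow under the HOT 
policy with no ARCHIVE storage configured:

{code:java}
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;

// Hypothetical model of the fallback loop seen in the log above; the real
// logic lives in BlockPlacementPolicyDefault and BlockStoragePolicy.
public class PlacementRetrySketch {
  enum StorageType { DISK, ARCHIVE }

  // Stand-in for chooseTarget(): in the reporter's cluster this fails for
  // DISK (the open question) and necessarily fails for ARCHIVE (none exists).
  static boolean tryChooseTargets(StorageType required) {
    return false;
  }

  public static void main(String[] args) {
    EnumSet<StorageType> unavailable = EnumSet.noneOf(StorageType.class);
    List<StorageType> fallbackOrder =
        Arrays.asList(StorageType.DISK, StorageType.ARCHIVE);
    for (StorageType required : fallbackOrder) {
      if (tryChooseTargets(required)) {
        return; // enough targets chosen
      }
      unavailable.add(required); // first pass adds DISK, second adds ARCHIVE
    }
    // Mirrors the final "All required storage types are unavailable" warning.
    throw new IllegalStateException(
        "NotEnoughReplicas: unavailableStorages=" + unavailable);
  }
}
{code}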

> Block replicating cannot work after upgrading to 2.7 
> -
>
> Key: HDFS-8718
> URL: https://issues.apache.org/jira/browse/HDFS-8718
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Bing Jiang
>
> Decommission a datanode from Hadoop, and HDFS calculates the correct number 
> of blocks to be replicated, as shown in the web UI.
> {code}
> Decomissioning
> Node  Last contactUnder replicated blocks Blocks with no live replicas
> Under Replicated Blocks 
> In files under construction
> TS-BHTEST-03:50010 (172.22.49.3:50010)25641   0   0
> {code}
> From the NN's log, the block replication work cannot proceed due to an 
> inconsistent expected storage type.
> {code}
> Node /default/rack_02/172.22.49.5:50010 [
>   Storage 
> [DISK]DS-3915533b-4ae4-4806-bf83caf1446f1e2f:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-3e54c331-3eaf-4447-b5e4-9bf91bc71b17:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-d44fa611-aa73-4415-a2de-7e73c9c5ea68:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-cebbf410-06a0-4171-a9bd-d0db55dad6d3:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-4c50b1c7-eaad-4858-b476-99dec17d68b5:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-f6cf9123-4125-4234-8e21-34b12170e576:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> [DISK]DS-7601b634-1761-45cc-9ffd-73ee8687c2a7:NORMAL:172.22.49.5:50010 is not 
> chosen since storage types do not match, where the required storage type is 
> ARCHIVE.
>   Storage 
> 

[jira] [Updated] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-02 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8449:

Attachment: HDFS-8449-007.patch

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10344) DistributedFileSystem#getTrashRoots should skip encryption zone that does not have .Trash

2016-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268036#comment-15268036
 ] 

Hudson commented on HDFS-10344:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9704 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9704/])
HDFS-10344. DistributedFileSystem#getTrashRoots should skip encryption (xyao: 
rev 45a753ccf79d334513c7bc8f2b81c89a4697075d)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java


> DistributedFileSystem#getTrashRoots should skip encryption zone that does not 
> have .Trash
> -
>
> Key: HDFS-10344
> URL: https://issues.apache.org/jira/browse/HDFS-10344
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-10344.00.patch
>
>
> HDFS-8831 added trash support for encryption zones. For an encryption zone 
> that does not have .Trash created yet, DistributedFileSystem#getTrashRoots 
> should skip it rather than call listStatus() and break with a 
> FileNotFoundException. I will post a patch for the fix shortly. 
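
A minimal sketch of the skip, assuming we iterate over the zones' root paths; 
it mirrors the described behavior, not necessarily the committed patch:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: list each zone's trash root only if it exists, so one zone
// without .Trash no longer aborts the whole enumeration.
public class TrashRootsSketch {
  static Collection<FileStatus> collectEzTrashRoots(
      FileSystem fs, Iterable<Path> ezPaths) throws IOException {
    Collection<FileStatus> ret = new ArrayList<>();
    for (Path ez : ezPaths) {
      Path ezTrash = new Path(ez, FileSystem.TRASH_PREFIX);
      try {
        for (FileStatus userTrash : fs.listStatus(ezTrash)) {
          ret.add(userTrash); // one trash root per user under .Trash
        }
      } catch (FileNotFoundException e) {
        // Zone has no .Trash yet: skip it rather than fail.
      }
    }
    return ret;
  }
}
{code}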



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6520) hdfs fsck -move passes invalid length value when creating BlockReader

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-6520:

Labels: supportability  (was: )

> hdfs fsck -move passes invalid length value when creating BlockReader
> -
>
> Key: HDFS-6520
> URL: https://issues.apache.org/jira/browse/HDFS-6520
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Shengjun Xin
>Assignee: Xiao Chen
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-6520-partial.001.patch, HDFS-6520.01.patch, 
> HDFS-6520.02.patch, john.test.patch
>
>
> I met an error when running fsck -move.
> My steps are as follows:
> 1. Set up a pseudo cluster
> 2. Copy a file to hdfs
> 3. Corrupt a block of the file
> 4. Run fsck to check:
> {code}
> Connecting to namenode via http://localhost:50070
> FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at 
> Wed Jun 11 15:58:38 CST 2014
> .
> /user/hadoop/fsck-test: CORRUPT blockpool 
> BP-654596295-10.37.7.84-1402466764642 block blk_1073741825
> /user/hadoop/fsck-test: MISSING 1 blocks of total size 1048576 B.Status: 
> CORRUPT
>  Total size:4104304 B
>  Total dirs:1
>  Total files:   1
>  Total symlinks:0
>  Total blocks (validated):  4 (avg. block size 1026076 B)
>   
>   CORRUPT FILES:1
>   MISSING BLOCKS:   1
>   MISSING SIZE: 1048576 B
>   CORRUPT BLOCKS:   1
>   
>  Minimally replicated blocks:   3 (75.0 %)
>  Over-replicated blocks:0 (0.0 %)
>  Under-replicated blocks:   0 (0.0 %)
>  Mis-replicated blocks: 0 (0.0 %)
>  Default replication factor:1
>  Average block replication: 0.75
>  Corrupt blocks:1
>  Missing replicas:  0 (0.0 %)
>  Number of data-nodes:  1
>  Number of racks:   1
> FSCK ended at Wed Jun 11 15:58:38 CST 2014 in 1 milliseconds
> The filesystem under path '/user/hadoop' is CORRUPT
> {code}
> 5. Run fsck -move to move the corrupted file to /lost+found; the error 
> message in the namenode log is:
> {code}
> 2014-06-11 15:48:16,686 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at 
> Wed Jun 11 15:48:16 CST 2014
> 2014-06-11 15:48:16,894 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 
> Total time for transactions(ms): 9 Number of transactions batched in Syncs: 0 
> Number of syncs: 25 SyncTimes(ms): 73
> 2014-06-11 15:48:16,991 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Error reading block
> java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader 
> with packetLen=66048 header data: offsetInBlock: 65536
> seqno: 1
> lastPacketInBlock: false
> dataLen: 65536
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.readTrailingEmptyPacket(RemoteBlockReader2.java:259)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:220)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
> at 
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:649)
> at 
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:543)
> at 
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:460)
> at 
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:324)
> at 
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:233)
> at 
> org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> at 
> org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1192)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> 

[jira] [Updated] (HDFS-9721) Allow Delimited PB OIV tool to run upon fsimage that contains INodeReference

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9721:

Labels: supportability  (was: )

> Allow Delimited PB OIV tool to run upon fsimage that contains INodeReference
> 
>
> Key: HDFS-9721
> URL: https://issues.apache.org/jira/browse/HDFS-9721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>  Labels: supportability
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9721.01.patch, HDFS-9721.02.patch, 
> HDFS-9721.03.patch, HDFS-9721.04.patch, HDFS-9721.05.patch
>
>
> HDFS-6673 added a Delimited output format to the OIV tool for the protocol 
> buffer based fsimage.
> However, if the fsimage contains an {{INodeReference}}, the tool fails on:
> {code}Preconditions.checkState(e.getRefChildrenCount() == 0);{code}
> This jira proposes allowing the tool to finish, so that users can get the 
> full metadata.
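
As a hedged illustration of the proposed direction, here is a self-contained 
model in which the reference children are walked instead of asserted away; 
{{DirEntry}} is a stand-in for the fsimage.proto message, and the helper is 
hypothetical:

{code:java}
import java.util.Arrays;
import java.util.List;

// Hypothetical model of the change, not the real OIV code.
public class OivRefChildrenSketch {
  // Stand-in for the protobuf DirEntry message and its repeated refChildren.
  static class DirEntry {
    private final List<Integer> refChildren;
    DirEntry(Integer... refIds) { this.refChildren = Arrays.asList(refIds); }
    List<Integer> getRefChildrenList() { return refChildren; }
    int getRefChildrenCount() { return refChildren.size(); }
  }

  static void processRefChild(int refId) { // hypothetical helper
    System.out.println("emit metadata for INodeReference id " + refId);
  }

  public static void main(String[] args) {
    DirEntry e = new DirEntry(7, 8);
    // Before: Preconditions.checkState(e.getRefChildrenCount() == 0) aborts.
    // After (sketch): handle the references so the dump can finish.
    for (int refId : e.getRefChildrenList()) {
      processRefChild(refId);
    }
  }
}
{code}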



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9405) Warmup NameNode EDEK caches in background thread

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9405:

Labels: supportability  (was: )

> Warmup NameNode EDEK caches in background thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Xiao Chen
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9405.01.patch, HDFS-9405.02.patch, 
> HDFS-9405.03.patch, HDFS-9405.04.patch, HDFS-9405.05.patch, 
> HDFS-9405.06.patch, HDFS-9405.07.patch, HDFS-9405.08.patch, 
> HDFS-9405.09.patch, HDFS-9405.10.patch, HDFS-9405.11.patch, 
> HDFS-9405.12.patch, HDFS-9405.13.patch
>
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or time out. It should be done in a 
> separate thread so that a proper error message can be returned to the RPC 
> caller.
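
A minimal sketch of the background-thread idea, assuming a hypothetical 
{{warmUpEdekCache}} pre-fetch call and example key names; the real patch 
warms the NameNode's key provider caches:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of warming the EDEK cache off the RPC path; the key names and the
// warm-up call are illustrative assumptions.
public class EdekCacheWarmupSketch {
  static void warmUpEdekCache(String keyName) {
    // Hypothetical stand-in for pre-fetching EDEKs from the key provider.
    System.out.println("pre-fetching EDEKs for " + keyName);
  }

  public static void main(String[] args) {
    ExecutorService warmer = Executors.newSingleThreadExecutor(r -> {
      Thread t = new Thread(r, "EDEK-cache-warmer");
      t.setDaemon(true); // never block startup or shutdown on the provider
      return t;
    });
    warmer.submit(() -> {
      for (String keyName : new String[] {"ezKey1", "ezKey2"}) { // examples
        warmUpEdekCache(keyName);
      }
    });
    warmer.shutdown();
  }
}
{code}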



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10344) DistributedFileSystem#getTrashRoots should skip encryption zone that does not have .Trash

2016-05-02 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10344:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~andrew.wang] for the review. I've committed the fix to trunk, branch-2 
and branch-2.8.

> DistributedFileSystem#getTrashRoots should skip encryption zone that does not 
> have .Trash
> -
>
> Key: HDFS-10344
> URL: https://issues.apache.org/jira/browse/HDFS-10344
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-10344.00.patch
>
>
> HDFS-8831 added trash support for encryption zones. For an encryption zone 
> that does not have .Trash created yet, DistributedFileSystem#getTrashRoots 
> should skip it rather than call listStatus() and break with a 
> FileNotFoundException. I will post a patch for the fix shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10320) Rack failures may result in NN terminate

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267969#comment-15267969
 ] 

Hadoop QA commented on HDFS-10320:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s 
{color} | {color:red} root: patch generated 1 new + 229 unchanged - 4 fixed = 
230 total (was 233) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 28s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 192m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| 

[jira] [Commented] (HDFS-10344) DistributedFileSystem#getTrashRoots should skip encryption zone that does not have .Trash

2016-05-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267960#comment-15267960
 ] 

Xiaoyu Yao commented on HDFS-10344:
---

Thanks [~andrew.wang] for the review. I'll commit the patch shortly. 

> DistributedFileSystem#getTrashRoots should skip encryption zone that does not 
> have .Trash
> -
>
> Key: HDFS-10344
> URL: https://issues.apache.org/jira/browse/HDFS-10344
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10344.00.patch
>
>
> HDFS-8831 added trash support for encryption zones. For an encryption zone 
> that does not have .Trash created yet, DistributedFileSystem#getTrashRoots 
> should skip it rather than call listStatus() and break with a 
> FileNotFoundException. I will post a patch for the fix shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-05-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267959#comment-15267959
 ] 

Xiaoyu Yao commented on HDFS-10324:
---

Thanks [~andrew.wang] and [~jojochuang]. HDFS-10344 is for a different issue. 
Let's have the HdfsAdmin refactor included in HDFS-10324, and I will review it 
when [~jojochuang] has a new patch available. 

> Trash directory in an encryption zone should be pre-created with sticky bit
> ---
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deletes a file, with permission 
> drwx------. This is a serious bug, because any other non-privileged user 
> will not be able to delete any files within the encryption zone: they do not 
> have the permission to move directories to the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky 
> bit set.
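
A minimal sketch of the pre-creation, assuming a zone at the example path 
/zones/zone1; the real patch wires this into the admin tooling rather than a 
standalone program:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: pre-create <zone>/.Trash with mode 1777 (rwxrwxrwt) so every user
// can create trash entries but only owners can remove them.
public class ProvisionEzTrashSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path trash = new Path("/zones/zone1", FileSystem.TRASH_PREFIX); // example
    FsPermission stickyAll = new FsPermission((short) 01777);
    fs.mkdirs(trash, stickyAll);
    // mkdirs applies the umask, so set the exact mode explicitly afterwards.
    fs.setPermission(trash, stickyAll);
  }
}
{code}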



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10353) Fix hadoop-hdfs-native-client compilation on Windows

2016-05-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267947#comment-15267947
 ] 

Brahma Reddy Battula commented on HDFS-10353:
-

[~andrew.wang], thanks for the commit...

> Fix hadoop-hdfs-native-client compilation on Windows
> 
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails on Windows 
> with the following:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10353) Fix hadoop-hdfs-native-client compilation on Windows

2016-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10353:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks again Brahma for the find and fix, committed to trunk.

> Fix hadoop-hdfs-native-client compilation on Windows
> 
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails on Windows 
> with the following:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10353) Fix hadoop-hdfs-native-client compilation on Windows

2016-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10353:
---
Summary: Fix hadoop-hdfs-native-client compilation on Windows  (was: 
hadoop-hdfs-native-client compilation failing )

> Fix hadoop-hdfs-native-client compilation on Windows
> 
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails on Windows 
> with the following:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10344) DistributedFileSystem#getTrashRoots should skip encryption zone that does not have .Trash

2016-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267869#comment-15267869
 ] 

Andrew Wang commented on HDFS-10344:


Thanks Xiaoyu for the find and fix, patch LGTM +1.

> DistributedFileSystem#getTrashRoots should skip encryption zone that does not 
> have .Trash
> -
>
> Key: HDFS-10344
> URL: https://issues.apache.org/jira/browse/HDFS-10344
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10344.00.patch
>
>
> HDFS-8831 added trash support for encryption zones. For an encryption zone 
> that does not have .Trash created yet, DistributedFileSystem#getTrashRoots 
> should skip it rather than call listStatus() and break with a 
> FileNotFoundException. I will post a patch for the fix shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267872#comment-15267872
 ] 

Andrew Wang commented on HDFS-10353:


Thanks for the find+fix Brahma! Patch LGTM. I do not have a Windows env to 
test, but will trust you on this one :)

> hadoop-hdfs-native-client compilation failing 
> --
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails on Windows 
> with the following:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267864#comment-15267864
 ] 

Hadoop QA commented on HDFS-10354:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 53s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 37s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 32s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801812/HDFS-10354.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-10354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 1359b525a3de 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d187112 |
| Default Java | 1.7.0_95 |
| Multi-JDK 

[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-05-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267848#comment-15267848
 ] 

Wei-Chiu Chuang commented on HDFS-10324:


Yeah, I didn't do that. I was under the impression that Xiaoyu had filed a JIRA for 
that (HDFS-10344), but apparently I was wrong. I can add that part for sure.

> Trash directory in an encryption zone should be pre-created with sticky bit
> ---
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, since they 
> do not have permission to move directories into the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.
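For illustration, a hedged sketch of such pre-creation (not the actual patch; the
zone path and the explicit setPermission call are assumptions):

{code}
// Hedged sketch: pre-create /enc_zone/.Trash with permissions 1777
// (world-writable plus sticky bit) so every user can create a trash
// subdirectory but cannot remove other users' entries.
// The zone path is illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ProvisionTrash {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path trash = new Path("/enc_zone/.Trash");
    fs.mkdirs(trash, new FsPermission((short) 01777));
    // mkdirs applies the configured umask, so set the final mode
    // explicitly to keep the sticky bit.
    fs.setPermission(trash, new FsPermission((short) 01777));
  }
}
{code}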



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2580) NameNode#main(...) can make use of GenericOptionsParser.

2016-05-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2580:
--
Fix Version/s: (was: 3.0.0)
   2.6.5
   2.7.3

Committed this to branch-2, branch-2.8, branch-2.7, branch-2.6.

> NameNode#main(...) can make use of GenericOptionsParser.
> 
>
> Key: HDFS-2580
> URL: https://issues.apache.org/jira/browse/HDFS-2580
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HDFS-2580.patch
>
>
> DataNode supports passing generic opts when calling via {{hdfs datanode}}. 
> NameNode can support the same thing as well, but doesn't right now.
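As a hedged, standalone sketch of the pattern being requested (not the committed
change), running generic options through {{GenericOptionsParser}} before handling
the remaining arguments looks roughly like:

{code}
// Minimal sketch: parse generic options (-D key=value, -conf, -fs, ...)
// into the Configuration, then hand the remaining args to the service.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class MainWithGenericOptions {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Strips recognized generic options and applies them to conf.
    String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
    System.out.println("remaining args: " + String.join(" ", remaining));
  }
}
{code}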



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267828#comment-15267828
 ] 

Andrew Wang commented on HDFS-10324:


Thanks Wei-Chiu, LGTM overall. I think this is ready for commit, though it 
looks like you didn't do the HdfsAdmin refactor that Xiaoyu asked for?

> Trash directory in an encryption zone should be pre-created with sticky bit
> ---
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, since they 
> do not have permission to move directories into the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267804#comment-15267804
 ] 

Hadoop QA commented on HDFS-9890:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 43s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
24s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 52s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 26m 7s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 41 new + 29 unchanged - 0 fixed = 70 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 31m 4s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_95 with JDK 
v1.7.0_95 generated 41 new + 29 unchanged - 0 fixed = 70 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 49s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 43s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed CTEST tests | 
test_libhdfs_mini_stress_hdfspp_test_shim_static |
| JDK v1.7.0_95 Failed CTEST tests | 
test_libhdfs_mini_stress_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801828/HDFS-9890.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-9890 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 1afce10fed7b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d187112 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 

[jira] [Updated] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9902:
---
Attachment: HDFS-9902-05.patch

Uploading a patch to address the above comments.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, 
> HDFS-9902-04.patch, HDFS-9902-05.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10 GB, this wastes a lot of 
> RAM_DISK capacity. 
> Since the usage of RAM_DISK can be 100%, I don't want the 
> dfs.datanode.du.reserved configured for DISK to impact the usage of tmpfs.
> So can we add a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267782#comment-15267782
 ] 

Mingliang Liu commented on HDFS-9732:
-

Thanks for the patch. May I ask why {{StringBuilder}} is preferred to the {{+}} 
operator in this patch (though it was welcomed by the comment above)? Thanks.

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data,
> information that is potentially useful for Kerberos diagnostics is lost.
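On the {{StringBuilder}} question above: for a single concatenation expression,
javac typically compiles {{+}} into an equivalent {{StringBuilder}} chain anyway,
so an explicit builder mostly helps when appends span several statements. A
hedged, standalone sketch of the kind of diagnostic output discussed here (all
field names are hypothetical stand-ins, not the actual identifier fields):

{code}
// Illustrative only: a toString() that surfaces issue/expiry times,
// which are the details useful for Kerberos diagnostics.
import java.util.Date;

class TokenIdentifierSketch {
  String owner = "hdfs";
  String renewer = "yarn";
  int sequenceNumber = 42;
  long issueDate = 1462147200000L;   // sample epoch millis
  long maxDate = 1462752000000L;

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    sb.append("token ").append(sequenceNumber)
      .append(" for ").append(owner)
      .append(" with renewer ").append(renewer)
      .append(", issued ").append(new Date(issueDate))
      .append(", expires ").append(new Date(maxDate));
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(new TokenIdentifierSketch());
  }
}
{code}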



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10345) [umbrella] Implement an asynchronous DistributedFileSystem

2016-05-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou resolved HDFS-10345.
--
Resolution: Duplicate

Closed this as a duplicate of HDFS-9924.

> [umbrella] Implement an asynchronous DistributedFileSystem
> --
>
> Key: HDFS-10345
> URL: https://issues.apache.org/jira/browse/HDFS-10345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is proposed to implement an asynchronous DistributedFileSystem based on 
> AsyncFileSystem APIs in HADOOP-12910.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10346) Implement asynchronous setPermission for DistributedFileSystem

2016-05-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10346:
-
Parent Issue: HDFS-9924  (was: HDFS-10345)

> Implement asynchronous setPermission for DistributedFileSystem
> --
>
> Key: HDFS-10346
> URL: https://issues.apache.org/jira/browse/HDFS-10346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is proposed to implement an asynchronous setPermission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10350) Implement asynchronous setOwner for DistributedFileSystem

2016-05-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10350:
-
Parent Issue: HDFS-9924  (was: HDFS-10345)

> Implement asynchronous setOwner for DistributedFileSystem
> -
>
> Key: HDFS-10350
> URL: https://issues.apache.org/jira/browse/HDFS-10350
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is proposed to implement an asynchronous setOwner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10224) Implement asynchronous rename for DistributedFileSystem

2016-05-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10224:
-
Parent Issue: HDFS-9924  (was: HDFS-10345)

> Implement asynchronous rename for DistributedFileSystem
> ---
>
> Key: HDFS-10224
> URL: https://issues.apache.org/jira/browse/HDFS-10224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-10224-HDFS-9924.000.patch, 
> HDFS-10224-HDFS-9924.001.patch, HDFS-10224-HDFS-9924.002.patch, 
> HDFS-10224-HDFS-9924.003.patch, HDFS-10224-HDFS-9924.004.patch, 
> HDFS-10224-HDFS-9924.005.patch, HDFS-10224-HDFS-9924.006.patch, 
> HDFS-10224-HDFS-9924.007.patch, HDFS-10224-HDFS-9924.008.patch, 
> HDFS-10224-HDFS-9924.009.patch, HDFS-10224-and-HADOOP-12909.000.patch
>
>
> This is proposed to implement an asynchronous rename.
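As a hedged illustration of the general shape of such an API (a hypothetical
interface; the actual HDFS-9924 work may differ), an asynchronous rename returns
a future immediately instead of blocking on the NameNode round trip:

{code}
// Sketch only: the executor stands in for the non-blocking RPC machinery.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class AsyncRenameSketch {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  Future<Boolean> renameAsync(String src, String dst) {
    return pool.submit(() -> {
      // placeholder for the rename RPC to the NameNode
      System.out.println("rename " + src + " -> " + dst);
      return true;
    });
  }

  public static void main(String[] args) throws Exception {
    AsyncRenameSketch fs = new AsyncRenameSketch();
    Future<Boolean> pending = fs.renameAsync("/a", "/b"); // returns immediately
    System.out.println("rename succeeded: " + pending.get());
    fs.pool.shutdown();
  }
}
{code}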



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10345) [umbrella] Implement an asynchronous DistributedFileSystem

2016-05-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267725#comment-15267725
 ] 

Xiaobing Zhou commented on HDFS-10345:
--

[~Apache9] good point, but reversing that would take a lot of work, given that 
the synchronous DFS was implemented first and has been in place for a long time.

> [umbrella] Implement an asynchronous DistributedFileSystem
> --
>
> Key: HDFS-10345
> URL: https://issues.apache.org/jira/browse/HDFS-10345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is proposed to implement an asynchronous DistributedFileSystem based on 
> AsyncFileSystem APIs in HADOOP-12910.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267637#comment-15267637
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Hi [~ste...@apache.org], wonder if you could take a look at rev 004? Thank you!


> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data,
> information that is potentially useful for Kerberos diagnostics is lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10357) Ozone: Replace Jersey container with Netty Container

2016-05-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10357:

Description: In the ozone branch, we have implemented Web Interface calls 
using JAX-RS. This was very useful when the REST interfaces were in flux. This 
JIRA proposes to replace Jersey-based code with pure Netty and remove any 
dependency that Ozone has on Jersey. This will create both faster and simpler 
code in the Ozone web interface.  (was: In the ozone branch, we have implemented 
Web Interface calls using JAX-RS. This was very useful when the REST interfaces 
where in flux. This JIRA proposes to replace Jersey based code with pure netty 
and remove any dependency that Ozone has on Jersey. This will create with both 
faster and simpler code in Ozone web interface.)

> Ozone: Replace Jersey container with Netty Container
> 
>
> Key: HDFS-10357
> URL: https://issues.apache.org/jira/browse/HDFS-10357
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
>
> In the ozone branch, we have implemented Web Interface calls using JAX-RS. 
> This was very useful when the REST interfaces were in flux. This JIRA 
> proposes to replace Jersey-based code with pure Netty and remove any 
> dependency that Ozone has on Jersey. This will create both faster and simpler 
> code in the Ozone web interface.
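For a feel of the direction described, here is a hedged, minimal Netty 4 HTTP
endpoint (handler and port are illustrative, not the Ozone code):

{code}
// Minimal sketch: a raw Netty HTTP server answering every request with 200 OK.
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;

public class NettyHttpSketch {
  public static void main(String[] args) throws Exception {
    EventLoopGroup boss = new NioEventLoopGroup(1);
    EventLoopGroup worker = new NioEventLoopGroup();
    try {
      ServerBootstrap b = new ServerBootstrap()
          .group(boss, worker)
          .channel(NioServerSocketChannel.class)
          .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override protected void initChannel(SocketChannel ch) {
              ch.pipeline().addLast(new HttpServerCodec(),
                  new SimpleChannelInboundHandler<HttpRequest>() {
                    @Override protected void channelRead0(
                        ChannelHandlerContext ctx, HttpRequest req) {
                      FullHttpResponse resp = new DefaultFullHttpResponse(
                          HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                      ctx.writeAndFlush(resp)
                         .addListener(ChannelFutureListener.CLOSE);
                    }
                  });
            }
          });
      Channel ch = b.bind(9864).sync().channel(); // port is illustrative
      ch.closeFuture().sync();
    } finally {
      boss.shutdownGracefully();
      worker.shutdownGracefully();
    }
  }
}
{code}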



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2016-05-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-10356:
---

Assignee: Anu Engineer

> Ozone: Container server needs enhancements to control of bind address for 
> greater flexibility and testability.
> --
>
> Key: HDFS-10356
> URL: https://issues.apache.org/jira/browse/HDFS-10356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Anu Engineer
>
> The container server, as implemented in class 
> {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
> currently does not offer the same degree of flexibility as our other RPC 
> servers for controlling the network interface and port used in the bind call. 
>  There is no "bind-host" property, so it is not possible to select all 
> available network interfaces via the 0.0.0.0 wildcard address.  If the 
> requested port is different from the actual bound port (e.g. setting the port 
> to 0 in test cases), then the actual bound port is not exposed to 
> clients.
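For reference, a hedged sketch of the usual pattern (hypothetical property name,
not the Ozone code): honor a bind-host override, allow port 0, and report the
port actually bound so tests can discover it:

{code}
// Sketch: bind to a configurable host (wildcard by default), request an
// ephemeral port, then read back the port actually assigned.
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindHostSketch {
  public static void main(String[] args) throws Exception {
    // "dfs.container.bind-host" is a hypothetical key for illustration.
    String host = System.getProperty("dfs.container.bind-host", "0.0.0.0");
    int requestedPort = 0; // 0 => let the OS pick, useful in tests
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(host, requestedPort));
    InetSocketAddress bound = (InetSocketAddress) server.getLocalAddress();
    System.out.println("actually bound to " + bound); // expose this to clients
    server.close();
  }
}
{code}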



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10357) Ozone: Replace Jersey container with Netty Container

2016-05-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10357:
---

 Summary: Ozone: Replace Jersey container with Netty Container
 Key: HDFS-10357
 URL: https://issues.apache.org/jira/browse/HDFS-10357
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


In the ozone branch, we have implemented Web Interface calls using JAX-RS. This 
was very useful when the REST interfaces were in flux. This JIRA proposes to 
replace Jersey-based code with pure Netty and remove any dependency that Ozone 
has on Jersey. This will create both faster and simpler code in the Ozone web 
interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267614#comment-15267614
 ] 

Xiaoyu Yao commented on HDFS-9902:
--

Looks good to me. Some nits and a suggestion:

{code}
Specific Storage type based reservation also supported. The property should be 
followed with 
corresponding storage types ([ssd]/[disk]/[archive]/[ram_disk]) for HDFS 
storage policies.
{code}

==>

{code}
Specific storage type based reservation is also supported. The property can be 
followed with
corresponding storage types ([ssd]/[disk]/[archive]/[ram_disk]) for cluster 
with heterogeneous storage.
{code}
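To make the proposal concrete, a hedged example of how the per-storage-type keys
might be set (the exact key suffixes depend on the final patch; treat them as
assumptions):

{code}
// Sketch: a default reservation for DISK, with a per-storage-type
// override so tmpfs (RAM_DISK) reserves nothing.
import org.apache.hadoop.conf.Configuration;

public class ReservedPerStorageType {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("dfs.datanode.du.reserved", 10L * 1024 * 1024 * 1024); // 10 GB
    conf.setLong("dfs.datanode.du.reserved.ram_disk", 0L); // assumed key suffix
    System.out.println(conf.getLong("dfs.datanode.du.reserved.ram_disk", -1L));
  }
}
{code}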

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, 
> HDFS-9902-04.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10 GB, this wastes a lot of 
> RAM_DISK capacity. 
> Since the usage of RAM_DISK can be 100%, I don't want the 
> dfs.datanode.du.reserved configured for DISK to impact the usage of tmpfs.
> So can we add a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10351) Ozone: Optimize key writes to chunks by providing a bulk write implementation in ChunkOutputStream.

2016-05-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10351:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to the feature branch. Thanks for your contribution 
[~cnauroth] and [~jingzhao] Thanks for the code reviews.

> Ozone: Optimize key writes to chunks by providing a bulk write implementation 
> in ChunkOutputStream.
> ---
>
> Key: HDFS-10351
> URL: https://issues.apache.org/jira/browse/HDFS-10351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-10351-HDFS-7240.001.patch, 
> HDFS-10351-HDFS-7240.002.patch, HDFS-10351-HDFS-7240.003.patch
>
>
> HDFS-10268 introduced the {{ChunkOutputStream}} class as part of end-to-end 
> integration of Ozone receiving key content and writing it to chunks in a 
> container.  That patch provided an implementation of the mandatory 
> single-byte write method.  We can improve performance by adding an 
> implementation of the bulk write method too.
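A hedged sketch of the optimization being described (the buffering and chunk
hand-off below are illustrative, not the actual {{ChunkOutputStream}} code):
override the bulk {{write(byte[], int, int)}} so callers no longer fall back to
{{OutputStream}}'s default one-byte-at-a-time loop.

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

class BulkWriteSketch extends OutputStream {
  private final ByteBuffer buffer = ByteBuffer.allocate(4 * 1024 * 1024);

  @Override
  public void write(int b) throws IOException {
    if (!buffer.hasRemaining()) {
      flushBuffer();
    }
    buffer.put((byte) b);
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    // Copy large slices per call instead of one byte per call.
    while (len > 0) {
      if (!buffer.hasRemaining()) {
        flushBuffer();
      }
      int n = Math.min(len, buffer.remaining());
      buffer.put(b, off, n);
      off += n;
      len -= n;
    }
  }

  private void flushBuffer() throws IOException {
    buffer.flip();
    // placeholder: hand the filled chunk to the container transport here
    buffer.clear();
  }
}
{code}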



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-02 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267551#comment-15267551
 ] 

Xiaowei Zhu commented on HDFS-9890:
---

Uploaded a patch whose changes mainly replace the code blocks that randomly 
throw errors with event-callback logic.

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and randomly converts error codes returned by 
> network functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect
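As a generic, hedged illustration of the fault-injection idea (written in Java
for consistency with the other sketches here, although libhdfs++ itself is C++;
all names are hypothetical):

{code}
// Sketch: wrap a network operation and randomly turn its result into an
// error so retry paths get exercised under load.
import java.io.IOException;
import java.util.Random;
import java.util.function.Supplier;

class FaultInjectorSketch {
  private final Random rng = new Random();
  private final double failProbability;

  FaultInjectorSketch(double failProbability) {
    this.failProbability = failProbability;
  }

  <T> T call(Supplier<T> networkOp) throws IOException {
    if (rng.nextDouble() < failProbability) {
      // stands in for a simulated disconnect or timeout
      throw new IOException("injected network failure");
    }
    return networkOp.get();
  }

  public static void main(String[] args) {
    FaultInjectorSketch injector = new FaultInjectorSketch(0.3);
    for (int i = 0; i < 5; i++) {
      try {
        System.out.println(injector.call(() -> "read ok"));
      } catch (IOException e) {
        System.out.println("would retry after: " + e.getMessage());
      }
    }
  }
}
{code}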



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267545#comment-15267545
 ] 

Hadoop QA commented on HDFS-9902:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 115m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 46s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 252m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| JDK v1.8.0_92 Timed out junit tests | 

[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-02 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.003.patch

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and randomly converts error codes returned by 
> network functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10205) libhdfs++: Integrate logging with the C API

2016-05-02 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10205:
---
Assignee: (was: James Clampffer)

> libhdfs++: Integrate logging with the C API
> ---
>
> Key: HDFS-10205
> URL: https://issues.apache.org/jira/browse/HDFS-10205
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>
> Logging was added in HDFS-9118.  The C API has its own API, based on setting 
> errno, that is independent from the rest of the logs.  These should be tied 
> together so that C API events get logged as well when it is used.
> It might also be useful to log the event handlers added in HDFS-9616 as part 
> of this work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267491#comment-15267491
 ] 

Hadoop QA commented on HDFS-10348:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Redundant nullcheck of storageInfo, which is known to be non-null in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.markBlockAsCorrupt(BlockToMarkCorrupt,
 DatanodeStorageInfo, DatanodeDescriptor)  Redundant null check at 
BlockManager.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.markBlockAsCorrupt(BlockToMarkCorrupt,
 DatanodeStorageInfo, DatanodeDescriptor)  Redundant null check at 
BlockManager.java:[line 1410] |
| JDK v1.8.0_91 Failed junit 

[jira] [Updated] (HDFS-10320) Rack failures may result in NN terminate

2016-05-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10320:
-
Attachment: HDFS-10320.03.patch

Hi [~mingma],
Thanks a lot for the enlightening suggestion. Given that we've fixed some things 
along this line before (e.g. HDFS-4937) and the code is tricky (I think), 
refactoring it for better maintainability sounds like a great idea to me.

Patch 3 attempts to do this, in the same way as proposed above; a rough sketch 
appears after the list below. (I tried to consider other options, but Ming's 
suggestion seems to be the best :)).

- Add an overload of {{NetworkTopology#chooseRandom}} that takes excludeNodes.
- The 'choose a node that's not excluded' loop is now inside {{NetworkTopology}}.
- Kept the same approach to choosing randomly as before: simply get a random 
node and check it against excludeNodes until a satisfying one is found. 
{{countNumOfAvailableNodes}} is checked before the loop, inside the lock, so we 
shouldn't have any infinite loops.
- No {{NetworkTopology#InvalidTopologyException}} is thrown from 
{{chooseRandom}}; if no node is available, it returns {{null}}. Of course this 
is incompatible, but since the class is marked as unstable, I think we're fine. 
This also fixes the original motivation of this jira - failure to choose a node 
shouldn't terminate the NN - since no exception is thrown now.
- Updated the necessary callers in Hadoop to handle the above change.
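A rough, hedged sketch of the shape of that overload (a hypothetical
simplification, not the actual patch):

{code}
// Sketch: count available candidates up front; return null instead of
// throwing when everything is excluded, so the caller decides what to do.
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class ChooseRandomSketch {
  private final Random rng = new Random();

  String chooseRandom(List<String> nodesInScope, Collection<String> excludedNodes) {
    long available = nodesInScope.stream()
        .filter(n -> !excludedNodes.contains(n))
        .count();
    if (available == 0) {
      return null; // no exception: the NN should not terminate over this
    }
    while (true) {
      String candidate = nodesInScope.get(rng.nextInt(nodesInScope.size()));
      if (!excludedNodes.contains(candidate)) {
        return candidate;
      }
    }
  }

  public static void main(String[] args) {
    ChooseRandomSketch t = new ChooseRandomSketch();
    System.out.println(t.chooseRandom(
        Arrays.asList("dn1", "dn2", "dn3"), Collections.singleton("dn2")));
  }
}
{code}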

> Rack failures may result in NN terminate
> 
>
> Key: HDFS-10320
> URL: https://issues.apache.org/jira/browse/HDFS-10320
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10320.01.patch, HDFS-10320.02.patch, 
> HDFS-10320.03.patch
>
>
> If rack failures end up leaving only one rack available, 
> {{BlockPlacementPolicyDefault#chooseRandom}} may get an 
> {{InvalidTopologyException}} when calling {{NetworkTopology#chooseRandom}}, 
> which then propagates all the way out to {{BlockManager}}'s 
> {{ReplicationMonitor}} thread and terminates the NN.
> Log:
> {noformat}
> 2016-02-24 09:22:01,514  WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For 
> more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-02-24 09:22:01,958  ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> org.apache.hadoop.net.NetworkTopology$InvalidTopologyException: Failed to 
> find datanode (scope="" excludedScope="/rack_a5").
>   at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:729)
>   at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:635)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:580)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:348)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:3746)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$200(BlockManager.java:3711)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1400)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1306)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3682)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3634)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HDFS-10354:
--
Attachment: HDFS-10354.HDFS-8707.002.patch

> Fix compilation & unit test issues on Mac OS X with clang compiler
> --
>
> Key: HDFS-10354
> URL: https://issues.apache.org/jira/browse/HDFS-10354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: OS X: 10.11
> clang: Apple LLVM version 7.0.2 (clang-700.1.81)
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
> Attachments: HDFS-10354.HDFS-8707.001.patch, 
> HDFS-10354.HDFS-8707.002.patch
>
>
> Compilation fails with multiple errors on Mac OS X.
> Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS 
> X.
> Compile error 1:
> {noformat}
>  [exec] Scanning dependencies of target common_obj
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
>  [exec] [ 47%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
>  [exec] [ 48%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
>  error: no viable conversion from 'optional' to 'optional<long long>'
>  [exec] return result;
>  [exec]^~
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional' to 'std::experimental::nullopt_t' for 1st 
> argument
>  [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase<T>() {};
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional' to 'const 
> std::experimental::optional<long long> &' for 1st argument
>  [exec]   optional(const optional& rhs)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional' to 'std::experimental::optional<long long> &&' 
> for 1st argument
>  [exec]   optional(optional&& rhs) 
> noexcept(is_nothrow_move_constructible<T>::value)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional' to 'const long long &' for 1st argument
>  [exec]   constexpr optional(const T& v) : OptionalBase<T>(v) {}
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional' to 'long long &&' for 1st argument
>  [exec]   constexpr optional(T&& v) : OptionalBase<T>(constexpr_move(v)) 
> {}
>  [exec] ^
>  [exec] 1 error generated.
>  [exec] make[2]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o]
>  Error 1
>  [exec] make[1]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
>  [exec] make: *** [all] Error 2
> {noformat}
> Compile error 2:
> {noformat}
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
>  error: use of overloaded operator '<<' is ambiguous (with operand types 
> 'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
>  [exec]   << " Existing thread count = " 
> << worker_threads_.size());
>  [exec]   
> ~~~^~
> 

[jira] [Updated] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-05-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10324:
---
Attachment: HDFS-10324.004.patch

Revision 4. This patch addresses all of Andrew's comments from the last 
round. Thanks again for the review!

> Trash directory in an encryption zone should be pre-created with sticky bit
> ---
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.
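
As a minimal sketch (not the actual patch), pre-creating such a trash directory 
from a client-side admin utility could look like this; the /ez zone path and 
the client-side approach are assumptions for illustration:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateTrash {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical encryption zone root; in practice it would come from an
    // admin argument or from listing the encryption zones.
    Path trash = new Path("/ez/.Trash");
    fs.mkdirs(trash);
    // 1777 = world-writable plus the sticky bit, like /tmp: every user can
    // create a trash subdirectory, but cannot remove other users' entries.
    fs.setPermission(trash, new FsPermission((short) 01777));
  }
}
{code}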



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10351) Ozone: Optimize key writes to chunks by providing a bulk write implementation in ChunkOutputStream.

2016-05-02 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267248#comment-15267248
 ] 

Anu Engineer commented on HDFS-10351:
-

+1, looks good to me.

On the precondition removal: I left it in because other types will be added in 
the future, and I did not want to forget that precondition check at that time. 
Let us remove it for now so that the code is more checkstyle-clean.


> Ozone: Optimize key writes to chunks by providing a bulk write implementation 
> in ChunkOutputStream.
> ---
>
> Key: HDFS-10351
> URL: https://issues.apache.org/jira/browse/HDFS-10351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-10351-HDFS-7240.001.patch, 
> HDFS-10351-HDFS-7240.002.patch, HDFS-10351-HDFS-7240.003.patch
>
>
> HDFS-10268 introduced the {{ChunkOutputStream}} class as part of end-to-end 
> integration of Ozone receiving key content and writing it to chunks in a 
> container.  That patch provided an implementation of the mandatory 
> single-byte write method.  We can improve performance by adding an 
> implementation of the bulk write method too.
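
To illustrate the idea (a sketch under assumed names, not the actual 
ChunkOutputStream code): the bulk override copies whole slices into the chunk 
buffer instead of funneling every byte through the single-byte method.
{code}
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only: an OutputStream that fills a fixed-size chunk buffer
// and hands full chunks to the container.
abstract class BufferingChunkStream extends OutputStream {
  private final byte[] chunk = new byte[4 * 1024 * 1024]; // assumed chunk size
  private int pos = 0;

  @Override
  public void write(int b) throws IOException {
    // Mandatory single-byte path: one call per byte.
    chunk[pos++] = (byte) b;
    if (pos == chunk.length) {
      flushChunk();
    }
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    // Bulk path: copy as large a slice as fits on each iteration instead of
    // paying a method call and bounds checks per byte.
    while (len > 0) {
      int n = Math.min(len, chunk.length - pos);
      System.arraycopy(b, off, chunk, pos, n);
      pos += n;
      off += n;
      len -= n;
      if (pos == chunk.length) {
        flushChunk();
      }
    }
  }

  private void flushChunk() throws IOException {
    writeChunkToContainer(chunk, pos); // hypothetical container call
    pos = 0;
  }

  // Stand-in for the real chunk-persistence call; flush/close handling omitted.
  abstract void writeChunkToContainer(byte[] data, int len) throws IOException;
}
{code}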



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10351) Ozone: Optimize key writes to chunks by providing a bulk write implementation in ChunkOutputStream.

2016-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267127#comment-15267127
 ] 

Chris Nauroth commented on HDFS-10351:
--

The failure in {{TestStorageContainerManager}} is unrelated to this patch.  
There is, though, a problem that needs to be fixed with control of the bound 
network address used by the container server.  It goes well beyond the scope of 
this issue, so I filed HDFS-10356 to track it separately.

[~anu] and [~jingzhao], what do you think of patch v003?

> Ozone: Optimize key writes to chunks by providing a bulk write implementation 
> in ChunkOutputStream.
> ---
>
> Key: HDFS-10351
> URL: https://issues.apache.org/jira/browse/HDFS-10351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-10351-HDFS-7240.001.patch, 
> HDFS-10351-HDFS-7240.002.patch, HDFS-10351-HDFS-7240.003.patch
>
>
> HDFS-10268 introduced the {{ChunkOutputStream}} class as part of end-to-end 
> integration of Ozone receiving key content and writing it to chunks in a 
> container.  That patch provided an implementation of the mandatory 
> single-byte write method.  We can improve performance by adding an 
> implementation of the bulk write method too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2016-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267116#comment-15267116
 ] 

Chris Nauroth commented on HDFS-10356:
--

This was discovered as fallout of a test failure in 
{{TestStorageContainerManager}} seen on HDFS-10351.  From what I see so far, 
the problems are:

# {{MiniOzoneCluster}} does not set the port to 0, so it always uses the 
default of 50011.  This is problematic for test isolation.
# Even if the port were specified as 0, there is no way for clients to 
retrieve the actual bound port that was selected.  Something likely needs to 
call {{DatanodeID#setContainerPort}} after the bind.
# There is no "bind-host" configuration property supported, so you can't use 
0.0.0.0 to bind to all available network interfaces.
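
A minimal sketch of the bind pattern being asked for; a plain ServerSocket 
stands in for the Netty-based container server:
{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindExample {
  public static void main(String[] args) throws Exception {
    String bindHost = "0.0.0.0"; // would come from a new *.bind-host property
    int configuredPort = 0;      // 0 = let the OS pick a free port (tests)
    try (ServerSocket server = new ServerSocket()) {
      server.bind(new InetSocketAddress(bindHost, configuredPort));
      // The actual bound port must be exposed to clients after the bind,
      // e.g. via something like DatanodeID#setContainerPort.
      int actualPort = server.getLocalPort();
      System.out.println("bound to " + bindHost + ":" + actualPort);
    }
  }
}
{code}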


> Ozone: Container server needs enhancements to control of bind address for 
> greater flexibility and testability.
> --
>
> Key: HDFS-10356
> URL: https://issues.apache.org/jira/browse/HDFS-10356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>
> The container server, as implemented in class 
> {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
> currently does not offer the same degree of flexibility as our other RPC 
> servers for controlling the network interface and port used in the bind call. 
>  There is no "bind-host" property, so it is not possible to select all 
> available network interfaces via the 0.0.0.0 wildcard address.  If the 
> requested port is different from the actual bound port (i.e. setting port to 
> 0 in test cases), then there is no exposure of that actual bound port to 
> clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2016-05-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-10356:
-
Description: The container server, as implemented in class 
{{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
currently does not offer the same degree of flexibility as our other RPC 
servers for controlling the network interface and port used in the bind call.  
There is no "bind-host" property, so it is not possible to select all available 
network interfaces via the 0.0.0.0 wildcard address.  If the requested port is 
different from the actual bound port (i.e. setting port to 0 in test cases), 
then there is no exposure of that actual bound port to clients.  (was: The 
container server, as implemented in class 
{{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
currently does not offer the same degree of flexibility as our other RPC 
servers for controlling the network interface and port used in the bind call.  
There is no "bind-host" property, so it is not possible to control the exact 
network interface selected.  If the requested port is different from the actual 
bound port (i.e. setting port to 0 in test cases), then there is no exposure of 
that actual bound port to clients.)

> Ozone: Container server needs enhancements to control of bind address for 
> greater flexibility and testability.
> --
>
> Key: HDFS-10356
> URL: https://issues.apache.org/jira/browse/HDFS-10356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>
> The container server, as implemented in class 
> {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
> currently does not offer the same degree of flexibility as our other RPC 
> servers for controlling the network interface and port used in the bind call. 
>  There is no "bind-host" property, so it is not possible to select all 
> available network interfaces via the 0.0.0.0 wildcard address.  If the 
> requested port is different from the actual bound port (i.e. setting port to 
> 0 in test cases), then there is no exposure of that actual bound port to 
> clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2016-05-02 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-10356:


 Summary: Ozone: Container server needs enhancements to control of 
bind address for greater flexibility and testability.
 Key: HDFS-10356
 URL: https://issues.apache.org/jira/browse/HDFS-10356
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Chris Nauroth


The container server, as implemented in class 
{{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
currently does not offer the same degree of flexibility as our other RPC 
servers for controlling the network interface and port used in the bind call.  
There is no "bind-host" property, so it is not possible to control the exact 
network interface selected.  If the requested port is different from the actual 
bound port (i.e. setting port to 0 in test cases), then there is no exposure of 
that actual bound port to clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-05-02 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-3702:

Fix Version/s: 2.8.0

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.8.0, 2.9.0
>
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702.010.patch, 
> HDFS-3702.011.patch, HDFS-3702.012.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.
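
As a usage sketch: assuming the flag lands as CreateFlag.NO_LOCAL_WRITE (the 
flag name is my reading of the patch, not confirmed in this thread), a 
WAL-style writer could opt out of local block placement like this:
{code}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class WalWriter {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // NO_LOCAL_WRITE asks the NameNode not to place a replica on the local
    // DataNode, so losing this box does not also take a WAL replica with it.
    FSDataOutputStream out = fs.create(
        new Path("/hbase/wal/region-00042.log"), // hypothetical WAL path
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE),
        4096,               // buffer size
        (short) 3,          // replication
        128 * 1024 * 1024L, // block size
        null);              // no progress callback
    out.close();
  }
}
{code}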



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-05-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267023#comment-15267023
 ] 

Lei (Eddy) Xu commented on HDFS-3702:
-

Thanks, [~stack].  I backported it into branch 2.8.

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.8.0, 2.9.0
>
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702.010.patch, 
> HDFS-3702.011.patch, HDFS-3702.012.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2016-05-02 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10348:
--
Status: Patch Available  (was: Open)

> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348.patch
>
>
> Namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before it adds it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. Namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the other 2 nodes) to replicate the block 
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. Namenode removed the block and node N3 from the corrupt replicas map.
>    It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3.
>    C2 also encountered the checksum exception and reported the bad block to 
> the namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas 
> map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present in the 
> corruptReplicasMap only, since on removing a node we only go through the 
> blocks that are present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2016-05-02 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10348:
--
Attachment: HDFS-10348.patch

> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348.patch
>
>
> Namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before it adds it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. Namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the other 2 nodes) to replicate the block 
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. Namenode removed the block and node N3 from the corrupt replicas map.
>    It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3.
>    C2 also encountered the checksum exception and reported the bad block to 
> the namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas 
> map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present in the 
> corruptReplicasMap only, since on removing a node we only go through the 
> blocks that are present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10352) Allow users to get last access time of a given directory

2016-05-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266999#comment-15266999
 ] 

Colin Patrick McCabe commented on HDFS-10352:
-

-1.  As [~linyiqun] commented, the performance would be bad, because it is O(N) 
in the number of files in the directory.  It would also be very 
confusing to operators, since it doesn't match the semantics of any other known 
filesystem or operating system.  Finally, if users want to take the maximum 
value of all the entries in a directory, they can easily do this by calling 
listDir and computing the maximum themselves.  This is just as (in)efficient as 
what is proposed here, and much cleaner.
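
For example, a client-side computation along those lines, using only public 
FileSystem APIs, could be:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MaxAccessTime {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    long max = 0;
    // Same O(N) listing cost the server-side proposal would pay,
    // but kept on the client instead of inside the NameNode.
    for (FileStatus st : fs.listStatus(new Path(args[0]))) {
      if (st.isFile()) {
        max = Math.max(max, st.getAccessTime());
      }
    }
    System.out.println("max access time: " + max);
  }
}
{code}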

> Allow users to get last access time of a given directory
> 
>
> Key: HDFS-10352
> URL: https://issues.apache.org/jira/browse/HDFS-10352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Eric Lin
>Assignee: Lin Yiqun
>Priority: Minor
>
> Currently the FileStatus.getAccessTime() function returns 0 if the path is a 
> directory. It would be ideal if, when a directory path is passed, the code 
> went through all the files under the directory and returned the MAX access 
> time of all the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266985#comment-15266985
 ] 

Arpit Agarwal commented on HDFS-9902:
-

+1 pending Jenkins. Will hold off committing today to let [~xyao] take a look.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, 
> HDFS-9902-04.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10GB, a lot of RAM_DISK capacity is 
> wasted.
> Since the usage of RAM_DISK can be 100%, I don't want the 
> dfs.datanode.du.reserved value configured for DISK to impact the usage of 
> tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?
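
A sketch of the lookup order being discussed; the per-type key follows the 
'dfs.datanode.du.reserved.ram_disk' form quoted elsewhere in this thread, and 
the helper itself is illustrative, not the patch code:
{code}
import org.apache.hadoop.conf.Configuration;

public class ReservedSpace {
  // Prefer the per-storage-type key and fall back to the generic
  // dfs.datanode.du.reserved when no specific value is configured.
  static long getReserved(Configuration conf, String storageType) {
    long generic = conf.getLong("dfs.datanode.du.reserved", 0);
    return conf.getLong(
        "dfs.datanode.du.reserved." + storageType.toLowerCase(), generic);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("dfs.datanode.du.reserved", 10L * 1024 * 1024 * 1024);
    conf.setLong("dfs.datanode.du.reserved.ram_disk", 0);
    System.out.println(getReserved(conf, "RAM_DISK")); // 0, not 10GB
  }
}
{code}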



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9902:
---
Attachment: HDFS-9902-04.patch

Uploaded the patch to address the above comments.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, 
> HDFS-9902-04.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10GB, a lot of RAM_DISK capacity is 
> wasted.
> Since the usage of RAM_DISK can be 100%, I don't want the 
> dfs.datanode.du.reserved value configured for DISK to impact the usage of 
> tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266943#comment-15266943
 ] 

Arpit Agarwal commented on HDFS-9902:
-

Thanks for addressing the feedback [~brahmareddy]. Looks like there is a typo 
in the documentation (should read _If specific storage type reservation is 
configured_). Also suggest replacing _considered_ with _used_:
{code}
327   'dfs.datanode.du.reserved.ram_disk'. If specific storage type 
reservation is configured
328   then dfs.datanode.du.reserved will be considered.
{code}
+1 with this addressed.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10GB, a lot of RAM_DISK capacity is 
> wasted.
> Since the usage of RAM_DISK can be 100%, I don't want the 
> dfs.datanode.du.reserved value configured for DISK to impact the usage of 
> tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266943#comment-15266943
 ] 

Arpit Agarwal edited comment on HDFS-9902 at 5/2/16 4:39 PM:
-

Thanks for addressing the feedback [~brahmareddy]. Looks like there is a typo 
in the documentation (should read _If specific storage type reservation is not 
configured_). Also suggest replacing _considered_ with _used_:
{code}
327   'dfs.datanode.du.reserved.ram_disk'. If specific storage type 
reservation is configured
328   then dfs.datanode.du.reserved will be considered.
{code}
+1 with this addressed.


was (Author: arpitagarwal):
Thanks for addressing the feedback [~brahmareddy]. Looks like there is a typo 
in the documentation (should read _If specific storage type reservation is 
configured_). Also suggest replacing _considered_ with _used_:
{code}
327   'dfs.datanode.du.reserved.ram_disk'. If specific storage type 
reservation is configured
328   then dfs.datanode.du.reserved will be considered.
{code}
+1 with this addressed.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, HDFS-9902.patch
>
>
> Now Hadoop support different storage type for DISK, SSD, ARCHIVE and 
> RAM_DISK, but they share one configuration dfs.datanode.du.reserved.
> The DISK size may be several TB and the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN, and I set  dfs.datanode.du.reserved values 10GB, this will waste a lot of 
> RAM_DISK size. 
> Since the usage of RAM_DISK can be 100%, so I don't want 
> dfs.datanode.du.reserved configured for DISK impacts the usage of tmpfs.
> So can we make a new configuration for RAM_DISK or just skip this 
> configuration for RAM_DISK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266715#comment-15266715
 ] 

Daryn Sharp commented on HDFS-10301:


Still catching up and need to review the patch.  First question: how is this 
interleaving happening on a frequent basis?

An interesting observation (if I interpreted the logs correctly): processing 
all 4 storages with ~14k blocks/storage appears to take minutes, with tens of 
seconds elapsing between processing each storage.  There's some serious 
contention there that seems indicative of a nasty bug or a suboptimal 
configuration exacerbating this bug.

Is the DN rpc timeout set to something very low?  Has the number of RPC 
handlers been greatly increased?   Are there frequent deletes of massive trees? 
 Is there a lot of decomm'ing with a low check interval?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.01.patch, zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then NameNode while process these two reports 
> at the same time can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-05-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266677#comment-15266677
 ] 

Andras Bokor commented on HDFS-9353:


[~nijel] I think the comment is good. The "if not" belongs to the second part 
of the sentence, not the first one.
Anyway, it is open to misinterpretation. What do you think the comment means? 
How about this?:
"Get the password file from the user's environment variable. If it is not set, 
get it from conf."

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Nicole Pazmany
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from ENV first.
> I think this makes sense, since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.
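
A sketch of the env-first order the code implements; the env var and conf key 
names here are assumptions for illustration, not verified against 
JavaKeyStoreProvider:
{code}
import org.apache.hadoop.conf.Configuration;

public class PasswordFileLookup {
  static String getPasswordFile(Configuration conf) {
    // Check the user's environment variable first, so a particular run can
    // override whatever the configuration says.
    String pwFile = System.getenv("HADOOP_KEYSTORE_PASSWORD_FILE");
    if (pwFile == null) {
      // Fall back to the configuration only when the env var is not set.
      pwFile = conf.get(
          "hadoop.security.keystore.java-keystore-provider.password-file");
    }
    return pwFile;
  }

  public static void main(String[] args) {
    System.out.println(getPasswordFile(new Configuration()));
  }
}
{code}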



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266622#comment-15266622
 ] 

Hadoop QA commented on HDFS-10354:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 41s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
11s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 58s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 55s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 36s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 36s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 36s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 37s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 37s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 37s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 39s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 39s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 27s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801732/HDFS-10354.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-10354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 9a8ed1d7edd2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 

[jira] [Created] (HDFS-10355) Fix thread_local related build issue on Mac OS X

2016-05-02 Thread Tibor Kiss (JIRA)
Tibor Kiss created HDFS-10355:
-

 Summary: Fix thread_local related build issue on Mac OS X
 Key: HDFS-10355
 URL: https://issues.apache.org/jira/browse/HDFS-10355
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
 Environment: OS: Mac OS X 10.11
clang: Apple LLVM version 7.0.2 (clang-700.1.81)
Reporter: Tibor Kiss


The native hdfs library uses C++11 features heavily.
One such feature is the thread_local storage class, which is supported by GCC, 
Visual Studio and the community version of the clang compiler, but not by 
Apple's clang (the default on OS X boxes).
See further details here: http://stackoverflow.com/a/29929949

Even though not many Hadoop clusters run on OS X, developers still use this 
platform for development.

The problem can be solved multiple ways:
 a) Stick to gcc/g++ or community-based clang on OS X. Developers will need 
extra steps to build Hadoop.
 b) Work around thread_local with a helper class.
 c) Get rid of all the globals marked with thread_local. An interface change 
will be required.
 d) Disable multi-threading support in the native client on OS X and document 
this limitation. 

Compile error related to thread_local:
{noformat}
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:66:1:
 error: thread-local storage is not supported for the current target
 [exec] thread_local std::string errstr;
 [exec] ^
 [exec] 1 warning and 1 error generated.
 [exec] make[2]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/hdfs.cc.o] 
Error 1
 [exec] make[1]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/all] Error 2
 [exec] make: *** [all] Error 2
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-2173:
---
Attachment: HDFS-2173.01.patch

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
> Attachments: HDFS-2173.01.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> can never cause data loss and would rarely occur in practice (the dir would 
> have to fail between writing the fsimage file and writing VERSION)
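
A sketch of the tolerate-single-failure pattern this asks for; StorageDirectory 
and writeVersionFile are stand-ins, not the real NNStorage API:
{code}
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

public class VersionWriter {
  // Stand-in for the real storage directory abstraction.
  interface StorageDirectory {
    void writeVersionFile() throws IOException;
  }

  static void writeAllVersionFiles(List<StorageDirectory> dirs)
      throws IOException {
    for (Iterator<StorageDirectory> it = dirs.iterator(); it.hasNext();) {
      StorageDirectory sd = it.next();
      try {
        sd.writeVersionFile();
      } catch (IOException e) {
        it.remove(); // mark just this directory as failed, keep going
      }
    }
    if (dirs.isEmpty()) {
      // Only abort the whole operation when every directory has failed.
      throw new IOException("All storage directories failed");
    }
  }
}
{code}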



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266529#comment-15266529
 ] 

Andras Bokor commented on HDFS-2173:


[~tlipcon] I created a patch for this JIRA. Could you please review it? Is that 
what you meant here?
Thanks in advance.

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Attachments: HDFS-2173.01.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> can never cause data loss and would rarely occur in practice (the dir would 
> have to fail between writing the fsimage file and writing VERSION)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-2173:
--

Assignee: Andras Bokor

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Attachments: HDFS-2173.01.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> can never cause data loss and would rarely occur in practice (the dir would 
> have to fail between writing the fsimage file and writing VERSION)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HDFS-10354:
--
Attachment: HDFS-10354.HDFS-8707.001.patch

> Fix compilation & unit test issues on Mac OS X with clang compiler
> --
>
> Key: HDFS-10354
> URL: https://issues.apache.org/jira/browse/HDFS-10354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: OS X: 10.11
> clang: Apple LLVM version 7.0.2 (clang-700.1.81)
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
> Attachments: HDFS-10354.HDFS-8707.001.patch
>
>
> Compilation fails with multiple errors on Mac OS X.
> Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS 
> X.
> Compile error 1:
> {noformat}
>  [exec] Scanning dependencies of target common_obj
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
>  [exec] [ 47%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
>  [exec] [ 48%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
>  error: no viable conversion from 'optional<long>' to 'optional<long long>'
>  [exec] return result;
>  [exec]^~
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'std::experimental::nullopt_t' for 1st 
> argument
>  [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase<T>() {};
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'const 
> std::experimental::optional<long long> &' for 1st argument
>  [exec]   optional(const optional& rhs)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'std::experimental::optional<long 
> long> &&' for 1st argument
>  [exec]   optional(optional&& rhs) 
> noexcept(is_nothrow_move_constructible<T>::value)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'const long long &' for 1st argument
>  [exec]   constexpr optional(const T& v) : OptionalBase<T>(v) {}
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'long long &&' for 1st argument
>  [exec]   constexpr optional(T&& v) : OptionalBase<T>(constexpr_move(v)) 
> {}
>  [exec] ^
>  [exec] 1 error generated.
>  [exec] make[2]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o]
>  Error 1
>  [exec] make[1]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
>  [exec] make: *** [all] Error 2
> {noformat}
> Compile error 2:
> {noformat}
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
>  error: use of overloaded operator '<<' is ambiguous (with operand types 
> 'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
>  [exec]   << " Existing thread count = " 
> << worker_threads_.size());
>  [exec]   
> ~~~^~
> {noformat}
> There is an additional compile failure in the native client related to 
> thread_local.
[jira] [Updated] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HDFS-10354:
--
Status: Patch Available  (was: Open)

> Fix compilation & unit test issues on Mac OS X with clang compiler
> --
>
> Key: HDFS-10354
> URL: https://issues.apache.org/jira/browse/HDFS-10354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: OS X: 10.11
> clang: Apple LLVM version 7.0.2 (clang-700.1.81)
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
> Attachments: HDFS-10354.HDFS-8707.001.patch
>
>
> Compilation fails with multiple errors on Mac OS X.
> Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS 
> X.
> Compile error 1:
> {noformat}
>  [exec] Scanning dependencies of target common_obj
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
>  [exec] [ 45%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
>  [exec] [ 46%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
>  [exec] [ 47%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
>  [exec] [ 48%] Building CXX object 
> main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
>  error: no viable conversion from 'optional<long>' to 'optional<long long>'
>  [exec] return result;
>  [exec]^~
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'std::experimental::nullopt_t' for 1st 
> argument
>  [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase<T>() {};
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'const 
> std::experimental::optional<long long> &' for 1st argument
>  [exec]   optional(const optional& rhs)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'std::experimental::optional<long 
> long> &&' for 1st argument
>  [exec]   optional(optional&& rhs) 
> noexcept(is_nothrow_move_constructible<T>::value)
>  [exec]   ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'const long long &' for 1st argument
>  [exec]   constexpr optional(const T& v) : OptionalBase<T>(v) {}
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
>  note: candidate constructor not viable: no known conversion from 
> 'std::experimental::optional<long>' to 'long long &&' for 1st argument
>  [exec]   constexpr optional(T&& v) : OptionalBase<T>(constexpr_move(v)) 
> {}
>  [exec] ^
>  [exec] 1 error generated.
>  [exec] make[2]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o]
>  Error 1
>  [exec] make[1]: *** 
> [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
>  [exec] make: *** [all] Error 2
> {noformat}
> Compile error 2:
> {noformat}
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
>  error: use of overloaded operator '<<' is ambiguous (with operand types 
> 'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
>  [exec]   << " Existing thread count = " 
> << worker_threads_.size());
>  [exec]   
> ~~~^~
> {noformat}
> There is an additional compile failure in the native client related to 
> thread_local.

[jira] [Updated] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HDFS-10354:
--
Description: 
Compilation fails with multiple errors on Mac OS X.
Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS X.

Compile error 1:
{noformat}
 [exec] Scanning dependencies of target common_obj
 [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
 [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
 [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
 [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
 [exec] [ 47%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
 [exec] [ 48%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
 error: no viable conversion from 'optional<long>' to 'optional<long long>'
 [exec] return result;
 [exec]^~
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'std::experimental::nullopt_t' for 1st 
argument
 [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase<T>() {};
 [exec] ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'const std::experimental::optional<long 
long> &' for 1st argument
 [exec]   optional(const optional& rhs)
 [exec]   ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'std::experimental::optional<long long> 
&&' for 1st argument
 [exec]   optional(optional&& rhs) 
noexcept(is_nothrow_move_constructible<T>::value)
 [exec]   ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'const long long &' for 1st argument
 [exec]   constexpr optional(const T& v) : OptionalBase<T>(v) {}
 [exec] ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional<long>' to 'long long &&' for 1st argument
 [exec]   constexpr optional(T&& v) : OptionalBase<T>(constexpr_move(v)) {}
 [exec] ^
 [exec] 1 error generated.
 [exec] make[2]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o] 
Error 1
 [exec] make[1]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
 [exec] make: *** [all] Error 2
{noformat}

Compile error 2:
{noformat}
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
 error: use of overloaded operator '<<' is ambiguous (with operand types 
'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
 [exec]   << " Existing thread count = " << 
worker_threads_.size());
 [exec]   
~~~^~
{noformat}

There is an additional compile failure in the native client related to thread_local.
The complexity of that error warrants tracking it in a separate ticket.
{noformat}
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:66:1:
 error: thread-local storage is not supported for the current target
 [exec] thread_local std::string errstr;
 [exec] ^
 [exec] 1 warning and 1 error generated.
 [exec] make[2]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/hdfs.cc.o] 
Error 1
 [exec] make[1]: *** 
[main/native/libhdfspp/lib/bindings/c/CMakeFiles/bindings_c_obj.dir/all] Error 2
 [exec] make: *** [all] Error 2
{noformat}
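
For reference, a portable fallback for the thread_local error above is pthread 
thread-specific storage. This is only a sketch of one possible workaround, not 
the fix ultimately adopted for this ticket:
{code:c++}
// Sketch of a pthread-based replacement for: thread_local std::string errstr;
// Illustrative workaround only; Apple clang-700 rejects C++11 thread_local.
#include <pthread.h>
#include <string>

static pthread_key_t errstr_key;
static pthread_once_t errstr_once = PTHREAD_ONCE_INIT;

static void DeleteErrStr(void* p) { delete static_cast<std::string*>(p); }
static void MakeErrStrKey() { pthread_key_create(&errstr_key, DeleteErrStr); }

// Returns this thread's private error string, creating it on first use.
static std::string& GetErrStr() {
  pthread_once(&errstr_once, MakeErrStrKey);
  auto* s = static_cast<std::string*>(pthread_getspecific(errstr_key));
  if (s == nullptr) {
    s = new std::string();
    pthread_setspecific(errstr_key, s);
  }
  return *s;
}
{code}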


Unit 

[jira] [Created] (HDFS-10354) Fix compilation & unit test issues on Mac OS X with clang compiler

2016-05-02 Thread Tibor Kiss (JIRA)
Tibor Kiss created HDFS-10354:
-

 Summary: Fix compilation & unit test issues on Mac OS X with clang 
compiler
 Key: HDFS-10354
 URL: https://issues.apache.org/jira/browse/HDFS-10354
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
 Environment: OS X: 10.11
clang: Apple LLVM version 7.0.2 (clang-700.1.81)
Reporter: Tibor Kiss
Assignee: Tibor Kiss


Compilation fails with multiple errors on Mac OS X.
Unit test test_test_libhdfs_zerocopy_hdfs_static also fails to execute on OS X.

Compile error 1:
{noformat}
 [exec] Scanning dependencies of target common_obj
 [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/base64.cc.o
 [exec] [ 45%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/status.cc.o
 [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/sasl_digest_md5.cc.o
 [exec] [ 46%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/hdfs_public_api.cc.o
 [exec] [ 47%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/options.cc.o
 [exec] [ 48%] Building CXX object 
main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc:85:12:
 error: no viable conversion from 'optional' to 'optional'
 [exec] return result;
 [exec]^~
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:427:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional' to 'std::experimental::nullopt_t' for 1st 
argument
 [exec]   constexpr optional(nullopt_t) noexcept : OptionalBase() {};
 [exec] ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:429:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional' to 'const std::experimental::optional &' for 1st argument
 [exec]   optional(const optional& rhs)
 [exec]   ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:438:3:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional' to 'std::experimental::optional 
&&' for 1st argument
 [exec]   optional(optional&& rhs) 
noexcept(is_nothrow_move_constructible::value)
 [exec]   ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:447:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional' to 'const long long &' for 1st argument
 [exec]   constexpr optional(const T& v) : OptionalBase(v) {}
 [exec] ^
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2/optional.hpp:449:13:
 note: candidate constructor not viable: no known conversion from 
'std::experimental::optional' to 'long long &&' for 1st argument
 [exec]   constexpr optional(T&& v) : OptionalBase(constexpr_move(v)) {}
 [exec] ^
 [exec] 1 error generated.
 [exec] make[2]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/configuration.cc.o] 
Error 1
 [exec] make[1]: *** 
[main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
 [exec] make: *** [all] Error 2
{noformat}
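
The failure comes from the bundled tr2 optional: as the candidate list shows, 
it has no converting constructor between different specializations, so a value 
has to be unwrapped and re-wrapped explicitly. A minimal sketch of the pattern 
(function name and types are assumptions, not the actual configuration.cc code):
{code:c++}
// Assumes a toolchain that still ships <experimental/optional>; the project
// itself uses the bundled third_party/tr2/optional.hpp.
#include <experimental/optional>

using std::experimental::optional;
using std::experimental::nullopt;

optional<long long> GetIntValue(bool found, long raw) {
  optional<long> result = found ? optional<long>(raw) : optional<long>();
  // return result;  // no viable conversion: no optional<U> -> optional<T> ctor
  if (result) {
    return optional<long long>(*result);  // unwrap, widen, re-wrap explicitly
  }
  return nullopt;
}
{code}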

Compile error 2:
{noformat}
 [exec] 
/Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc:285:66:
 error: use of overloaded operator '<<' is ambiguous (with operand types 
'hdfs::LogMessage' and 'size_type' (aka 'unsigned long'))
 [exec]   << " Existing thread count = " << 
worker_threads_.size());
 [exec]   
~~~^~
{noformat}
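
A stand-in sketch of why this is ambiguous only on OS X (the LogMessage below 
is an assumed shape, not the real hdfs::LogMessage interface): size_type is 
unsigned long there, which converts with equal rank to more than one integer 
overload, and casting to an explicitly supported width resolves it.
{code:c++}
#include <cstdint>
#include <vector>

struct LogMessage {  // hypothetical stand-in for hdfs::LogMessage
  LogMessage& operator<<(const char* s) { return *this; }
  LogMessage& operator<<(int32_t v) { return *this; }
  LogMessage& operator<<(uint64_t v) { return *this; }
};

int main() {
  std::vector<int> worker_threads_(4);
  LogMessage msg;
  // msg << worker_threads_.size();  // ambiguous on OS X: unsigned long
  //                                 // converts equally to int32_t and uint64_t
  msg << " Existing thread count = "
      << static_cast<uint64_t>(worker_threads_.size());  // exact match
  return 0;
}
{code}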

There is an additional compile failure in the native client related to thread_local.
The complexity of that error warrants tracking it in a separate ticket.

Unit test failure:
{noformat}
 [exec]  2/16 Test  #2: test_test_libhdfs_zerocopy_hdfs_static 
..***Failed2.07 sec
 [exec] log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
 [exec] log4j:WARN Please initialize 

[jira] [Assigned] (HDFS-10352) Allow users to get last access time of a given directory

2016-05-02 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun reassigned HDFS-10352:


Assignee: Lin Yiqun

> Allow users to get last access time of a given directory
> 
>
> Key: HDFS-10352
> URL: https://issues.apache.org/jira/browse/HDFS-10352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Eric Lin
>Assignee: Lin Yiqun
>Priority: Minor
>
> Currently the FileStatus.getAccessTime() function returns 0 if the path is a 
> directory. It would be ideal if, when a directory path is passed, the code 
> went through all the files under the directory and returned the MAX access 
> time of all the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10352) Allow users to get last access time of a given directory

2016-05-02 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266464#comment-15266464
 ] 

Lin Yiqun commented on HDFS-10352:
--

{quote}
Maybe we can create another getAccessTime() with a different number of 
parameters. Default to not checking children files, but if forced, the code can 
check accordingly.
{quote}
This comment looks reasonable. I will post a patch later; assigning this JIRA to myself.
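
A minimal sketch of that overload (the class, method shape, and checkChildren 
flag below are illustrative only, not an existing Hadoop API):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class AccessTimeUtil {
  public static long getAccessTime(FileSystem fs, Path path,
      boolean checkChildren) throws IOException {
    FileStatus status = fs.getFileStatus(path);
    if (!status.isDirectory() || !checkChildren) {
      return status.getAccessTime();  // current behavior: 0 for directories
    }
    long max = 0L;
    // Walk every file under the directory and return the MAX access time,
    // as proposed in the issue description.
    RemoteIterator<LocatedFileStatus> files = fs.listFiles(path, true);
    while (files.hasNext()) {
      max = Math.max(max, files.next().getAccessTime());
    }
    return max;
  }
}
{code}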

> Allow users to get last access time of a given directory
> 
>
> Key: HDFS-10352
> URL: https://issues.apache.org/jira/browse/HDFS-10352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Eric Lin
>Priority: Minor
>
> Currently the FileStatus.getAccessTime() function returns 0 if the path is a 
> directory. It would be ideal if, when a directory path is passed, the code 
> went through all the files under the directory and returned the MAX access 
> time of all the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266285#comment-15266285
 ] 

Hadoop QA commented on HDFS-9902:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.server.balancer.TestBalancer |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.TestRenameWhileOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801697/HDFS-9902-03.patch |
| JIRA 

[jira] [Updated] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10353:

Affects Version/s: 3.0.0
  Environment: windows

> hadoop-hdfs-native-client compilation failing 
> --
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails with the 
> following error:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266198#comment-15266198
 ] 

Hadoop QA commented on HDFS-10353:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 41s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801692/HDFS-10353.patch |
| JIRA Issue | HDFS-10353 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux c84c6b27995d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 971af60 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15335/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 

[jira] [Updated] (HDFS-9902) Support different values of dfs.datanode.du.reserved per storage type

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9902:
---
Attachment: HDFS-9902-03.patch

[~arpitagarwal] and [~xyao], thanks a lot for looking into this issue. Uploaded 
a new patch to address your comments.

> Support different values of dfs.datanode.du.reserved per storage type
> -
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and 
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10GB, this wastes a lot of 
> RAM_DISK capacity.
> Since the usage of RAM_DISK can reach 100%, I don't want the 
> dfs.datanode.du.reserved value configured for DISK to impact the usage of tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?
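
For illustration, a per-storage-type override in hdfs-site.xml might look like 
the following (the suffixed key name is an assumption until a patch settles the 
scheme):
{code:xml}
<!-- Hypothetical hdfs-site.xml fragment: a global default plus a RAM_DISK
     override so tmpfs can be used fully. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- 10 GB reserved on DISK volumes -->
</property>
<property>
  <name>dfs.datanode.du.reserved.ram_disk</name>
  <value>0</value> <!-- no reservation on RAM_DISK (tmpfs) -->
</property>
{code}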



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10353:

Status: Patch Available  (was: Open)

> hadoop-hdfs-native-client compilation failing 
> --
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails with the 
> following error:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10352) Allow users to get last access time of a given directory

2016-05-02 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266174#comment-15266174
 ] 

Eric Lin commented on HDFS-10352:
-

Hi [~linyiqun], Thanks for your comment. 

You have a valid point. Maybe we can create another getAccessTime() with a 
different number of parameters. Default to not checking children files, but if 
forced, the code can check accordingly.

I understand the potential issue here if there are too many files under the 
directory, but this is still a handy feature that can benefit some users, and 
the edge case probably won't happen often.

Thanks

> Allow users to get last access time of a given directory
> 
>
> Key: HDFS-10352
> URL: https://issues.apache.org/jira/browse/HDFS-10352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Eric Lin
>Priority: Minor
>
> Currently the FileStatus.getAccessTime() function returns 0 if the path is a 
> directory. It would be ideal if, when a directory path is passed, the code 
> went through all the files under the directory and returned the MAX access 
> time of all the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10352) Allow users to get last access time of a given directory

2016-05-02 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated HDFS-10352:

Issue Type: Improvement  (was: Bug)

> Allow users to get last access time of a given directory
> 
>
> Key: HDFS-10352
> URL: https://issues.apache.org/jira/browse/HDFS-10352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Eric Lin
>Priority: Minor
>
> Currently the FileStatus.getAccessTime() function returns 0 if the path is a 
> directory. It would be ideal if, when a directory path is passed, the code 
> went through all the files under the directory and returned the MAX access 
> time of all the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10353:

Attachment: HDFS-10353.patch

Uploaded the patch; kindly review.

> hadoop-hdfs-native-client compilation failing 
> --
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails with the 
> following error:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10353) hadoop-hdfs-native-client compilation failing

2016-05-02 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10353:
---

 Summary: hadoop-hdfs-native-client compilation failing 
 Key: HDFS-10353
 URL: https://issues.apache.org/jira/browse/HDFS-10353
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Blocker


After HADOOP-12892, hadoop-hdfs-native-client compilation fails with the 
following error:

{noformat}
F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
hadoop-hdfs-native-client: An Ant BuildException has occured: 
F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
 does not exist.
around Ant part ..
 @ 14:98 in 
F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
BuildException has occured: 
F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
 does not exist.
around Ant part ..
 @ 14:98 in 
F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
at 
org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org