[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-05 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13862:

Attachment: HDFS-13862-02.patch

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch
>
>
> Test steps:
> The below commands are not getting captured in the Router logs.
>  # The destination entry name in the add command; the log only says "Added new 
> mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable
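For illustration only, a minimal sketch of the kind of logging being asked for, with hypothetical class and method names (this is not the attached patch):

{code:java}
// Hypothetical sketch: make safemode and nameservice admin commands visible in
// the Router log. Class and method names are placeholders, not HDFS-13862 code.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RouterAdminLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RouterAdminLoggingSketch.class);

  void enterSafeMode() {
    // ... perform the actual state change ...
    LOG.info("Router admin command received: safemode enter");
  }

  void setNameserviceEnabled(String nsId, boolean enable) {
    // ... update the disabled-nameservice store ...
    LOG.info("Router admin command received: nameservice {} {}",
        enable ? "enable" : "disable", nsId);
  }
}
{code}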






[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604511#comment-16604511
 ] 

Hadoop QA commented on HDFS-11520:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 59s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-11520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938472/HDFS-11520.009.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a4ce86ec3aeb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df0d61e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24968/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24968/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24968/testReport/ |
| Max. process+thread count | 358 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24968/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  

[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-05 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604514#comment-16604514
 ] 

Nanda kumar commented on HDDS-400:
--

Thanks [~elek] for the patch. I agree with [~ajayydv] on this. Instead of 
iterating through all the containers from the report, we can handle this in 
{{DeadNodeHandler}}. We are already removing the replica from 
{{containerStateManager}}; we just have to emit a 
{{SCMEvents.REPLICATE_CONTAINER}} event if existingReplicas does not match 
expectedReplicas.
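For illustration only, a self-contained sketch of that idea with placeholder types standing in for the real SCM classes (this is not the DeadNodeHandler code or the attached patches):

{code:java}
// Sketch: when a node dies, compare the remaining replica count of each
// container it hosted with the expected count and request re-replication.
// The EventSink interface is a stand-in for firing SCMEvents.REPLICATE_CONTAINER.
import java.util.Map;
import java.util.Set;

class DeadNodeReplicationSketch {
  interface EventSink {
    void requestReplication(long containerId, int missingReplicas);
  }

  void onDeadNode(Set<Long> containersOnDeadNode,
                  Map<Long, Integer> existingReplicas,  // counts after removing the dead node
                  Map<Long, Integer> expectedReplicas,
                  EventSink sink) {
    for (long containerId : containersOnDeadNode) {
      int existing = existingReplicas.getOrDefault(containerId, 0);
      int expected = expectedReplicas.getOrDefault(containerId, 3);
      if (existing < expected) {
        sink.requestReplication(containerId, expected - existing);
      }
    }
  }
}
{code}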

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report. It handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so all the reported containers should have 
> enough replicas.
> We can check the performance implications with genesis, but as a first 
> implementation I think it should be good enough.






[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604579#comment-16604579
 ] 

Hudson commented on HDFS-13815:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14881 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14881/])
HDFS-13815. RBF: Add check to order command. Contributed by Ranith (yqlin: rev 
9315db5f5da09c2ef86be168465c16932afa2d85)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java


> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch, HDFS-13815-008.patch, 
> HDFS-13815-009.patch, HDFS-13815-010.patch, HDFS-13815-011.patch
>
>
> No check is being done on the order option.
> It says the mount table was successfully updated even if we don't specify the 
> order option correctly, and the entry is not updated in the mount table.
> Execute the dfsrouteradmin -update command with the below scenarios:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected result:
> An exception on the console, as the order option is missing or not written 
> properly.
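For illustration only, a self-contained sketch of the kind of argument check being requested, with simplified parsing that only knows the -order option (this is not the RouterAdmin code from the attached patches):

{code:java}
// Sketch: after <source> <nameservices> <destination>, every remaining token of
// an -update invocation must be a recognized option; bare values like "RANDOM"
// or misspelled flags like "-ord" should fail instead of reporting success.
class UpdateArgsCheckSketch {
  static void validate(String[] args) {
    int i = 3; // args[0..2] = source, nameservices, destination
    while (i < args.length) {
      if ("-order".equals(args[i])) {
        if (i + 1 >= args.length) {
          throw new IllegalArgumentException("Missing value for -order");
        }
        i += 2; // skip the flag and its value, e.g. RANDOM or HASH
      } else {
        throw new IllegalArgumentException("Unrecognized argument: " + args[i]);
      }
    }
  }
}
{code}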






[jira] [Updated] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-303:
-
Status: Patch Available  (was: Open)

> Removing logic to identify containers to be closed from SCM
> ---
>
> Key: HDDS-303
> URL: https://issues.apache.org/jira/browse/HDDS-303
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-303.000.patch
>
>
> After HDDS-287, we identify the containers to be closed in the datanode and 
> send a Close ContainerAction to SCM. The code that identifies the containers 
> to be closed in SCM is redundant and can be removed.






[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604611#comment-16604611
 ] 

Elek, Marton commented on HDDS-400:
---

I am OK with that.

But what about new nodes? I guess we need to check the replicas not only when a 
datanode becomes dead but also when a new datanode appears (there could be 
over-replication).

How/where can that be handled? Do we need to check the replicas on the first 
container report?

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report. It handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so all the reported containers should have 
> enough replicas.
> We can check the performance implications with genesis, but as a first 
> implementation I think it should be good enough.






[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-05 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604418#comment-16604418
 ] 

Brahma Reddy Battula commented on HDFS-13840:
-

bq. 1. In {{checkReplicaCorrupt()}}, the {{isStriped()}} block check is not 
required. EC files also handle the genstamp the same way as contiguous files.

Done. I misunderstood it initially; I was thinking the striped test cases are 
expected with GS 0, hence I didn't modify it.

bq. 2. Why is the below change required? If a block is marked as corrupted and 
it is corrupted during the write, then it will be marked as invalid.

Since I am adding it to the corrupt storages, which will be updated as corrupt, 
when the same DN reports with a new GS we need to overwrite it. I added a 
comment for the same.
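For illustration only, a self-contained sketch of the decision being discussed, with placeholder names (this is not the BlockManager code from the attached patches):

{code:java}
// Sketch: a replica reported with an older generation stamp than the stored
// block is treated as corrupt; a later report from the same DN carrying the
// current GS must be allowed to overwrite that corrupt marking.
class ReportedReplicaCheckSketch {
  enum Decision { ACCEPT, MARK_CORRUPT, OVERWRITE_CORRUPT }

  static Decision check(long storedGenStamp, long reportedGenStamp,
                        boolean alreadyMarkedCorruptOnThisStorage) {
    if (reportedGenStamp < storedGenStamp) {
      return Decision.MARK_CORRUPT;        // stale RBW/RWR replica
    }
    if (alreadyMarkedCorruptOnThisStorage) {
      return Decision.OVERWRITE_CORRUPT;   // same DN now reports the current GS
    }
    return Decision.ACCEPT;
  }
}
{code}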

> RBW Blocks which are having less GS should be added to Corrupt
> --
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840-005.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close it).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a false 
> positive (anyway the read can fail, as their genstamp is lower).






[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-09-05 Thread Anatoli Shein (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604420#comment-16604420
 ] 

Anatoli Shein commented on HDFS-11520:
--

In the new patch I fixed a race in the previous tests and added a new test for 
the recursive calls. Next I am working on retry/failover.

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.007.patch, 
> HDFS-11520.008.patch, HDFS-11520.009.patch, HDFS-11520.HDFS-8707.000.patch, 
> HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Updated] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-303:
-
Attachment: HDDS-303.000.patch

> Removing logic to identify containers to be closed from SCM
> ---
>
> Key: HDDS-303
> URL: https://issues.apache.org/jira/browse/HDDS-303
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-303.000.patch
>
>
> After HDDS-287, we identify the containers to be closed in the datanode and 
> send a Close ContainerAction to SCM. The code that identifies the containers 
> to be closed in SCM is redundant and can be removed.






[jira] [Assigned] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-378:
-

Assignee: Elek, Marton

> Remove dependencies between hdds/ozone and hdfs proto files
> ---
>
> Key: HDDS-378
> URL: https://issues.apache.org/jira/browse/HDDS-378
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
>
> It would be great to make the hdds/ozone proto files independent from the hdfs 
> proto files. It would help us to start ozone with multiple versions of hadoop.
> It also helps to make artifacts from the hdds protos: HDDS-220.
>  Currently we have a few unused "hdfs.proto" imports in the proto files, and we 
> use the StorageTypeProto from hdfs:
> {code}
> cd hadoop-hdds
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
> common/src/main/proto/StorageContainerLocationProtocol.proto:import 
> "hdfs.proto";
>  cd ../hadoop-ozone
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
> common/src/main/proto/OzoneManagerProtocol.proto:required 
> hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
> common/src/main/proto/OzoneManagerProtocol.proto:optional 
> hadoop.hdfs.StorageTypeProto storageType = 6;
> {code}
> I propose to:
> 1.) remove the hdfs import statements from the proto files
> 2.) copy the StorageTypeProto and create an Hdds version of it (without 
> PROVIDED)






[jira] [Updated] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-389:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-389.001.patch
>
>
> Grpc is now the default protocol for datanode-to-client communication. This 
> jira proposes to remove all instances of these classes from the code.






[jira] [Comment Edited] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-05 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604418#comment-16604418
 ] 

Brahma Reddy Battula edited comment on HDFS-13840 at 9/5/18 1:48 PM:
-

{quote}1. In {{checkReplicaCorrupt()}}, the {{isStriped()}} block check is not 
required. EC files also handle the genstamp the same way as contiguous files.
{quote}
Done. I misunderstood it initially; I was thinking the striped test cases are 
expected with GS 0, hence I didn't modify it.
{quote}2. Why is the below change required? If a block is marked as corrupted 
and it is corrupted during the write, then it will be marked as invalid.
{quote}
Since I am adding it to the corrupt storages, which will be updated as corrupt, 
when the same DN reports with a new GS we need to overwrite it. I added a 
comment for the same.


was (Author: brahmareddy):
bq.1. In {{checkReplicaCorrupt()}}, {{isStriped()}} block check it is not 
required. EC file also handle gstamp same as continues file.

Done. I was misunderstand initially, I was thinking stripped testcases are 
expected with GS 0 .Hence I didn't modified it.

bq.2. Why the below change is required ?, if block is marked as corrupted and 
it is corrupted during write then it will marked as invalid.

Since I am adding to corrupt storages will be updated with corrupt.So when same 
DN report with new GS,we need to overwrite it. I  added the comment for same.

> RBW Blocks which are having less GS should be added to Corrupt
> --
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840-005.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close it).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a false 
> positive (anyway the read can fail, as their genstamp is lower).






[jira] [Commented] (HDFS-13818) Extend OIV to detect FSImage corruption

2018-09-05 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604571#comment-16604571
 ] 

Gabor Bota commented on HDFS-13818:
---

Thanks for working on this [~adam.antal]. This feature is starting to look 
great.

I've noticed the following while looking into HDFS-13818.003.patch:
 * The asflicense header is missing in the {{Corruption}} class.
 * Please consider a better name for the {{Corruption}} class - like 
{{PbImageCorruption}}.
 * For Preconditions.checkState in Corruption: please add an error message 
saying what the failure was. We could also consider using {{assert}} for this 
purpose.
 * It seems like CorruptionType could be an enum. Maybe we could even use a Set 
of those enums for different kinds of corruption (see the sketch below).
 * Code structuring: {{OutputEntryBuilder}} could be part of 
{{PBImageCorruptionDetector}} - that logic would live there, and we could use 
{{Corruption}} just for storing data.
 * Please extend the docs in 
{{hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md}} with 
a description of this feature.
 * Fix the checkstyle issue. There's a [link for it in the Hadoop QA's 
comment|https://builds.apache.org/job/PreCommit-HDFS-Build/24941/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt]
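For illustration only, a sketch of the enum/Set suggestion above, with made-up names (this is not code from the attached patches):

{code:java}
// Sketch: model the corruption kinds as an enum and keep an EnumSet per corrupt
// inode so several kinds of corruption can be recorded for the same entry.
import java.util.EnumSet;

class CorruptionSketch {
  enum CorruptionType { NULL_INODE, MISSING_CHILD, ORPHANED_NODE }

  static final class CorruptEntry {
    final long inodeId;
    final EnumSet<CorruptionType> types = EnumSet.noneOf(CorruptionType.class);

    CorruptEntry(long inodeId) {
      this.inodeId = inodeId;
    }

    void add(CorruptionType type) {
      types.add(type);
    }
  }
}
{code}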

> Extend OIV to detect FSImage corruption
> ---
>
> Key: HDFS-13818
> URL: https://issues.apache.org/jira/browse/HDFS-13818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HDFS-13818.001.patch, HDFS-13818.002.patch, 
> HDFS-13818.003.patch, HDFS-13818.003.patch, 
> OIV_CorruptionDetector_processor.001.pdf, 
> OIV_CorruptionDetector_processor.002.pdf
>
>
> A follow-up Jira for HDFS-13031: an improvement of the OIV is suggested for 
> detecting corruptions like HDFS-13101 in an offline way.
> The reasoning is the following. Apart from a NN startup throwing the error, 
> there is nothing in the customer's hands that could reassure them that the 
> FSImage is good or corrupted.
> Although a real full check of the FSImage is only possible by the NN, for the 
> stack traces associated with the observed corruption cases the solution of 
> putting up a tertiary NN is a bit of overkill. The OIV would be a handy 
> choice: it already has functionality like loading the fsimage and 
> constructing the folder structure, so we just have to add the option of 
> detecting the null INodes. For example, the Delimited OIV processor can 
> already use an on-disk MetadataMap, which reduces memory consumption. There 
> may also be a window for parallelizing: iterating through INodes could, for 
> example, be done in a distributed way, increasing efficiency, so we wouldn't 
> need a high-memory, high-CPU setup just to check the FSImage.
> The suggestion is to add a --detectCorruption option to the OIV which would 
> check the FSImage for consistency.
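For illustration only, a self-contained sketch of the null-INode detection idea with placeholder types (this is not the OIV code or the attached patches):

{code:java}
// Sketch: walk an inode-id -> inode map reconstructed from the fsimage and flag
// directory children whose ids resolve to null (i.e. referenced but missing inodes).
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class NullInodeCheckSketch {
  static List<Long> findNullInodeRefs(Map<Long, List<Long>> childrenByDir,
                                      Map<Long, Object> inodeById) {
    List<Long> corruptIds = new ArrayList<>();
    for (Map.Entry<Long, List<Long>> dir : childrenByDir.entrySet()) {
      for (long childId : dir.getValue()) {
        if (inodeById.get(childId) == null) {
          corruptIds.add(childId);   // referenced child inode is missing
        }
      }
    }
    return corruptIds;
  }
}
{code}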






[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604404#comment-16604404
 ] 

Hudson commented on HDDS-333:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14880 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14880/])
HDDS-333. Create an Ozone Logo. Contributed by Priyanka Nagwekar. (elek: rev 
045270a679ffbe6ab58d4f1808cfb56c1df58e7f)
* (add) hadoop-ozone/docs/static/ozone-logo.png
* (add) hadoop-ozone/docs/static/NOTES.md


> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-333.002.patch, Logo Final.zip, 
> Logo-Ozone-Transparent-Bg.png, Ozone-Logo-Options.png, logo-vote-results.png
>
>
> As part of developing the Ozone website and documentation, it would be nice to 
> have an Ozone logo.






[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604406#comment-16604406
 ] 

Hudson commented on HDDS-358:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14880 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14880/])
HDDS-358. Use DBStore and TableStore for DeleteKeyService. Contributed (nanda: 
rev df0d61e3a07a958fc6d71a910d928c5639011cd7)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java


> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch, 
> HDDS-358.003.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604521#comment-16604521
 ] 

Hadoop QA commented on HDFS-13862:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938468/HDFS-13862-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7dd5f9774c27 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df0d61e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24967/testReport/ |
| Max. process+thread count | 896 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24967/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router logs are not capturing few of 

[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-05 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604537#comment-16604537
 ] 

Ayush Saxena commented on HDFS-13862:
-

Thanks [~brahmareddy], [~elgoiri] & [~SoumyaPN] for the discussion.
I have uploaded the patch with the changes.
Please review :)

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch
>
>
> Test steps:
> The below commands are not getting captured in the Router logs.
>  # The destination entry name in the add command; the log only says "Added new 
> mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604547#comment-16604547
 ] 

Yiqun Lin commented on HDFS-13815:
--

QA report looks good. +1. Committing this.

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch, HDFS-13815-008.patch, 
> HDFS-13815-009.patch, HDFS-13815-010.patch, HDFS-13815-011.patch
>
>
> No check is being done on the order option.
> It says the mount table was successfully updated even if we don't specify the 
> order option correctly, and the entry is not updated in the mount table.
> Execute the dfsrouteradmin -update command with the below scenarios:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected result:
> An exception on the console, as the order option is missing or not written 
> properly.






[jira] [Updated] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13815:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.2
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.1.
Thanks [~SoumyaPN] for reporting this and thanks [~RANith] for the contribution.
Also thanks to the others for the additional review.
BTW, [~RANith], feel free to file a new JIRA for correcting the return code for 
the remaining commands; I can help review it.

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch, HDFS-13815-008.patch, 
> HDFS-13815-009.patch, HDFS-13815-010.patch, HDFS-13815-011.patch
>
>
> No check is being done on the order option.
> It says the mount table was successfully updated even if we don't specify the 
> order option correctly, and the entry is not updated in the mount table.
> Execute the dfsrouteradmin -update command with the below scenarios:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected result:
> An exception on the console, as the order option is missing or not written 
> properly.






[jira] [Updated] (HDDS-222) Remove hdfs command line from ozone distribution.

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-222:
--
Summary: Remove hdfs command line from ozone distribution.  (was: Remove 
hdfs command line from ozone distrubution.)

> Remove hdfs command line from ozone distribution.
> -
>
> Key: HDDS-222
> URL: https://issues.apache.org/jira/browse/HDDS-222
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-222.001.patch, HDDS-222.002.patch, 
> HDDS-222.003.patch, HDDS-222.004.patch
>
>
> As the ozone release artifact doesn't contain stable namenode/datanode code, 
> the hdfs command should be removed from the ozone artifact.
> ozone-dist-layout-stitching could also be simplified to copy only the 
> required jar files (we don't need to copy the namenode/datanode server-side 
> jars, just the common artifacts).






[jira] [Updated] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-05 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13840:

Attachment: HDFS-13840-005.patch

> RBW Blocks which are having less GS should be added to Corrupt
> --
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840-005.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close it).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a false 
> positive (anyway the read can fail, as their genstamp is lower).






[jira] [Updated] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13815:
-
Attachment: HDFS-13815-011.patch

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch, HDFS-13815-008.patch, 
> HDFS-13815-009.patch, HDFS-13815-010.patch, HDFS-13815-011.patch
>
>
> No check is being done on the order option.
> It says the mount table was successfully updated even if we don't specify the 
> order option correctly, and the entry is not updated in the mount table.
> Execute the dfsrouteradmin -update command with the below scenarios:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM
>  
> The console message says "Successfully updated mount point", but the entry is 
> not updated in the mount table.
>  
> Expected result:
> An exception on the console, as the order option is missing or not written 
> properly.






[jira] [Commented] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604407#comment-16604407
 ] 

Nanda kumar commented on HDDS-389:
--

Thanks [~candychencan] for working on this. Overall the patch looks good to me.

We can remove the following two classes as well as part of this jira:
 * 
{{org.apache.hadoop.ozone.container.common.transport.server.XceiverServerInitializer}}
 * 
{{org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler}}

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-389.001.patch
>
>
> Grpc is now the default protocol for datanode-to-client communication. This 
> jira proposes to remove all instances of these classes from the code.






[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604543#comment-16604543
 ] 

Hadoop QA commented on HDFS-13815:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
30s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938475/HDFS-13815-011.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d1ec32f14bbd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df0d61e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24969/testReport/ |
| Max. process+thread count | 1361 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24969/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: 

[jira] [Updated] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-378:
--
Attachment: HDDS-378.001.patch

> Remove dependencies between hdds/ozone and hdfs proto files
> ---
>
> Key: HDDS-378
> URL: https://issues.apache.org/jira/browse/HDDS-378
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-378.001.patch
>
>
> It would be great to make the hdds/ozone proto files independent from the hdfs 
> proto files. It would help us to start ozone with multiple versions of hadoop.
> It also helps to make artifacts from the hdds protos: HDDS-220.
>  Currently we have a few unused "hdfs.proto" imports in the proto files, and we 
> use the StorageTypeProto from hdfs:
> {code}
> cd hadoop-hdds
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
> common/src/main/proto/StorageContainerLocationProtocol.proto:import 
> "hdfs.proto";
>  cd ../hadoop-ozone
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
> common/src/main/proto/OzoneManagerProtocol.proto:required 
> hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
> common/src/main/proto/OzoneManagerProtocol.proto:optional 
> hadoop.hdfs.StorageTypeProto storageType = 6;
> {code}
> I propose to:
> 1.) remove the hdfs import statements from the proto files
> 2.) copy the StorageTypeProto and create an Hdds version of it (without 
> PROVIDED)






[jira] [Updated] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-378:
--
Status: Patch Available  (was: Open)

> Remove dependencies between hdds/ozone and hdfs proto files
> ---
>
> Key: HDDS-378
> URL: https://issues.apache.org/jira/browse/HDDS-378
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-378.001.patch
>
>
> It would be great to make the hdds/ozone proto files independent from the hdfs 
> proto files. It would help us to start ozone with multiple versions of hadoop.
> It also helps to make artifacts from the hdds protos: HDDS-220.
>  Currently we have a few unused "hdfs.proto" imports in the proto files, and we 
> use the StorageTypeProto from hdfs:
> {code}
> cd hadoop-hdds
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
> common/src/main/proto/StorageContainerLocationProtocol.proto:import 
> "hdfs.proto";
>  cd ../hadoop-ozone
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
> common/src/main/proto/OzoneManagerProtocol.proto:required 
> hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
> common/src/main/proto/OzoneManagerProtocol.proto:optional 
> hadoop.hdfs.StorageTypeProto storageType = 6;
> {code}
> I propose to:
> 1.) remove the hdfs import statements from the proto files
> 2.) copy the StorageTypeProto and create an Hdds version of it (without 
> PROVIDED)






[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-09-05 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: HDFS-11520.009.patch

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.004.patch, HDFS-11520.005.patch, HDFS-11520.007.patch, 
> HDFS-11520.008.patch, HDFS-11520.009.patch, HDFS-11520.HDFS-8707.000.patch, 
> HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Commented] (HDFS-13845) RBF: The default MountTableResolver should fail resolving multi-destination paths

2018-09-05 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604434#comment-16604434
 ] 

Brahma Reddy Battula commented on HDFS-13845:
-

[~hfyang20071] thanks for updating the patch.

Can you rebase the patch and upload it again?

> RBF: The default MountTableResolver should fail resolving multi-destination 
> paths
> -
>
> Key: HDFS-13845
> URL: https://issues.apache.org/jira/browse/HDFS-13845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13845.001.patch, HDFS-13845.002.patch, 
> HDFS-13845.003.patch, HDFS-13845.004.patch
>
>
> When we use the default MountTableResolver to resolve the path, we cannot get 
> the destination paths for the default DestinationOrder.HASH. 
> {code:java}
> // Some comments here
> private static PathLocation buildLocation(
>   ..
> List<RemoteLocation> locations = new LinkedList<>();
> for (RemoteLocation oneDst : entry.getDestinations()) {
>   String nsId = oneDst.getNameserviceId();
>   String dest = oneDst.getDest();
>   String newPath = dest;
>   if (!newPath.endsWith(Path.SEPARATOR) && !remainingPath.isEmpty()) {
> newPath += Path.SEPARATOR;
>   }
>   newPath += remainingPath;
>   RemoteLocation remoteLocation = new RemoteLocation(nsId, newPath, path);
>   locations.add(remoteLocation);
> }
> DestinationOrder order = entry.getDestOrder();
> return new PathLocation(srcPath, locations, order);
>   }
> {code}
> The default order will be hash, but the HashFirstResolver will not be invoked 
> to order the location.
> It is ambiguous for the MountTableResolver that we will see the HASH order in 
> the web ui for multi-destinations path but we cannot get the result.
> In my opinion, the MountTableResolver will be a simple resolver to implement 
> 1 to 1 not including the 1 to n destinations. So we should check the 
> buildLocation. If the entry has multi destinations, we should reject it.
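Below is a minimal, self-contained sketch of the proposed guard. The class and
method names are illustrative only (this is not the actual patch); it simply
shows the check a 1-to-1 resolver would apply before building the location.

{code:java}
import java.io.IOException;
import java.util.List;

// Simplified illustration of the proposed check in buildLocation():
// a 1-to-1 resolver should fail fast on multi-destination entries.
final class SingleDestinationGuard {
  static void checkSingleDestination(String srcPath, List<String> destinations)
      throws IOException {
    if (destinations.size() > 1) {
      throw new IOException("Mount point " + srcPath + " has "
          + destinations.size() + " destinations; the default "
          + "MountTableResolver only supports 1-to-1 mappings");
    }
  }
}
{code}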



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13815) RBF: Add check to order command

2018-09-05 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604442#comment-16604442
 ] 

Yiqun Lin commented on HDFS-13815:
--

Attaching the patch to fix the related failed UT, which is caused just by the 
change of the exit code.

> RBF: Add check to order command
> ---
>
> Key: HDFS-13815
> URL: https://issues.apache.org/jira/browse/HDFS-13815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, 
> HDFS-13815-003.patch, HDFS-13815-004.patch, HDFS-13815-005.patch, 
> HDFS-13815-006.patch, HDFS-13815-007.patch, HDFS-13815-008.patch, 
> HDFS-13815-009.patch, HDFS-13815-010.patch, HDFS-13815-011.patch
>
>
> No check is being done on the order option.
> The command says the mount table was successfully updated even if we don't 
> specify the order option, and the mount table is not actually updated.
> Execute the dfsrouteradmin update command with the below scenarios:
> 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
> 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
> 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6  -ord RANDOM
> 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6  -orde RANDOM
>  
> The console message says "Successfully updated mount point", but it is not 
> updated in the mount table.
>  
> Expected Result:
> An exception on the console, since the order option is missing/not written 
> properly.
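A hedged sketch of the kind of validation being asked for above. The option
list and messages are assumptions for illustration, not the committed patch:
the idea is that unrecognized option tokens such as "-or", "-ord" or "-orde"
should cause the command to fail instead of silently reporting success.

{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative argument check: reject unknown option tokens instead of
// ignoring them and claiming the mount table was updated.
final class UpdateArgsCheck {
  private static final List<String> KNOWN_OPTIONS = Arrays.asList(
      "-order", "-readonly", "-owner", "-group", "-mode");

  static void validateOptions(String[] args) {
    for (String arg : args) {
      if (arg.startsWith("-") && !KNOWN_OPTIONS.contains(arg)) {
        throw new IllegalArgumentException("Unrecognized option: " + arg);
      }
    }
  }
}
{code}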



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604588#comment-16604588
 ] 

Elek, Marton commented on HDDS-378:
---

This seems to be a big patch but in fact it's a very small change:

1. StorageType is cloned. By using our own StorageType the proto files no 
longer depend on hdfs.proto, which makes it easier to publish the ozone 
protofiles (and manage the release).

2. The hadoop-hdfs-client dependency is removed. It requires some minor changes 
(eg. using the byte/string conversion utility from DfsUtils instead of the 
client utils) but it also makes the release easier.

3. Some other unused rpc classes are removed (ScmLocatedBlock, 
LocatedContainer).
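For illustration, the cloned type could look roughly like the sketch below. This
is a plain Java rendering of the idea and the enum name is an assumption; the
real change lives in the .proto files. It simply mirrors the HDFS storage type
values minus PROVIDED.

{code:java}
// Hypothetical HDDS-side storage type, mirroring the HDFS values
// except PROVIDED, so ozone/hdds protos no longer import hdfs.proto.
public enum HddsStorageType {
  DISK,
  SSD,
  ARCHIVE,
  RAM_DISK
}
{code}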

> Remove dependencies between hdds/ozone and hdfs proto files
> ---
>
> Key: HDDS-378
> URL: https://issues.apache.org/jira/browse/HDDS-378
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-378.001.patch
>
>
> It would be great to make the hdds/ozone proto files independent from hdfs 
> proto files. It would help us to start ozone with multiple versions of 
> hadoop.
> It also helps to make artifacts from the hdds protos: HDDS-220
>  Currently we have a few unused "hdfs.proto" imports in the proto files and we 
> use the StorageTypeProto from hdfs:
> {code}
> cd hadoop-hdds
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
> common/src/main/proto/StorageContainerLocationProtocol.proto:import 
> "hdfs.proto";
>  cd ../hadoop-ozone
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
> common/src/main/proto/OzoneManagerProtocol.proto:required 
> hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
> common/src/main/proto/OzoneManagerProtocol.proto:optional 
> hadoop.hdfs.StorageTypeProto storageType = 6;
> {code}
> I propose to 
> 1.) remove the hdfs import statements from the proto files
> 2.) Copy the StorageTypeProto and create a Hdds version from it (without 
> PROVIDED)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-05 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604650#comment-16604650
 ] 

Ajay Kumar commented on HDDS-400:
-

In the case of a new node (either a completely new node or a dead node coming 
back live), we should detect all of its reported containers as new containers, 
so the same logic should handle it. When a node goes dead, we should remove its 
containers from ContainerStateMap so that when it becomes live again its 
containers are detected as new.

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch
>
>
> The current container replication handler compares the reported containers with 
> the previous report. It handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count. If a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all the reported containers have 
> enough replicas. 
> We can check the performance implications with genesis, but as a first 
> implementation I think it could be good enough. 
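A rough, self-contained sketch of the kind of check described above. Class and
method names are assumptions, not the actual patch: for every reported
container, compare the number of known replicas against the expected
replication factor and flag under-replication.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Simplified model of a cluster-wide replica count check performed when a
// container report is processed.
final class ReplicaCountCheck {
  // containerId -> datanodes currently known to hold a replica
  private final Map<Long, Set<UUID>> replicas = new HashMap<>();

  void onContainerReport(UUID datanode, Iterable<Long> reportedContainers,
      int expectedReplicaCount) {
    for (long containerId : reportedContainers) {
      replicas.computeIfAbsent(containerId, k -> new HashSet<>()).add(datanode);
      int count = replicas.get(containerId).size();
      if (count < expectedReplicaCount) {
        System.out.println("Container " + containerId + " is under-replicated: "
            + count + "/" + expectedReplicaCount);
      }
    }
  }
}
{code}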



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13860) Space character in the path is shown as "+" while creating dirs in WebHDFS

2018-09-05 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604681#comment-16604681
 ] 

Shashikant Banerjee commented on HDFS-13860:


The test failures seem to be unrelated as most of the tests work successfully 
on my local setup. TestWebHdfsTimeouts#testConnectTimeout fails with/without 
patch.

> Space character in the path is shown as "+" while creating dirs in WebHDFS 
> ---
>
> Key: HDFS-13860
> URL: https://issues.apache.org/jira/browse/HDFS-13860
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13860.00.patch, HDFS-13860.01.patch
>
>
> $ ./hdfs dfs -mkdir webhdfs://127.0.0.1/tmp1/"file 1"
> 2018-08-23 15:16:08,258 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> $ ./hdfs dfs -ls webhdfs://127.0.0.1/tmp1
> 2018-08-23 15:16:21,244 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Found 1 items
> drwxr-xr-x   - sbanerjee hadoop          0 2018-08-23 15:16 
> webhdfs://127.0.0.1/tmp1/file+1
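The symptom is consistent with form-style URL encoding being applied to the
path: java.net.URLEncoder encodes a space as '+', whereas a URI path segment
should carry '%20'. A small standalone illustration of the difference (not the
WebHDFS client code itself):

{code:java}
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SpaceEncodingDemo {
  public static void main(String[] args) throws Exception {
    // Form encoding: the space becomes '+', which later shows up literally
    // in the directory name ("file+1").
    System.out.println(URLEncoder.encode("file 1", StandardCharsets.UTF_8.name()));
    // Proper URI path encoding: the space becomes %20 and round-trips correctly.
    System.out.println(new URI("webhdfs", "127.0.0.1", "/tmp1/file 1", null).getRawPath());
  }
}
{code}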



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-05 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v12.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v10.patch, 
> HDFS-13695.v11.patch, HDFS-13695.v12.patch, HDFS-13695.v2.patch, 
> HDFS-13695.v3.patch, HDFS-13695.v4.patch, HDFS-13695.v5.patch, 
> HDFS-13695.v6.patch, HDFS-13695.v7.patch, HDFS-13695.v8.patch, 
> HDFS-13695.v9.patch
>
>
> Move logging to slf4j in HDFS package
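For reference, the typical shape of such a migration (a generic
commons-logging to slf4j example, not a specific hunk from this patch):

{code:java}
// Before: commons-logging
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;
// private static final Log LOG = LogFactory.getLog(Example.class);

// After: slf4j, which also allows {} placeholders instead of string concatenation.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void run(String path) {
    LOG.info("Processing {}", path);
  }
}
{code}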



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.007.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.
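Conceptually, the watcher replaces the explicit ack RPC with tracking of
pending commands and completion events. A very simplified, hypothetical sketch
of that pattern follows (this is not the actual HDDS EventWatcher API):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of the event-watcher idea: remember pending
// deleteBlocks commands and retire them when a completion event arrives;
// anything still pending after a timeout can be re-sent.
final class DeleteBlocksWatcher {
  private final Map<Long, Long> pendingSinceMillis = new ConcurrentHashMap<>();

  void onCommandSent(long commandId) {
    pendingSinceMillis.put(commandId, System.currentTimeMillis());
  }

  void onCompletionEvent(long commandId) {
    pendingSinceMillis.remove(commandId);
  }

  boolean shouldResend(long commandId, long timeoutMillis) {
    Long sent = pendingSinceMillis.get(commandId);
    return sent != null && System.currentTimeMillis() - sent > timeoutMillis;
  }
}
{code}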



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604720#comment-16604720
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v7 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-297:
-
Attachment: HDDS-297.010.patch

> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch
>
>
> Pipelines in Ozone are created out of a group of nodes depending upon the 
> replication factor and type. These pipelines provide a transport protocol for 
> data transfer.
> In order to detect any pipeline failure, SCM should receive pipeline 
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604754#comment-16604754
 ] 

Hadoop QA commented on HDDS-378:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 15m 
26s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 15m 26s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 26s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
2m  6s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  7s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  4s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (HDDS-397) Handle deletion for keys with no blocks

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604782#comment-16604782
 ] 

Hadoop QA commented on HDDS-397:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-397 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938512/HDDS-397.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fa42160a7bb1 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e780556 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/975/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/975/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> 

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604795#comment-16604795
 ] 

Chao Sun commented on HDFS-13791:
-

Thanks [~xkrogen]. The change to use {{SummaryStatistics}} looks good. 
Regarding the change to {{LimitedFrequencyLogHelper}}, I'm wondering whether, 
instead of having a parent log helper, we could keep a map from action name to 
the actions in the log helper, and then do something like:

{code}
  LogAction preLogAction =
  loadEditLogHelper.logAtTime("pre", startTime, maxTxnsToRead);
  ...
  LogAction postLogAction = loadEditLogHelper
  .log("post", numEdits, edits.length(), monotonicNow() - startTime);
{code}


bq. Also, since this has refactoring, we probably need to put the relevant 
portion of it into trunk (i.e. the new class and the FSNamesystemLock changes).

Yes, I agree we can put this in trunk first.
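A rough sketch of what such a map-based helper could look like. It is purely
illustrative (this class and its names are assumptions, not the patch): each
action name keeps its own last-logged timestamp so unrelated statements are
throttled independently.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative rate-limited log helper keyed by action name: a statement is
// only emitted if at least minIntervalMillis has passed since the last one
// recorded under the same name.
final class PerActionLogLimiter {
  private final long minIntervalMillis;
  private final Map<String, Long> lastLogTime = new ConcurrentHashMap<>();

  PerActionLogLimiter(long minIntervalMillis) {
    this.minIntervalMillis = minIntervalMillis;
  }

  boolean shouldLog(String actionName) {
    long now = System.currentTimeMillis();
    Long last = lastLogTime.get(actionName);
    if (last == null || now - last >= minIntervalMillis) {
      lastLogTime.put(actionName, now);
      return true;
    }
    return false;
  }
}
{code}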

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-09-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-378:
--
Fix Version/s: 0.2.1

> Remove dependencies between hdds/ozone and hdfs proto files
> ---
>
> Key: HDDS-378
> URL: https://issues.apache.org/jira/browse/HDDS-378
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-378.001.patch
>
>
> It would be great to make the hdds/ozone proto files independent from hdfs 
> proto files. It would help us to start ozone with multiple versions of 
> hadoop.
> It also helps to make artifacts from the hdds protos: HDDS-220
>  Currently we have a few unused "hdfs.proto" imports in the proto files and we 
> use the StorageTypeProto from hdfs:
> {code}
> cd hadoop-hdds
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
> common/src/main/proto/StorageContainerLocationProtocol.proto:import 
> "hdfs.proto";
>  cd ../hadoop-ozone
> grep -r "hdfs" --include="*.proto"
> common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
> common/src/main/proto/OzoneManagerProtocol.proto:required 
> hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
> common/src/main/proto/OzoneManagerProtocol.proto:optional 
> hadoop.hdfs.StorageTypeProto storageType = 6;
> {code}
> I propose to 
> 1.) remove the hdfs import statements from the proto files
> 2.) Copy the StorageTypeProto and create a Hdds version from it (without 
> PROVIDED)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604784#comment-16604784
 ] 

Tsz Wo Nicholas Sze commented on HDDS-297:
--

The patch looks good.  Some comments:

- Since ClosePipelineInfo has a message, addPipelineActionIfAbsent(..) may not 
work well -- there could be two CLOSE actions with different messages getting 
added.
-- BTW, what are the possible PipelineAction(s)?  Currently, we only have one.

- Node2PipelineMap.getPipelines should not call computeIfPresent(..).  It will 
compute a new mapping.  The code should be something like below.
{code}
final Set s = dn2PipelineMap.get(datanode);
return s != null? Collections.unmodifiableSet(s): Collections.emptySet();
{code}


> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch
>
>
> Pipelines in Ozone are created out of a group of nodes depending upon the 
> replication factor and type. These pipelines provide a transport protocol for 
> data transfer.
> In order to detect any pipeline failure, SCM should receive pipeline 
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604815#comment-16604815
 ] 

Hadoop QA commented on HDFS-13868:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938372/HDFS-13868.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 07dfd2e69a6f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-13868:

Status: Patch Available  (was: In Progress)

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch, HDFS-13868.002.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snapshot", 
> allowSnapshot() on it, and create a snapshot named "snap3" for it to reach 
> NPE.
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3"
> # Note that I intentionally typed the wrong parameter name for 
> "oldsnapshotname" above to cause NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604638#comment-16604638
 ] 

Hadoop QA commented on HDFS-13840:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938471/HDFS-13840-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5927dfb20df6 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df0d61e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24966/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24966/testReport/ |
| Max. process+thread count | 3939 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-222) Remove hdfs command line from ozone distribution.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604651#comment-16604651
 ] 

Hadoop QA commented on HDDS-222:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
36s{color} | {color:red} The patch generated 2 new + 1 unchanged - 1 fixed = 3 
total (was 2) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 11s{color} | {color:orange} The patch generated 2 new + 114 unchanged - 0 
fixed = 116 total (was 114) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 15s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskChecker |
|   | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-222 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937739/HDDS-222.004.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  xml  |
| uname | Linux ec78c8b8db0c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df0d61e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| shellcheck | 

[jira] [Commented] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604654#comment-16604654
 ] 

Hadoop QA commented on HDDS-303:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdds: The patch generated 1 new + 0 
unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938482/HDDS-303.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f3591cbcf7c9 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-397:
-
Attachment: HDDS-397.001.patch

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-05 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604721#comment-16604721
 ] 

Anu Engineer commented on HDDS-351:
---

[~ajayydv] Thanks for updating the patch. I realize that I may not have been 
very clear in my comments. Here is how I think we should test 
OzoneChillModeManager.

# OzoneChillModeManager is capable of listening to Events.
# We have an initial state -- make OzoneManager read the goal states too; that 
is, instead of querying an internal state like ContainerStateManager.count, you 
pass that into your class explicitly.
# Now you can set up an OzoneManager more easily, with goal states in your test 
case.
# For example, you need to test the case where the container count is 0. Since 
you can initialize the OzoneManager with the expected container count, it is 
easier to create a test.
# Now you can create a case where, say, the expected container count is 1. You 
can then send an event from the EventQueue, which makes it easy to send a 
container report. 
# Then you can send a container report with 2 containers, even while the 
cluster has only 1 container. This is a negative test case that allows you to 
verify the behavior of OzoneManager.

Bottom line: I think we should stop using MiniOzoneCluster for unit tests. We 
should use it only for integration tests. This limitation makes us think more 
clearly about how unit tests should be written.
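As a purely hypothetical sketch of the testing pattern described above (the
class name, the expected-container-count constructor argument, and the report
method are all assumptions, not the real SCM/HDDS APIs): the chill mode logic
is handed its goal state explicitly and driven by plain method calls standing
in for events, so no MiniOzoneCluster is needed.

{code:java}
// Hypothetical, self-contained model of event-driven chill mode checks.
final class ChillModeModel {
  private final int expectedContainerCount; // goal state injected explicitly
  private int reportedContainerCount;

  ChillModeModel(int expectedContainerCount) {
    this.expectedContainerCount = expectedContainerCount;
  }

  // Stand-in for handling a container report event from the EventQueue.
  void onContainerReport(int containersInReport) {
    reportedContainerCount += containersInReport;
  }

  boolean inChillMode() {
    // Leave chill mode once the reported containers reach the goal state.
    return reportedContainerCount < expectedContainerCount;
  }

  public static void main(String[] args) {
    ChillModeModel model = new ChillModeModel(1); // goal state: 1 container
    System.out.println(model.inChillMode());      // true, nothing reported yet
    model.onContainerReport(1);
    System.out.println(model.inChillMode());      // false once the goal is met
    model.onContainerReport(2);                   // over-reporting: the negative
                                                  // case a real test would assert on
  }
}
{code}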


 

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-397:
-
Status: Patch Available  (was: Open)

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-05 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604751#comment-16604751
 ] 

Hrishikesh Gadre commented on HDFS-13820:
-

Thanks [~xiaochen]. Note that the unit test failure is unrelated to this patch 
and is covered as part of HDFS-13662. I will address this review comment and 
upload a new patch soon.

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13662) TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky

2018-09-05 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre reassigned HDFS-13662:
---

Assignee: Hrishikesh Gadre

> TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky
> 
>
> Key: HDFS-13662
> URL: https://issues.apache.org/jira/browse/HDFS-13662
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Wei-Chiu Chuang
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> The test failed in this precommit for a patch that only modifies an unrelated 
> test.
> https://builds.apache.org/job/PreCommit-HDFS-Build/24401/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocal/testStatisticsForErasureCodingRead/
> This test also failed occasionally in our internal test.
> {noformat}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testStatisticsForErasureCodingRead(TestBlockReaderLocal.java:842)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10919) Provide admin/debug tool to dump out info of a given block

2018-09-05 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-10919:
-

Assignee: Shweta

> Provide admin/debug tool to dump out info of a given block
> --
>
> Key: HDFS-10919
> URL: https://issues.apache.org/jira/browse/HDFS-10919
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
>
> We have fsck to find out blocks associated with a file, which is nice.
> Sometimes we see trouble with a specific block and would like to collect info
> about this block, such as
> * what file this block belong to, 
> * where the replicas of this block are located, 
> * whether the block is EC coded; 
> * if a block is EC coded, whether it's a data block, or code
> * if a block is EC coded, what's the codec.
> * if a block is EC coded, what's the block group
> * for the block group, what are the other blocks
> Create this jira to provide such a util, as dfsadmin, or a debug tool.
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10

2018-09-05 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604733#comment-16604733
 ] 

Shweta commented on HDFS-13882:
---

Thanks for the patch [~knanasi]. As seen above, Jenkins complains about unit
test failures. Please check whether they are related to your changes and
whether these tests run fine locally for you.
Also, TestWebHdfsTimeouts#testAuthUrlConnectTimeout has failed in the past for
https://issues.apache.org/jira/browse/HDFS-10905. Please check whether it is
relevant in any way.

> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 
> 10
> ---
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> More and more we are seeing cases where customers are running into the
> java.io.IOException "Unable to close file because the last block does not
> have enough number of replicas" on client file closure. The common workaround
> is to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to
> 10.
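
For reference, a minimal sketch of the workaround applied on the client side.
The property is usually set in hdfs-site.xml instead; the code below only
shows the same key being raised to 10 programmatically, and the file path is
an arbitrary example.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocateFollowingBlockRetriesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Raise the retry count from the current default of 5 to 10.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);

    try (FileSystem fs = FileSystem.get(conf)) {
      // Files written through this FileSystem use the higher retry count
      // while waiting for the last block to reach the required replication.
      fs.create(new Path("/tmp/retries-example")).close();
    }
  }
}
{code}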



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604811#comment-16604811
 ] 

Hadoop QA commented on HDDS-297:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 32s{color} | {color:orange} root: The patch generated 1 new + 22 unchanged - 
5 fixed = 23 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 17s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense 

[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604676#comment-16604676
 ] 

Shashikant Banerjee commented on HDDS-297:
--

Thanks [~nandakumar131] for the review. Patch v10 addresses your review
comments.

..Why do we need the old raftGroup for re-initialization?

We need the old RaftGroupId during reinitialization because, with MultiRaft
support in Ratis, reinitialize removes the old RaftGroupId and adds a new one
for the RaftServerImpl instance. Below is the code for reference:
{code:java}
public CompletableFuture<RaftClientReply> reinitializeAsync(
    ReinitializeRequest request) throws IOException {
  LOG.info("{}: reinitialize* {}", getId(), request);
  if (!reinitializeRequest.compareAndSet(null, request)) {
    throw new IOException("Another reinitialize is already in progress.");
  }
  final RaftGroupId oldGroupId = request.getRaftGroupId();
  return getImplFuture(oldGroupId)
      .thenAcceptAsync(RaftServerImpl::shutdown)
      .thenAccept(_1 -> impls.remove(oldGroupId))
      .thenCompose(_1 -> impls.addNew(request.getGroup()))
      .thenApply(newImpl -> {
        LOG.debug("{}: newImpl = {}", getId(), newImpl);
        final boolean started = newImpl.start();
        Preconditions.assertTrue(started,
            () -> getId() + ": failed to start a new impl: " + newImpl);
        return new RaftClientReply(request, newImpl.getCommitInfos());
      })
      .whenComplete((_1, throwable) -> {
        if (throwable != null) {
          impls.remove(request.getGroup().getGroupId());
          LOG.warn(getId() + ": Failed reinitialize* " + request, throwable);
        }
        reinitializeRequest.set(null);
      });
}
{code}

> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch
>
>
> Pipelines in Ozone are created out of a group of nodes depending upon the
> replication factor and type. These pipelines provide a transport protocol for
> data transfer.
> In order to detect any failure of a pipeline, SCM should receive pipeline
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-05 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13820:

Attachment: HDFS-13820-002.patch

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch, HDFS-13820-002.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every
> 30 seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-303:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution. I've committed the patch to 
trunk. 

> Removing logic to identify containers to be closed from SCM
> ---
>
> Key: HDDS-303
> URL: https://issues.apache.org/jira/browse/HDDS-303
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-303.000.patch
>
>
> After HDDS-287, we identify the containers to be closed in datanode and send 
> Close ContainerAction to SCM. The code to identify containers to be closed in 
> SCM is redundant and can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-403) infoKey shows wrong "createdOn", "modifiedOn" metadata for key

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-403:
---
Labels: newbie  (was: )

> infoKey shows wrong "createdOn", "modifiedOn" metadata for key
> --
>
> Key: HDDS-403
> URL: https://issues.apache.org/jira/browse/HDDS-403
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> 1. ran putKey command for a file
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -putKey 
> /test-vol1/test-bucket1/file1 -file /etc/passwd -v
> 2018-09-05 10:25:11,498 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Volume Name : test-vol1
> Bucket Name : test-bucket1
> Key Name : file1
> File Hash : 8164cc3d5b05c44b73a6277661aa4645
> 2018-09-05 10:25:12,377 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-09-05 10:25:12,390 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:12,402 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:12,866 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> 2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:13,645 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -getKey 
> /test-vol1/test-bucket1/file1 -file getkey3
> 2018-09-05 10:25:22,020 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-09-05 10:25:22,778 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-09-05 10:25:22,790 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:22,800 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-09-05 10:25:22,804 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-05 10:25:22,805 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:23,250 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> 2018-09-05 10:25:24,066 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
> 2. Ran infoKey on that key
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -infoKey 
> /test-vol1/test-bucket1/file1 -v
> 2018-09-05 10:54:42,053 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Volume Name : test-vol1
> Bucket Name : test-bucket1
> Key Name : file1
> {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Sat, 14 Dec +114522267 00:51:17 GMT",
>  "modifiedOn" : "Fri, 09 Jun +50648 04:30:12 GMT",
>  "size" : 4659,
>  "keyName" : "file1",
>  "keyLocations" : [ {
>  "containerID" : 16,
>  "localID" : 1536143112267,
>  "length" : 4659,
>  "offset" : 0
>  } ]
> }{noformat}
> "createdOn" and "modifiedOn" metadata are incorrect.
> Here is the current date:
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# date
> Wed Sep 5 10:54:52 UTC 2018{noformat}
> Also, the "md5hash" for the key is showing as null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604838#comment-16604838
 ] 

Hadoop QA commented on HDDS-325:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 36s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 33s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.command.TestCommandStatusReportHandler |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-09-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604892#comment-16604892
 ] 

Xiaoyu Yao commented on HDFS-13697:
---

Thanks [~xiaochen] for the heads up. Looking at the patch now...

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.09.patch, HDFS-13697.10.patch, HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack
> might not have a doAs privileged execution call (in the DFSClient for
> example). This results in losing the proxy user from the UGI, as
> UGI.getCurrentUser finds no AccessControllerContext and does a re-login for
> the login user only (a minimal doAs sketch is appended after the stack trace
> below).
> This can cause the following, for example: if we have set up the oozie user
> to be entitled to perform actions on behalf of example_user, but oozie is
> forbidden to decrypt any EDEK (for security reasons), then due to the above
> issue the example_user entitlements are lost from the UGI and the following
> error is reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
> [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
> [encrypted_key]!!
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>  at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832)
>  at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209)
>  at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205)
>  at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>  at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205)
> 
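
A minimal sketch of the doAs pattern the description refers to, assuming a
hypothetical decryptEncryptedKeyAsCurrentUser() helper; only the
UserGroupInformation calls (getLoginUser, createProxyUser, doAs) are real
Hadoop APIs here. Without the doAs wrapper, code deep in the stack that calls
UserGroupInformation.getCurrentUser() resolves to the login user instead of
the proxy user.

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserDecryptSketch {
  byte[] decryptAsProxyUser() throws Exception {
    UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
    UserGroupInformation proxyUser =
        UserGroupInformation.createProxyUser("example_user", loginUser);

    // Inside doAs, getCurrentUser() returns the proxy user, so the KMS ACL
    // check sees example_user rather than the service user (e.g. oozie).
    return proxyUser.doAs(
        (PrivilegedExceptionAction<byte[]>) () ->
            decryptEncryptedKeyAsCurrentUser());   // hypothetical helper
  }

  private byte[] decryptEncryptedKeyAsCurrentUser() {
    // Placeholder for the KMS decryptEncryptedKey call made on behalf of
    // the current (proxy) user.
    return new byte[0];
  }
}
{code}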

[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604921#comment-16604921
 ] 

Hadoop QA commented on HDDS-297:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDDS-297 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938535/HDDS-297.011.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/977/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch, HDDS-297.011.patch
>
>
> Pipelines in Ozone are created out of a group of nodes depending upon the
> replication factor and type. These pipelines provide a transport protocol for
> data transfer.
> In order to detect any failure of a pipeline, SCM should receive pipeline
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-05 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v13.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v10.patch, 
> HDFS-13695.v11.patch, HDFS-13695.v12.patch, HDFS-13695.v13.patch, 
> HDFS-13695.v2.patch, HDFS-13695.v3.patch, HDFS-13695.v4.patch, 
> HDFS-13695.v5.patch, HDFS-13695.v6.patch, HDFS-13695.v7.patch, 
> HDFS-13695.v8.patch, HDFS-13695.v9.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604861#comment-16604861
 ] 

Siyao Meng commented on HDFS-13868:
---

FYI there is a TestWebHDFS#testWebHdfsSnapshotDiff(), added in HDFS-13052.

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch, HDFS-13868.002.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snapshot", 
> allowSnapshot() on it, and create a snapshot named "snap3" for it to reach 
> NPE.
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3"
> # Note that I intentionally typed the wrong parameter name for 
> "oldsnapshotname" above to cause NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13893:
-
Labels: newbie  (was: )

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: newbie
>
> {{Scenario:}}
> 1. Run the Disk Balancer commands, passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected output:
> Disk balancer commands should fail if we pass any invalid or extra
> arguments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604913#comment-16604913
 ] 

Shashikant Banerjee commented on HDDS-297:
--

Thanks [~szetszwo] for the review. Patch v11 addresses your review comments.

..what are the possible PipelineAction(s)? Currently, we only have one.

Yes, we currently have only one action, which is closing the pipeline. With
2-node failure handling, we may (or may not) need to add more actions, such as
moving the pipeline to a quasi-closed state.

The testCloseContainerEventWithRatis failure probably happens because
MockNodeManager is used for initializing the Ratis pipeline, and that is
failing.

I will address the failure in a separate jira.

> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch, HDDS-297.011.patch
>
>
> Pipelines in Ozone are created out of a group of nodes depending upon the
> replication factor and type. These pipelines provide a transport protocol for
> data transfer.
> In order to detect any failure of a pipeline, SCM should receive pipeline
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604935#comment-16604935
 ] 

Hadoop QA commented on HDFS-13695:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 213 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
53s{color} | {color:green} hadoop-hdfs-project generated 0 new + 466 unchanged 
- 113 fixed = 466 total (was 579) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 12s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
6737 unchanged - 81 fixed = 6742 total (was 6818) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
32s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938510/HDFS-13695.v12.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8a5d6d1c83a4 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-05 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604971#comment-16604971
 ] 

Botong Huang commented on HDFS-13695:
-

Almost there! Please fix the last few checkstyle warnings, e.g. adding "final"
to the logger.

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v10.patch, 
> HDFS-13695.v11.patch, HDFS-13695.v12.patch, HDFS-13695.v2.patch, 
> HDFS-13695.v3.patch, HDFS-13695.v4.patch, HDFS-13695.v5.patch, 
> HDFS-13695.v6.patch, HDFS-13695.v7.patch, HDFS-13695.v8.patch, 
> HDFS-13695.v9.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-05 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v14.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v10.patch, 
> HDFS-13695.v11.patch, HDFS-13695.v12.patch, HDFS-13695.v13.patch, 
> HDFS-13695.v14.patch, HDFS-13695.v2.patch, HDFS-13695.v3.patch, 
> HDFS-13695.v4.patch, HDFS-13695.v5.patch, HDFS-13695.v6.patch, 
> HDFS-13695.v7.patch, HDFS-13695.v8.patch, HDFS-13695.v9.patch
>
>
> Move logging to slf4j in HDFS package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604819#comment-16604819
 ] 

Hadoop QA commented on HDFS-13868:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938372/HDFS-13868.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 996c265c831d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604821#comment-16604821
 ] 

Xiaoyu Yao commented on HDDS-303:
-

Thanks [~nandakumar131] for working on this. The patch looks good to me, +1.

I just have one nit, which I will fix at commit:

HddsDispatcher.java

Line 29: unused imports

> Removing logic to identify containers to be closed from SCM
> ---
>
> Key: HDDS-303
> URL: https://issues.apache.org/jira/browse/HDDS-303
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-303.000.patch
>
>
> After HDDS-287, we identify the containers to be closed in datanode and send 
> Close ContainerAction to SCM. The code to identify containers to be closed in 
> SCM is redundant and can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604977#comment-16604977
 ] 

Hadoop QA commented on HDFS-13820:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 502 unchanged - 0 fixed = 508 total (was 502) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938520/HDFS-13820-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux a4bd989366aa 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9af96d4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-05 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: HDFS-13749-HDFS-12943.001.patch

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13892) Disk Balancer : Invalid exit code for disk balancer execute command

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13892:
-
Labels: newbie  (was: )

> Disk Balancer : Invalid exit code for disk balancer execute command
> ---
>
> Key: HDFS-13892
> URL: https://issues.apache.org/jira/browse/HDFS-13892
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: newbie
>
> {{Scenario:-}}
> 1. Write about 5 GB of data to a Datanode that has a single DISK.
>  2. Add one more non-empty disk to the above Datanode.
>  3. Run the plan command for that specific Datanode.
>  4. Run the execute command with the above plan file.
>  According to the Datanode log, the execute step did not actually happen:
> {noformat}
> ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination 
> volume: file:/Test_Disk/DISK2/ does not have enough space to accommodate a 
> block. Block Size: 268435456 Exiting from copyBlocks.
> {noformat}
> 5. Check the exit code of the execute command; it displays 0.
> {{Expected Result :-}}
> 1. The exit code should be 1, because the execution did not happen (a rough 
> sketch of this behaviour follows below the description).
>  2. In this scenario the error message should also be printed on the console, 
> so the customer/user knows that the execute step did not happen.
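
A rough, hedged sketch of the expected behaviour (the interface and method names
below are hypothetical stand-ins, not the actual DiskBalancer code): the execute
step should map a failed copyBlocks run to a console error and exit code 1.

{code:java}
// Sketch only: BlockMover stands in for whatever component runs copyBlocks.
public class ExecuteExitCodeSketch {

  interface BlockMover {
    void copyBlocks() throws Exception;
  }

  static int runExecute(BlockMover mover) {
    try {
      mover.copyBlocks();
      return 0;                         // plan executed successfully
    } catch (Exception e) {
      // Print the failure on the console so the user sees it immediately,
      // and report it through a non-zero exit code.
      System.err.println("Execute failed: " + e.getMessage());
      return 1;
    }
  }

  public static void main(String[] args) {
    System.exit(runExecute(() -> {
      throw new Exception("Destination volume does not have enough space");
    }));
  }
}
{code}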



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-9059:
-
  Assignee: (was: Jagadesh Kiran N)

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9059:

Labels: newbie  (was: )

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-297) Add pipeline actions in Ozone

2018-09-05 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-297:
-
Attachment: HDDS-297.011.patch

> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch, 
> HDDS-297.003.patch, HDDS-297.004.patch, HDDS-297.005.patch, 
> HDDS-297.006.patch, HDDS-297.007.patch, HDDS-297.008.patch, 
> HDDS-297.009.patch, HDDS-297.010.patch, HDDS-297.011.patch
>
>
> Pipelines in Ozone are created out of a group of nodes, depending upon the 
> replication factor and type. These pipelines provide a transport protocol for 
> data transfer.
> In order to detect any pipeline failure, SCM should receive pipeline 
> reports from Datanodes and process them to identify the various raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-303) Removing logic to identify containers to be closed from SCM

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604912#comment-16604912
 ] 

Hudson commented on HDDS-303:
-

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #14884 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14884/])
HDDS-303. Removing logic to identify containers to be closed from SCM. (xyao: 
rev 8286bf2d1fe7a9051ee93ca6e4dd13e5348d00b8)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerMapping.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (delete) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/closer/TestContainerCloser.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java


> Removing logic to identify containers to be closed from SCM
> ---
>
> Key: HDDS-303
> URL: https://issues.apache.org/jira/browse/HDDS-303
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-303.000.patch
>
>
> After HDDS-287, we identify the containers to be closed in the datanode and 
> send a close ContainerAction to SCM. The code that identifies containers to be 
> closed in SCM is redundant and can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604931#comment-16604931
 ] 

Wei-Chiu Chuang commented on HDFS-13868:


Thanks [~pranay_singh], the patch looks good to me overall. It would be even 
better to add webhdfs tests. As [~smeng] mentioned, we should be able to add a 
similar test in TestWebHDFS#testWebHdfsSnapshotDiff().
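
For illustration, the kind of parameter guard that would avoid the NPE (a minimal
sketch with hypothetical method names and placement, not the attached patch):

{code:java}
// Sketch only: reject a missing or empty oldsnapshotname/snapshotname with a
// clear error instead of letting it surface later as a NullPointerException.
public class SnapshotDiffParamCheck {

  static void validateSnapshotDiffParams(String oldSnapshotName,
      String snapshotName) {
    if (oldSnapshotName == null || oldSnapshotName.isEmpty()) {
      throw new IllegalArgumentException(
          "Missing required parameter: oldsnapshotname");
    }
    if (snapshotName == null || snapshotName.isEmpty()) {
      throw new IllegalArgumentException(
          "Missing required parameter: snapshotname");
    }
  }

  public static void main(String[] args) {
    // Mirrors the curl reproduction quoted below: snapshotname is given,
    // oldsnapshotname is missing.
    validateSnapshotDiffParams(null, "snap3");
  }
}
{code}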

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch, HDFS-13868.002.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snapshot", 
> allowSnapshot() on it, and create a snapshot named "snap3" for it to reach 
> NPE.
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3"
> # Note that I intentionally typed the wrong parameter name for 
> "oldsnapshotname" above to cause NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604942#comment-16604942
 ] 

Hadoop QA commented on HDFS-13868:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.namenode.TestFSImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938372/HDFS-13868.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0e7f57d07210 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605002#comment-16605002
 ] 

Chao Sun commented on HDFS-13749:
-

Attached patch v1. [~xkrogen]: it turns out it's not easy to reuse 
{{NameNodeProxies}} since it is in a different module. Here I still call 
{{HAServiceProtocolClientSideTranslatorPB}} directly, but it is semantically the 
same as what {{NameNodeProxies}} does.
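
For readers following along, a minimal sketch of the approach (not the attached
patch; it assumes the HDFS-12943 branch where {{HAServiceState}} has an OBSERVER
value):

{code:java}
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.ha.HAServiceStatus;
import org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB;

// Sketch only: probe a NameNode's HA state via HAServiceProtocol#getServiceStatus.
public class ObserverProbeSketch {

  static boolean isObserver(Configuration conf, InetSocketAddress rpcAddr)
      throws Exception {
    try (HAServiceProtocolClientSideTranslatorPB proxy =
        new HAServiceProtocolClientSideTranslatorPB(rpcAddr, conf)) {
      HAServiceStatus status = proxy.getServiceStatus();
      return status.getState() == HAServiceState.OBSERVER;
    }
  }
}
{code}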

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-09-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: (was: HDDS-351.06.patch)

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-09-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: HDDS-351.06.patch

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-05 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604040#comment-16604040
 ] 

Ajay Kumar commented on HDDS-351:
-

[~anu] The new test case in {{TestStorageContainerManager}} already tests the 
change in container threshold implicitly. I updated the test case in patch v6 to 
check for the quantitative change in the threshold after every DN start.

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13836) RBF: To handle the exception when the mounttable znode have null value.

2018-09-05 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604073#comment-16604073
 ] 

yanghuafeng commented on HDFS-13836:


Thanks for your tests. If there are other problems, please let me know. Thanks 
again. [~elgoiri]

> RBF: To handle the exception when the mounttable znode have null value.
> ---
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0, 3.2.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch, HDFS-13836.004.patch, HDFS-13836.005.patch
>
>
> While we are adding a mount table entry, the router server is terminated. 
> Error messages like the following show up in the log:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but had not set its 
> data before being terminated. The method zkManager.getStringData(path, stat) in 
> StateStoreZooKeeperImpl throws an NPE if the path holds a null value, which 
> makes adding the same mount table entry again and deleting the existing znode 
> both fail.
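
A small sketch of the defensive pattern being proposed (hypothetical names, not
the attached patch): treat a znode whose payload was never written as "no record"
instead of dereferencing the null string.

{code:java}
import java.util.Optional;

// Sketch only: ZKDataReader is a stand-in for zkManager in StateStoreZooKeeperImpl.
public class NullZnodeGuardSketch {

  interface ZKDataReader {
    String getStringData(String path);   // may return null for an empty znode
  }

  static Optional<String> safeRead(ZKDataReader reader, String path) {
    String raw = reader.getStringData(path);
    if (raw == null || raw.isEmpty()) {
      // The znode was created but its data was never set; skip the record
      // (and let the caller overwrite or delete it) instead of throwing an NPE.
      return Optional.empty();
    }
    return Optional.of(raw);
  }
}
{code}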



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread chencan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603959#comment-16603959
 ] 

chencan commented on HDDS-389:
--

Hi [~msingh], I have submitted a patch. Please take a look and see if it's what 
you expected. Thanks!

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-389.001.patch
>
>
> Grpc is now the default protocol for datanode-to-client communication. This 
> jira proposes to remove all instances of these classes from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-389:
-
Attachment: HDDS-389.001.patch

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-389.001.patch
>
>
> Grpc is now the default protocol for datanode-to-client communication. This 
> jira proposes to remove all instances of these classes from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-389:
-
Status: Patch Available  (was: Open)

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-389.001.patch
>
>
> Grpc is now the default protocol for datanode-to-client communication. This 
> jira proposes to remove all instances of these classes from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-268) Add SCM close container watcher

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603983#comment-16603983
 ] 

Hudson commented on HDDS-268:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14878 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14878/])
HDDS-268. Add SCM close container watcher. Contributed by Ajay Kumar. (xyao: 
rev 85c3fe341a77bc1a74fdc7af64e18e4557fa8e96)
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerWatcher.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/command/CommandStatusReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/TestCloseContainerWatcher.java


> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch, HDDS-268.04.patch, HDDS-268.05.patch
>
>
> Add a event watcher for CLOSE_CONTAINER_STATUS events.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-09-05 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604057#comment-16604057
 ] 

LiXin Ge commented on HDDS-336:
---

[~anu] Much appreciated! I don't know how to use the tag you mentioned, as I 
don't see any difference when I browse this JIRA system now. Could you please 
give me a hint?

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> It would be a very big help during testing of the current scm behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603953#comment-16603953
 ] 

Hadoop QA commented on HDFS-7033:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
60m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}241m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-7033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938396/HDFS-7033.00.patch |
| Optional Tests |  dupname  asflicense  mvnsite  compile  javac  javadoc  
mvninstall  unit  shadedclient  xml  |
| uname | Linux 27767b4dfebb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6883fe8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 

[jira] [Commented] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604091#comment-16604091
 ] 

Hadoop QA commented on HDDS-389:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
50s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-389 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938416/HDDS-389.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (HDDS-333) Create an Ozone Logo

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-333:
--
Attachment: HDDS-333.001.patch

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-333.001.patch, HDDS-333.002.patch, Logo Final.zip, 
> Logo-Ozone-Transparent-Bg.png, Ozone-Logo-Options.png, logo-vote-results.png
>
>
> As part of developing Ozone Website and Documentation, It would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-333) Create an Ozone Logo

2018-09-05 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-333:
--
Attachment: HDDS-333.002.patch

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-333.001.patch, HDDS-333.002.patch, Logo Final.zip, 
> Logo-Ozone-Transparent-Bg.png, Ozone-Logo-Options.png, logo-vote-results.png
>
>
> As part of developing Ozone Website and Documentation, It would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-09-05 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604101#comment-16604101
 ] 

yanghuafeng commented on HDFS-13852:


Thanks for your review. If there are any other problems, please let me know. 
Thanks again. [~elgoiri]

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852.001.patch, HDFS-13852.002.patch, 
> HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set the dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire settings to avoid timeouts. But 
> when we start the router, FederationMetrics also invokes the same method to 
> get node usage. If a timeout error happens there, we cannot adjust the 
> time-out parameter, and the time-out used in FederationMetrics and 
> NamenodeBeanMetrics should be the same.
>  
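
For reference, the two settings named above can be tuned like this (an
illustrative snippet only; the key names come from the issue text, the values
are arbitrary examples):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative values only.
public class RouterDnReportConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.federation.router.dn-report.time-out", "30s");    // report RPC timeout
    conf.set("dfs.federation.router.dn-report.cache-expire", "5m"); // cached report TTL
    System.out.println(conf.get("dfs.federation.router.dn-report.time-out"));
  }
}
{code}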



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-190) Improve shell error message for unrecognized option

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604111#comment-16604111
 ] 

Elek, Marton edited comment on HDDS-190 at 9/5/18 8:28 AM:
---

My proposal is to use picocli. HDDS-379 introduced a new simple GenericCli 
interface which can support more advanced subcommand parsing. HDDS-398 also 
adopts this approach. 

I would implement it as soon as possible, as (IMHO) it can help a lot for the 
first adopters to learn the usage of the system.

Let me know if you are planning to implement it, or we can move back to the 
unassigned pool. Would be happy to help if you need more information...


was (Author: elek):
My proposal is to use picocli. HDDS-379 introduced a new simple GenericCli 
interface which can support more advanced subcommand parsing. HDDS-398 also 
adopts this approach. 

I would implement it as soon as possible, as (IMHO) it help a lot for the first 
adopters to learn the usage of the system.

Let me know if you are planning to implement it, or we can move back to the 
unassigned pool. Would be happy to help if you need more information...

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-190) Improve shell error message for unrecognized option

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604111#comment-16604111
 ] 

Elek, Marton commented on HDDS-190:
---

My proposal is to use picocli. HDDS-379 introduced a new simple GenericCli 
interface which can support more advanced subcommand parsing. HDDS-398 also 
adopts this approach. 

I would implement it as soon as possible, as (IMHO) it help a lot for the first 
adopters to learn the usage of the system.

Let me know if you are planning to implement it, or we can move back to the 
unassigned pool. Would be happy to help if you need more information...
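
A minimal picocli sketch of the proposed behaviour (illustrative only, not the
GenericCli code; it assumes a picocli 4.x level where {{CommandLine#execute}} is
available): an unknown option yields a readable message plus the usage text and
a non-zero exit code.

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Sketch only: "ozone oz" is just the display name used in the usage text.
@Command(name = "ozone oz", mixinStandardHelpOptions = true)
public class OzShellSketch implements Runnable {

  @Option(names = "-volume", description = "volume to operate on")
  private String volume;

  @Override
  public void run() {
    // Real subcommands would be registered here.
  }

  public static void main(String[] args) {
    // For an argument like "-badOption" picocli rejects the unknown option
    // with a readable message followed by the usage help, and execute()
    // returns a non-zero exit code.
    System.exit(new CommandLine(new OzShellSketch()).execute(args));
  }
}
{code}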

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604137#comment-16604137
 ] 

Elek, Marton commented on HDDS-400:
---

Yes, that's another option: check the replication state only in case of node 
failures. This seemed safer to me (it will work even with a datanode failure or 
problems in the retry logic), and the only downside is the performance 
implication, but I don't think it has a big overhead. It also handles the case 
when a new node that has some extra replicas is introduced.

I am not sure which one is the best option; this is just a good-enough 
implementation for me.

Maybe we can modify Mapping/ContainerStateManager to check the replication 
numbers on any replication-information update, but that would be part of 
a bigger refactor of the container report handling.

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report and handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all the reported containers have 
> enough replicas. 
> We can check the performance implication with genesis, but as a first 
> implementation I think it could be good enough. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-333) Create an Ozone Logo

2018-09-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604140#comment-16604140
 ] 

Elek, Marton commented on HDDS-333:
---

Thanks a lot [~priyanka.nagwekar] for the great logo(s) and [~arpitagarwal] for 
managing the vote. 

As there is no more work left here, I will commit the contents of Logo Final.zip 
to the ozone docs project and to the site repository. 

> Create an Ozone Logo
> 
>
> Key: HDDS-333
> URL: https://issues.apache.org/jira/browse/HDDS-333
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Priyanka Nagwekar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Logo Final.zip, Logo-Ozone-Transparent-Bg.png, 
> Ozone-Logo-Options.png, logo-vote-results.png
>
>
> As part of developing Ozone Website and Documentation, It would be nice to 
> have an Ozone Logo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13896:

Description: 
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : 
h1.  
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|
 
 

 

 

 

  was:
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Mount Table Entries:
 Source Destinations Owner Group Mode Quota/Usage
 /apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
-/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
15:02:54|2018/09/05 15:02:25|

 

 

 

 

 


> RBF Web UI not displaying clearly which target path is pointing to which name 
> service in mount table 
> -
>
> Key: HDFS-13896
> URL: https://issues.apache.org/jira/browse/HDFS-13896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
>
> Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
>  18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
>  18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
> 18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Successfully added mount point /apps
> Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
>  18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
> rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> WebUI : 
> h1.  
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
> |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
> 16:50:54|2018/09/05 15:02:25|
>  
>  
>  
>  
>  



--

[jira] [Updated] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13896:

Description: 
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
 18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Mount Table Entries:
 Source Destinations Owner Group Mode Quota/Usage
 /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|

 
  

 

 

 

  was:
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : 
h1.  
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|
 
 

 

 

 


> RBF Web UI not displaying clearly which target path is pointing to which name 
> service in mount table 
> -
>
> Key: HDFS-13896
> URL: https://issues.apache.org/jira/browse/HDFS-13896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
>
> Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
>  18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
>  18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
>  18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
>  18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Mount Table Entries:
>  Source Destinations Owner Group Mode Quota/Usage
>  /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
> rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> WebUI : Mount Table
> ||Global path||Target nameservice||Target 
