[jira] [Commented] (HDFS-7076) Allow users to define custom storage policies

2018-10-09 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644499#comment-16644499
 ] 

Xiang Li commented on HDFS-7076:


Hi all, if you are about to use the patch, please be aware that Mover does not 
work with it, but it is easy to fix.

*Stack trace*
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 16
at 
org.apache.hadoop.hdfs.server.mover.Mover.initStoragePolicies(Mover.java:159)
at org.apache.hadoop.hdfs.server.mover.Mover.init(Mover.java:141)
at org.apache.hadoop.hdfs.server.mover.Mover.run(Mover.java:165)
at org.apache.hadoop.hdfs.server.mover.Mover.run(Mover.java:568)
at org.apache.hadoop.hdfs.server.mover.Mover$Cli.run(Mover.java:696)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hdfs.server.mover.Mover.main(Mover.java:727)
{code}

*Reason*
The size of the blockStoragePolicies array is hardcoded to 16 (1 << 
BlockStoragePolicySuite.ID_BIT_LENGTH), but the ID of a custom storage policy 
is always >= 16, so looking up a custom policy overflows the array.
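
As a minimal, self-contained illustration (not the Mover code itself) of why 
the lookup blows up:
{code:java}
// Illustration only: an array sized 1 << 4 = 16 has valid indices 0..15,
// so indexing it with a custom policy ID of 16 throws immediately.
public class ArraySizeIllustration {
  public static void main(String[] args) {
    Object[] policies = new Object[1 << 4];  // same size as the hardcoded Mover array
    int customPolicyId = 16;                 // custom policy IDs start at 16
    policies[customPolicyId] = new Object(); // ArrayIndexOutOfBoundsException: 16
  }
}
{code}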

*Fix*
{code:java}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
index 5fcd29f..d721f6b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
@@ -114,7 +114,7 @@ private StorageGroup getTarget(String uuid, StorageType storageType) {
 
   private final BlockStoragePolicy[] blockStoragePolicies;
 
-  Mover(NameNodeConnector nnc, Configuration conf, AtomicInteger retryCount) {
+  Mover(NameNodeConnector nnc, Configuration conf, AtomicInteger retryCount) throws IOException {
     final long movedWinWidth = conf.getLong(
         DFSConfigKeys.DFS_MOVER_MOVEDWINWIDTH_KEY,
         DFSConfigKeys.DFS_MOVER_MOVEDWINWIDTH_DEFAULT);
@@ -133,8 +133,17 @@ private StorageGroup getTarget(String uuid, StorageType storageType) {
         maxConcurrentMovesPerNode, conf);
     this.storages = new StorageMap();
     this.targetPaths = nnc.getTargetPaths();
-    this.blockStoragePolicies = new BlockStoragePolicy[1 <<
-        BlockStoragePolicySuite.ID_BIT_LENGTH];
+
+    // Set the size of blockStoragePolicies array according to the current HDFS setup
+    BlockStoragePolicy[] policies = dispatcher.getDistributedFileSystem().getStoragePolicies();
+    int size = BlockStoragePolicySuite.RESERVED_POLICY_NUM;
+    for (BlockStoragePolicy policy : policies) {
+      int id = policy.getId();
+      if (size < id + 1) {
+        size = id + 1;  // set size to the max id + 1
+      }
+    }
+    this.blockStoragePolicies = new BlockStoragePolicy[size];
   }
{code}

It will take some effort to fold the change above (and a UT) into the latest 
patch (008).
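
For the UT, the sizing rule can be exercised in isolation; a rough sketch, 
assuming the loop above is extracted into a hypothetical {{maxPolicyArraySize}} 
helper:
{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TestPolicyArraySizing {
  // Hypothetical extraction of the sizing loop from the Mover constructor,
  // so the rule (max policy id + 1, at least the reserved count) is testable.
  static int maxPolicyArraySize(int reservedPolicyNum, byte[] policyIds) {
    int size = reservedPolicyNum;
    for (byte id : policyIds) {
      if (size < id + 1) {
        size = id + 1;
      }
    }
    return size;
  }

  @Test
  public void sizeCoversCustomPolicyIds() {
    // A custom policy with id 18 needs at least 19 slots.
    assertEquals(19, maxPolicyArraySize(16, new byte[] {7, 12, 18}));
    // Built-in policies only: the reserved size is kept.
    assertEquals(16, maxPolicyArraySize(16, new byte[] {7, 12}));
  }
}
{code}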

> Allow users to define custom storage policies
> -
>
> Key: HDFS-7076
> URL: https://issues.apache.org/jira/browse/HDFS-7076
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7076.000.patch, HDFS-7076.001.patch, 
> HDFS-7076.002.patch, HDFS-7076.003.patch, HDFS-7076.004.patch, 
> HDFS-7076.005.patch, HDFS-7076.005.patch, HDFS-7076.007.patch, 
> HDFS-7076.008.patch, editsStored
>
>
> Currently block storage policies are hard coded.  This JIRA is to persist the 
> policies in FSImage and Edit Log in order to support adding new policies or 
> modifying existing policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644475#comment-16644475
 ] 

Hadoop QA commented on HDDS-517:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-ozone/s3gateway: The patch generated 3 
new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-517 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943170/HDDS-517.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75945628bbee 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / edce866 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1327/artifact/out/diff-checkstyle-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1327/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1327/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-439:
---
Attachment: HDDS-439.001.patch
Status: Patch Available  (was: Open)

[~arpitagarwal] Thank you for reporting the issue. Attached patch 001 for your 
review.
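
For context, a minimal sketch of the intended defaulting (illustrative, not 
the patch itself; the helper name is hypothetical):
{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class OwnerDefaulting {
  // Fall back to the current user when the --user option is missing.
  static String resolveOwner(String userOption) throws IOException {
    if (userOption != null && !userOption.isEmpty()) {
      return userOption;
    }
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}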

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>  Labels: alpha2, newbie
> Attachments: HDDS-439.001.patch
>
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644458#comment-16644458
 ] 

Hadoop QA commented on HDDS-522:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943167/HDDS-522.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c1f7a2fc1057 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 

[jira] [Assigned] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-439:
--

Assignee: Dinesh Chitlangia

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>  Labels: alpha2, newbie
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}






[jira] [Commented] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1662#comment-1662
 ] 

Dinesh Chitlangia commented on HDDS-604:


[~anu] thanks for the commit.

> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description refers to the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Patch Available  (was: Open)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644430#comment-16644430
 ] 

LiXin Ge commented on HDDS-517:
---

Thanks [~bharatviswa] for reviewing this. The fixed patch 002 is uploaded.

> 1. This patch needs to be rebased on top of trunk.

Done.

> 2. The setting of x-amz-request-id is not required

Done.

> 3. Why do we need this check .header("Content-Length", body == null ? 0 : 
> length), and also why do we need OutputStream for HEADObject?

Actually it comes from [~elek]'s advice: {{3. I think Content-Length should be 
0 in case of missing body.}} I'm OK with either keeping or removing the 
OutputStream; patch 002 removes it, but it would be better to hear from [~elek] 
in case there are other considerations.

> 4. Few observations is content type and content length returned for us is zero

Done.

> 5. And also I think no need to add x-amz-version-id by default

Done.
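
For reference, a minimal JAX-RS sketch of the header handling under discussion; 
the class name and the metadata lookup are hypothetical, not the actual 
s3gateway code:
{code:java}
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}/{path:.+}")
public class HeadObjectSketch {

  @HEAD
  public Response head(@PathParam("bucket") String bucket,
                       @PathParam("path") String keyPath) {
    long keyLength = lookupKeyLength(bucket, keyPath); // stub for the metadata lookup
    return Response.ok()
        .header("Content-Length", keyLength) // 0 when there is no body
        .build();
  }

  private long lookupKeyLength(String bucket, String keyPath) {
    return 0; // placeholder; the real endpoint reads the key's metadata
  }
}
{code}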

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Status: Open  (was: Patch Available)

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-517:
--
Attachment: HDDS-517.002.patch

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch, 
> HDDS-517.002.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.






[jira] [Updated] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-522:

Attachment: HDDS-522.01.patch

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch
>
>
> The create bucket call creates a bucket using createS3Bucket, which has been 
> added as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> A stub implementation was created as part of HDDS-444. We need to finalize it, 
> check the missing headers, and add acceptance tests.






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644424#comment-16644424
 ] 

Bharat Viswanadham commented on HDDS-522:
-

Fixed the Jenkins-reported issues in patch v01.

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch, HDDS-522.01.patch
>
>
> The create bucket call creates a bucket using createS3Bucket, which has been 
> added as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> A stub implementation was created as part of HDDS-444. We need to finalize it, 
> check the missing headers, and add acceptance tests.






[jira] [Updated] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDFS-13942:
-
Attachment: HDFS-13942.002.patch
Status: Patch Available  (was: Open)

[~ajisakaa] Attached patch 002 for review; it addresses the checkstyle issue 
generated by the previous patch.

The reported javadoc error occurs because 
MoreExecutors#newDirectExecutorService() is only available in newer versions 
of the Guava core libraries, so depending on the version in use it shows up as 
a warning.

The test failures are unrelated to the patch.

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-13942.001.patch, HDFS-13942.002.patch
>
>
> There are 212 errors in hadoop-hdfs module.






[jira] [Updated] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDFS-13942:
-
Status: Open  (was: Patch Available)

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-13942.001.patch
>
>
> There are 212 errors in hadoop-hdfs module.






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644379#comment-16644379
 ] 

Hadoop QA commented on HDDS-522:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-ozone_s3gateway generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943156/HDDS-522.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75fc6d025424 

[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13926:
-
Fix Version/s: 3.1.2
   3.0.4

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926-branch-3.0-001.patch, HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that the per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.
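
As a self-contained illustration of the failure mode described above (assumed 
mechanics, not the HDFS code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalStatsIllustration {
  // A ThreadLocal counter bumped on a worker thread is invisible to the
  // caller thread that later reads its own copy.
  private static final ThreadLocal<Long> BYTES_READ =
      ThreadLocal.withInitial(() -> 0L);

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    // The "striped read" happens on a pool thread, so the increment lands
    // on that thread's copy of BYTES_READ.
    pool.submit(() -> BYTES_READ.set(BYTES_READ.get() + 1024)).get();
    pool.shutdown();
    System.out.println("caller sees " + BYTES_READ.get()); // prints: caller sees 0
  }
}
{code}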






[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13926:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

The test failures look unrelated. Committed to branch-3.0 and branch-3.1.

Thanks again, Hrishikesh!

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926-branch-3.0-001.patch, HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that the per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.






[jira] [Commented] (HDDS-587) Add new classes for pipeline management

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644358#comment-16644358
 ] 

Hadoop QA commented on HDDS-587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 24s{color} | {color:orange} root: The patch generated 5 new + 0 unchanged - 
0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 59s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 50s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.server.TestSCMChillModeManager |
|   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644350#comment-16644350
 ] 

Hadoop QA commented on HDDS-524:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
5s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
49s{color} | {color:red} Docker failed to build yetus/hadoop:date2018-10-10. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943157/HDDS-524-docker-hadoop-runner.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1324/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
> Environment: {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of the log4j.properties file is root instead of hadoop. For this 
> reason we can't use the images for acceptance tests, as the launcher script 
> can't overwrite the log4j properties based on the environment variables.
> The same is true with 
> {code}
> docker run -it apache/hadoop:3 ls -lah  /opt/hadoop/etc/hadoop
> {code}
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-runner.001.patch
>
>







[jira] [Commented] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644348#comment-16644348
 ] 

Hadoop QA commented on HDFS-13942:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 1313 unchanged - 16 fixed = 1316 total (was 1329) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 
unchanged - 1 fixed = 1 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943139/HDFS-13942.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b8ec01aceb15 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6a39739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| 

[jira] [Updated] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-524:
---
Attachment: HDDS-524-docker-hadoop-runner.001.patch
Status: Patch Available  (was: Open)

[~elek] - Thank you for reporting this. I have attached patch 001 where I am 
simply trying to do a recursive chown. Let me know if there are better ways to 
achieve this. Thanks!

> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
> Environment: {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of the log4j.properties file is root instead of hadoop. For this 
> reason we can't use the images for acceptance tests, as the launcher script 
> can't overwrite the log4j properties based on the environment variables.
> The same is true with 
> {code}
> docker run -it apache/hadoop:3 ls -lah  /opt/hadoop/etc/hadoop
> {code}
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-runner.001.patch
>
>







[jira] [Updated] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-522:

Status: Patch Available  (was: In Progress)

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch
>
>
> The create bucket call creates a bucket using createS3Bucket, which has been 
> added as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> A stub implementation was created as part of HDDS-444. We need to finalize it, 
> check the missing headers, and add acceptance tests.






[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> Update  HeadBucket, DeleteBucket to not to have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Update these API requests not to have volume in their path param.






[jira] [Created] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-613:
---

 Summary: Update  HeadBucket, DeleteBucket to not to have volume in 
path
 Key: HDDS-613
 URL: https://issues.apache.org/jira/browse/HDDS-613
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Update these API requests not to have volume in their path param.






[jira] [Commented] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644343#comment-16644343
 ] 

Hudson commented on HDDS-443:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15163/])
HDDS-443. Create reusable ProgressBar utility for freon tests. (aengineer: rev 
f068296f8a88fc2a4c7b1680bc190c5fa7fc2469)
* (add) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
* (edit) hadoop-ozone/tools/pom.xml
* (add) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ProgressBar.java


> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Zsolt Horvath
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-443.001.patch
>
>
> Since HDDS-398 we can support multiple types of freon tests, but to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should 
> be moved out of that class, and all the thread start/stop logic should be 
> moved into the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}
> {{bar.stop(); // thread should be stopped.}}
>  
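
A rough sketch of the API shape the description asks for (assumed details; the 
committed ProgressBar.java may differ):
{code:java}
import java.io.PrintStream;
import java.util.function.Supplier;

public class ProgressBarSketch {
  private final PrintStream out;
  private final Supplier<Long> progress; // supplies the current progress value
  private final long maxValue;
  private volatile boolean running;
  private Thread printer;

  public ProgressBarSketch(PrintStream out, Supplier<Long> progress, long maxValue) {
    this.out = out;
    this.progress = progress;
    this.maxValue = maxValue;
  }

  public void start() { // the printing thread is started here
    running = true;
    printer = new Thread(() -> {
      while (running) {
        out.printf("%d/%d%n", progress.get(), maxValue);
        try {
          Thread.sleep(1000);
        } catch (InterruptedException e) {
          return;
        }
      }
    });
    printer.setDaemon(true);
    printer.start();
  }

  public void stop() { // the printing thread is stopped here
    running = false;
    if (printer != null) {
      printer.interrupt();
    }
  }
}
{code}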






[jira] [Commented] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644341#comment-16644341
 ] 

Bharat Viswanadham commented on HDDS-522:
-

Attached the patch to create a bucket without the need for volume creation.

Used the new createS3Bucket API from the HDDS-577 jira.
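
For illustration, a hedged sketch of such an endpoint; the {{S3BucketCreator}} 
interface below is a stand-in with a hypothetical signature, not the actual 
createS3Bucket API:
{code:java}
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}")
public class PutBucketSketch {

  // Stand-in for the createS3Bucket call from HDDS-577 (hypothetical signature).
  interface S3BucketCreator {
    void createS3Bucket(String userName, String bucketName) throws Exception;
  }

  private S3BucketCreator client;

  @PUT
  public Response put(@PathParam("bucket") String bucketName) throws Exception {
    // No volume in the path: the bucket is created for the user directly.
    client.createS3Bucket("defaultUser", bucketName);
    return Response.ok().build();
  }
}
{code}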

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch
>
>
> The create bucket call creates a bucket using createS3Bucket, which has been 
> added as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> A stub implementation was created as part of HDDS-444. We need to finalize it, 
> check the missing headers, and add acceptance tests.






[jira] [Updated] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-522:

Attachment: HDDS-522.00.patch

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-522.00.patch
>
>
> The create bucket operation creates a bucket using createS3Bucket, which was 
> added as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> A stub implementation was created as part of HDDS-444. We need to finalize it, 
> check the missing headers, and add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13981) Review of AvailableSpaceResolver.java

2018-10-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644334#comment-16644334
 ] 

Íñigo Goiri commented on HDFS-13981:


I think [~linyiqun] had actually targeted the tie logic in HDFS-13291.
Let him chime in.

> Review of AvailableSpaceResolver.java
> -
>
> Key: HDFS-13981
> URL: https://issues.apache.org/jira/browse/HDFS-13981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13981.1.patch
>
>
> * No behavior changes, just optimizing and paring down the code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2018-10-09 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644331#comment-16644331
 ] 

Weiwei Yang commented on HDFS-12459:


Thanks for revisiting this [~jojochuang]. I too think the test failures are not 
related; I applied the patch and tested locally, and the tests are working fine. 
Can we get this committed to trunk, [~jojochuang]?

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch, 
> HDFS-12459.006.patch, HDFS-12459.006.patch, HDFS-12459.007.patch, 
> HDFS-12459.008.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.
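A minimal sketch of that server-side path, assuming it runs where a
{{NamenodeProtocols}} reference is already available (the helper class name is
made up for illustration):
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;

final class BlockLocationHelper {

  private BlockLocationHelper() {
  }

  /** In-process lookup: no DFSClient, hence no extra RPC hop. */
  static BlockLocation[] getBlockLocations(NamenodeProtocols np, String path,
      long offset, long length) throws IOException {
    LocatedBlocks blocks = np.getBlockLocations(path, offset, length);
    // DFSUtilClient already knows how to flatten LocatedBlocks.
    return DFSUtilClient.locatedBlocks2Locations(blocks);
  }
}
{code}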



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644327#comment-16644327
 ] 

Hadoop QA commented on HDDS-604:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} The patch generated 0 new + 106 unchanged - 6 fixed 
= 106 total (was 112) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-604 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943147/HDDS-604.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux dd595c57b724 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4de2dc2 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1323/testReport/ |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common hadoop-ozone/docs U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1323/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone 

[jira] [Commented] (HDFS-13981) Review of AvailableSpaceResolver.java

2018-10-09 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644328#comment-16644328
 ] 

BELUGA BEHR commented on HDFS-13981:


Working on unit test {{TestRouterAllResolver}}.  The test is a bit wonky, and I'm 
trying to see if I can improve it.  In the test, each subcluster has the exact 
same amount of available space.  Previously, the implementation was such that 
if there was a tie, a subcluster would be picked randomly, so the test passed.  
In my implementation, the first subcluster in the list of all subclusters is 
picked, because they're all the same and it keeps with the design of "picking 
the node with the most available space."  Well, when they're all the same, the 
first node is as good as any other.  Except for the very initial install of the 
HDFS service, it is so unlikely that any two subclusters have the exact same 
amount of data (down to the byte) that the test hardly makes sense.  I'm trying 
to re-work the test to include some varied sizes.
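For reference, a sketch of the selection rule described above (the types are
hypothetical, not the patch itself):
{code:java}
import java.util.List;

final class AvailableSpaceSelection {

  static final class Subcluster {
    final String id;
    final long availableBytes;

    Subcluster(String id, long availableBytes) {
      this.id = id;
      this.availableBytes = availableBytes;
    }
  }

  /** Pick the subcluster with the most available space. */
  static Subcluster pick(List<Subcluster> subclusters) {
    Subcluster best = null;
    for (Subcluster s : subclusters) {
      // Strictly-greater comparison means an exact tie keeps the earlier
      // entry, so equal-sized subclusters always resolve to the first one
      // in the list, which is exactly the case the old test exercised.
      if (best == null || s.availableBytes > best.availableBytes) {
        best = s;
      }
    }
    return best;
  }
}
{code}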

> Review of AvailableSpaceResolver.java
> -
>
> Key: HDFS-13981
> URL: https://issues.apache.org/jira/browse/HDFS-13981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13981.1.patch
>
>
> * No behavior changes, just optimizing and paring down the code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644325#comment-16644325
 ] 

Hudson commented on HDDS-604:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15162/])
HDDS-604. Correct Ozone getOzoneConf description. Contributed by Dinesh 
(aengineer: rev 794c0451cffbe147234a2417943709c121d06620)
* (edit) hadoop-ozone/docs/content/CommandShell.md
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (edit) hadoop-ozone/common/src/main/bin/start-ozone.sh
* (edit) hadoop-ozone/common/src/main/bin/stop-ozone.sh


> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-443:
--
   Resolution: Fixed
Fix Version/s: 0.3.0
   Status: Resolved  (was: Patch Available)

[~nandakumar131] [~elek] Thanks for the comments and help. [~horzsolt2006] 
Thanks for the contribution. Welcome to Ozone. Looking forward to more 
contributions in the future.

> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Zsolt Horvath
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-443.001.patch
>
>
> Since HDDS-398 we can support multiple types of freon tests. But to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should be 
> moved out of that class, and all the thread start/stop logic should be moved 
> to the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}{{bar.stop(); // thread 
> should be stopped.}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644312#comment-16644312
 ] 

Hadoop QA commented on HDFS-13926:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 2s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 77 
unchanged - 2 fixed = 77 total (was 79) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:1776208 |
| JIRA Issue | HDFS-13926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943128/HDFS-13926-branch-3.0-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 317abe6e89fb 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644309#comment-16644309
 ] 

Hudson commented on HDDS-568:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15161/])
HDDS-568. ozone sh volume info, update, delete operations fail when (aengineer: 
rev 4de2dc2699fc371b2de83ba55ecbcecef1f0423b)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/InfoVolumeHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Handler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/DeleteVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/UpdateVolumeHandler.java


> ozone sh volume info, update, delete operations fail when volume name is not 
> prefixed by /
> --
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> The Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws a VOLUME_NOT_FOUND error even though the volume 
> is there.
> On trying to create it again, it says error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}
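The failing commands above pass a bare volume name. A sketch of the kind of
normalization such a fix implies; the class, the method name, and the chosen
canonical form are assumptions, not the committed change:
{code:java}
final class VolumeNameUtil {

  private VolumeNameUtil() {
  }

  /** Accept both "fstestvol" and "/fstestvol" for volume operations. */
  static String normalizeVolumeName(String raw) {
    String name = raw.trim();
    // Strip a leading slash so both spellings resolve to the same volume.
    if (name.startsWith("/")) {
      name = name.substring(1);
    }
    if (name.isEmpty()) {
      throw new IllegalArgumentException("Volume name is required");
    }
    return name;
  }
}
{code}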



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644306#comment-16644306
 ] 

Anu Engineer commented on HDDS-443:
---

+1, there are some checkstyle issues; I will fix them while committing. We 
probably need some follow-up patches to use this in the freon code base. 

 

> Create reusable ProgressBar utility for freon tests
> ---
>
> Key: HDDS-443
> URL: https://issues.apache.org/jira/browse/HDDS-443
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Zsolt Horvath
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-443.001.patch
>
>
> Since HDDS-398 we can support multiple types of freon tests. But to add more 
> tests we need common utilities for generic tasks.
> One of the most important is to provide a reusable ProgressBar utility.
> Currently the ProgressBar class is part of the RandomKeyGenerator. It should be 
> moved out of that class, and all the thread start/stop logic should be moved 
> to the ProgressBar.
> {{ProgressBar bar = new ProgressBar(System.out, () ->  ... , 200);}}
> {{bar.start(); // thread should be started here}}{{bar.stop(); // thread 
> should be stopped.}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-604:
--
   Resolution: Fixed
Fix Version/s: 0.3.0
   Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the comments. [~hanishakoneru] Thanks for filing 
this issue. [~dineshchitlangia] Thanks for the contribution. I have committed 
this to the trunk.

> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644303#comment-16644303
 ] 

Anu Engineer commented on HDDS-604:
---

There is hardly any unit test for this feature. I have run an Ozone cluster and 
hand-tested this feature.
{code:java}

hadoop@34aa2ebc45f5:~$ ozone getconf -confKey ozone.enabled
true
hadoop@34aa2ebc45f5:~$ ozone getconf -ozonemanagers
ozoneManager{code}

> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644294#comment-16644294
 ] 

Hudson commented on HDDS-559:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15160 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15160/])
HDDS-559. fs.default.name is deprecated. Contributed by  Dinesh (aengineer: rev 
6a06bc309d72c766694eb6296d5f3fb5c3c597c5)
* (edit) hadoop-ozone/docs/content/OzoneFS.md


> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644295#comment-16644295
 ] 

Hudson commented on HDFS-11396:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15160 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15160/])
HDFS-11396. TestNameNodeMetadataConsistency#testGenerationStampInFuture 
(inigoiri: rev 605622c87bc109f60ee1674be37a526e44723b67)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMetadataConsistency.java


> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: John Zhuge
>Assignee: Ayush Saxena
>Priority: Minor
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644296#comment-16644296
 ] 

Dinesh Chitlangia commented on HDDS-478:


[~anu] - thanks for the commit

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon, but that log file doesn't contain these 
> messages. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644297#comment-16644297
 ] 

Dinesh Chitlangia commented on HDDS-559:


[~anu] - thanks for the commit.

> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644293#comment-16644293
 ] 

Dinesh Chitlangia commented on HDDS-568:


[~anu] thanks for the commit

> ozone sh volume info, update, delete operations fail when volume name is not 
> prefixed by /
> --
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> The Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws a VOLUME_NOT_FOUND error even though the volume 
> is there.
> On trying to create it again, it says error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-568:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ssulav] Thanks for filing the issue. [~arpitagarwal] Thanks for the comments. 
[~ljain] Thanks for the reviews. [~dineshchitlangia] Thanks for the 
contribution, I have committed this patch to trunk.

> ozone sh volume info, update, delete operations fail when volume name is not 
> prefixed by /
> --
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> The Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws a VOLUME_NOT_FOUND error even though the volume 
> is there.
> On trying to create it again, it says error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-604:
---
Attachment: HDDS-604.001.patch
Status: Patch Available  (was: Open)

[~hanishakoneru] - Attached patch 001 for your review. Verified this using 
docker and running ozone from sources.

> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-604.001.patch
>
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644278#comment-16644278
 ] 

Hudson commented on HDDS-478:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15159 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15159/])
HDDS-478. Log files related to each daemon doesn't have proper startup 
(aengineer: rev c1fe657a106aaae3bdf81fa4add70962aaee165b)
* (edit) hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties


> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon, but that log file doesn't contain these 
> messages. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Created] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-612:
-

 Summary: Even after setting hdds.scm.chillmode.enabled to false, 
SCM allocateblock fails with ChillModePrecheck exception
 Key: HDDS-612
 URL: https://issues.apache.org/jira/browse/HDDS-612
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 
on 9863, call Call#70 Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.27.56.9:53442
org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
for allocateBlock
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}
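For illustration, a sketch of where the configuration flag would have to
short-circuit the precheck; the class shape, method signature, and default
value are assumptions based only on the stack trace above:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class ChillModePrecheck {

  private final boolean chillModeEnabled;
  private volatile boolean inChillMode = true;

  public ChillModePrecheck(Configuration conf) {
    // Assumed default; the report sets this key to false in ozone-site.xml.
    this.chillModeEnabled = conf.getBoolean("hdds.scm.chillmode.enabled", true);
  }

  public boolean check(String op) throws IOException {
    // If chill mode is disabled by configuration, the precheck must pass
    // unconditionally; the reported failure suggests this branch is not
    // being taken.
    if (!chillModeEnabled) {
      return true;
    }
    if (inChillMode) {
      throw new IOException("ChillModePrecheck failed for " + op);
    }
    return true;
  }
}
{code}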



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-11396:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.2.0
   Status: Resolved  (was: Patch Available)

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: John Zhuge
>Assignee: Ayush Saxena
>Priority: Minor
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-587) Add new classes for pipeline management

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644271#comment-16644271
 ] 

Anu Engineer commented on HDDS-587:
---

A lot of the test failures seem unrelated to this patch. I have asked for one 
more Jenkins build, just to be sure.

[https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDDS-Build/detail/PreCommit-HDDS-Build/1322/pipeline]

 

> Add new classes for pipeline management
> ---
>
> Key: HDDS-587
> URL: https://issues.apache.org/jira/browse/HDDS-587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-587.001.patch, HDDS-587.002.patch
>
>
> This Jira adds new classes and corresponding unit tests for pipeline 
> management in SCM. The old classes will be removed in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7717) Erasure Coding: distribute replication to EC conversion work to DataNode

2018-10-09 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644268#comment-16644268
 ] 

Xiao Chen commented on HDFS-7717:
-

Thanks [~knanasi] for the new summary.

I think this kind of already exists as distcp plus manual removal. But for 
true automation, I agree with you that SPS seems to be the best place. It already 
has the directory traversal and much of the conversion path, so the EC 
converter would be a new policy with more complicated error handling. I 
propose we investigate having this as a follow-on item for 
HDFS-10285. [~umamaheswararao], [~Sammi], [~rakeshr], [~andrew.wang], what do 
you think?

> Erasure Coding: distribute replication to EC conversion work to DataNode
> 
>
> Key: HDFS-7717
> URL: https://issues.apache.org/jira/browse/HDFS-7717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Sammi Chen
>Priority: Major
>
> In the *striping* erasure coding case, we need some approach to distribute the 
> work of converting between replication and striped erasure coding to the 
> DataNodes. It can be the NameNode, a tool utilizing MR just like the current 
> distcp, or another tool like the balancer/mover. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644267#comment-16644267
 ] 

Íñigo Goiri commented on HDFS-11396:


Thanks [~ayushtkn] for  [^HDFS-11396-01.patch].
It seems like it passed:
https://builds.apache.org/job/PreCommit-HDFS-Build/25243/testReport/org.apache.hadoop.hdfs.server.namenode/TestNameNodeMetadataConsistency/
+1
Committing.

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: John Zhuge
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644266#comment-16644266
 ] 

Anu Engineer edited comment on HDDS-559 at 10/9/18 11:59 PM:
-

[~arpitagarwal] Thanks for filing this issue. [~bharatviswa] / 
[~rshbhmptl] Thanks for the comments. I have tested with Hugo and committed 
this to the trunk.


was (Author: anu):
[~arpitagarwal] Thanks for filing this issue. [~bharatviswa] / 
[~rshbhmptl]Thanks for the comments. I have tested with with Hugo and committed 
this to the trunk.

> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-559:
--
   Resolution: Fixed
Fix Version/s: 0.3.0
   Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for filing this issue. [~bharatviswa] / 
[~rshbhmptl] Thanks for the comments. I have tested with Hugo and committed 
this to the trunk.

> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-478:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~hanishakoneru] Thanks for the comments. [~dineshchitlangia] Thanks for fixing 
the issue. [~nilotpalnandi] Thanks for filing this issue. I have committed this 
patch to trunk.

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon, but that log file doesn't contain these 
> messages. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Created] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-611:
-

 Summary: SCM UI is not reflecting the changes done in 
ozone-site.xml
 Key: HDDS-611
 URL: https://issues.apache.org/jira/browse/HDDS-611
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png

ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. This 
is reflected properly as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
-confKey hdds.scm.chillmode.enabled
2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
false
{code}
But the SCM UI does not reflect this change and it still shows the old value of 
true. Please see attached screenshot. !Screen Shot 2018-10-09 at 4.49.58 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644262#comment-16644262
 ] 

Dinesh Chitlangia commented on HDDS-559:


[~arpitagarwal] - Thank you for reporting this. Attached patch 001 for review.

> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-559) fs.default.name is deprecated

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-559:
---
Attachment: HDDS-559.001.patch
Status: Patch Available  (was: Open)

> fs.default.name is deprecated
> -
>
> Key: HDDS-559
> URL: https://issues.apache.org/jira/browse/HDDS-559
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-559.001.patch
>
>
> {{fs.default.name}} is deprecated. Docs should be updated to use 
> {{fs.defaultFS}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-610) On restart of SCM it fails to register DataNodes

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-610:
-

 Summary: On restart of SCM it fails to register DataNodes
 Key: HDDS-610
 URL: https://issues.apache.org/jira/browse/HDDS-610
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:34:11,105 INFO 
org.apache.hadoop.hdds.scm.server.StorageContainerManager: STARTUP_MSG:
/
STARTUP_MSG: Starting StorageContainerManager
STARTUP_MSG: host = 
ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.3.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Commented] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644257#comment-16644257
 ] 

Dinesh Chitlangia commented on HDDS-604:


[~arpitagarwal] - I presume you don't want camel case.

> Correct Ozone getOzoneConf description 
> ---
>
> Key: HDDS-604
> URL: https://issues.apache.org/jira/browse/HDDS-604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
>
> The {{./ozone getozoneconf}} subcommand description mentions the subcommand 
> as {{getconf}}. We should consistently call it either {{getozoneconf}} or 
> {{getconf}} in both places.
> {code:java}
> $ bin/ozone getozoneconf
> ozone getconf is utility for getting configuration information from the 
> config file.
> ozone getconf
>   [-includeFile]  gets the include file path that defines 
> the datanodes that can join the cluster.
>   [-excludeFile]  gets the exclude file path that defines 
> the datanodes that need to decommissioned.
>   [-ozonemanagers]gets list of Ozone Manager 
> nodes in the cluster
>   [-storagecontainermanagers] gets list of ozone 
> storage container manager nodes in the cluster
>   [-confKey [key]]gets a specific key from the 
> configuration
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-609) Mapreduce example fails with Allocate block failed, error:INTERNAL_ERROR

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-609:
-

 Summary: Mapreduce example fails with Allocate block failed, 
error:INTERNAL_ERROR
 Key: HDDS-609
 URL: https://issues.apache.org/jira/browse/HDDS-609
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job5
18/10/09 23:37:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:37:08 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:37:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0007
18/10/09 23:37:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:37:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:37:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0007
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0007
18/10/09 23:37:10 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0007/
18/10/09 23:37:10 INFO mapreduce.Job: Running job: job_1539125785626_0007
18/10/09 23:37:17 INFO mapreduce.Job: Job job_1539125785626_0007 running in 
uber mode : false
18/10/09 23:37:17 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:37:24 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:37:29 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_00_0, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:475)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:37:35 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_00_1, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 

[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644255#comment-16644255
 ] 

Xiao Chen commented on HDFS-13926:
--

+1 pending jenkins

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926-branch-3.0-001.patch, HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.
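> A minimal standalone sketch of the failure mode (illustrative only, not the 
> HDFS code):
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> // A ThreadLocal counter only sees updates made on the calling thread.
> ThreadLocal<Long> bytesRead = ThreadLocal.withInitial(() -> 0L);
> ExecutorService pool = Executors.newFixedThreadPool(4);
> // A striped read delegated to a worker updates the worker's copy...
> pool.submit(() -> bytesRead.set(bytesRead.get() + 1024)).get();
> // ...so the caller's per-thread stats still read 0 here.
> System.out.println(bytesRead.get()); // prints 0
> {code}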



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13949) Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml

2018-10-09 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644253#comment-16644253
 ] 

Toshihiro Suzuki commented on HDFS-13949:
-

Thank you very much for reviewing [~nandakumar131].

The property is also used in ThrottledAsyncChecker, which is initialized in 
the constructor of DatasetVolumeChecker:
{code}
diskCheckTimeout = conf.getTimeDuration(
    DFSConfigKeys.DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY,
    DFSConfigKeys.DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT,
    TimeUnit.MILLISECONDS);

delegateChecker = new ThrottledAsyncChecker<>(
    timer, minDiskCheckGapMs, diskCheckTimeout,
    Executors.newCachedThreadPool(
        new ThreadFactoryBuilder()
            .setNameFormat("DataNode DiskChecker thread %d")
            .setDaemon(true)
            .build()));
{code}
This timeout is used in ThrottledAsyncChecker#schedule, which is called by 
DatasetVolumeChecker#checkVolume. DatasetVolumeChecker#checkVolume is in turn 
called by DataNode#checkDiskErrorAsync, which is invoked whenever there might 
be a disk failure. So it looks to me like the property is not only used during 
DataNode startup.
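
For reference, a standalone sketch of the time-suffix parsing mentioned in the 
hdfs-default.xml description (the values here are illustrative):
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("dfs.datanode.disk.check.timeout", "10m");  // suffixed value
long timeoutMs = conf.getTimeDuration(
    "dfs.datanode.disk.check.timeout",
    600000L, TimeUnit.MILLISECONDS);                 // default, in ms
// timeoutMs == 600000; a bare "600000" (no suffix) parses the same way
{code}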

> Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml
> --
>
> Key: HDFS-13949
> URL: https://issues.apache.org/jira/browse/HDFS-13949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: HDFS-13949.1.patch
>
>
> The description of dfs.datanode.disk.check.timeout in hdfs-default.xml is as 
> follows:
> {code}
> 
>   dfs.datanode.disk.check.timeout
>   10m
>   
> Maximum allowed time for a disk check to complete during DataNode
> startup. If the check does not complete within this time interval
> then the disk is declared as failed. This setting supports
> multiple time unit suffixes as described in dfs.heartbeat.interval.
> If no suffix is specified then milliseconds is assumed.
>   
> 
> {code}
> I don't think the value of this config is used only during DataNode startup. 
> I think it's used whenever checking volumes.
> The description is misleading so we need to correct it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-522) Implement PutBucket REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-522:

Description: 
The PutBucket call creates a bucket using createS3Bucket, which was added as 
part of HDDS-577.

[https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]

Stub implementation is created as part of HDDS-444. Need to finalize, check the 
missing headers, add acceptance tests.
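
For illustration, a rough sketch of the endpoint delegating to the client 
(assuming the createS3Bucket(user, bucket) shape added by HDDS-577; the user 
value is a placeholder):
{code:java}
// The S3 bucket name is mapped to an Ozone volume/bucket internally by
// the Ozone Manager, so the endpoint only needs the user and the name.
String userName = "testuser"; // placeholder for the authenticated principal
client.getObjectStore().createS3Bucket(userName, bucketName);
return Response.ok().build();
{code}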

  was:
The create bucket call creates a bucket for the given volume.

https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html

Stub implementation is created as part of HDDS-444. Need to finalize, check the 
missing headers, add acceptance tests.


> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> The PutBucket call creates a bucket using createS3Bucket, which was added 
> as part of HDDS-577.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html]
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-608) Mapreduce example fails with Access denied for user hdfs. Superuser privilege is required

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-608:
-

 Summary: Mapreduce example fails with Access denied for user hdfs. 
Superuser privilege is required
 Key: HDDS-608
 URL: https://issues.apache.org/jira/browse/HDDS-608
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Right now only administrators can submit an MR job. All other users, 
including hdfs, will fail with the error below:
{code:java}
-bash-4.2$ ./ozone sh bucket create /volume2/bucket2
2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, 
with Versioning false and Storage Type set to DISK
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0003
18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0003/
18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in 
uber mode : false
18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:04:36 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_00_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 

[jira] [Updated] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-09 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDFS-13942:
-
Attachment: HDFS-13942.001.patch
Status: Patch Available  (was: In Progress)

[~ajisakaa] Thank you for reporting the issue. I have posted an initial patch 
for a jenkins run.

The following two things might cause it to fail:

1. I am not able to see StoragePolicySatisfyWorker in trunk, and thus this 
error appears to linger:
{code:java}
[ERROR] 
/Users/dchitlangia/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockStorageMovementCommand.java:33:
 error: reference not found
[ERROR] * {@link 
org.apache.hadoop.hdfs.server.datanode.StoragePolicySatisfyWorker}{code}

2. MoreExecutors#newDirectExecutorService() is only available in newer 
versions of the Guava core libraries, so depending on the version in use, the 
error may persist:
{code:java}
[ERROR] 
/Users/dchitlangia/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java:1273:
 error: reference not found
[ERROR] * {@link MoreExecutors#newDirectExecutorService()}{code}
Let me know your thoughts/inputs. Thank you!
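
For item 1, one possible fix is to drop the dangling reference (a sketch, not 
the actual patch):
{code:java}
// Before: links to a class that is not in trunk, so javadoc fails with
// "error: reference not found" under JDK 10.
 * {@link org.apache.hadoop.hdfs.server.datanode.StoragePolicySatisfyWorker}

// After: plain text leaves javadoc nothing to resolve.
 * StoragePolicySatisfyWorker (not present in trunk)
{code}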

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-13942.001.patch
>
>
> There are 212 errors in hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-523) Implement DeleteObject REST endpoint

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644211#comment-16644211
 ] 

Anu Engineer commented on HDDS-523:
---

The code changes look correct, but there are some issues in the testing 
results. It would be good to verify whether these are expected results.
 # Calling delete with the wrong volume – the bucket exists on vol2/bucket2.

{code:java}
aws s3 --endpoint-url http://localhost:9878/vol1 rm  s3://bucket2/dir1/
delete failed: s3://bucket2/dir1/ An error occurred (404) when calling the 
DeleteObject operation: Not Found{code}
This seems like the right thing to do.

2. Calling with the wrong bucket name – bucket1 instead of bucket2.
{code:java}
aengineer@HW11767 ~/a/h/h/d/t/o/c/ozones3> aws s3 --endpoint-url 
http://localhost:9878/vol2 rm  s3://bucket1/dir1/
delete failed: s3://bucket1/dir1/ An error occurred (500) when calling the 
DeleteObject operation (reached max retries: 4): Internal Server Error{code}
It failed with an internal server error. Perhaps we are getting an error in

{{OzoneBucket bucket = getBucket(volumeName, bucketName);}}

and we are not handling it correctly. It might be a good idea to add some 
logging, either at debug level or in the error path.
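
Something along these lines, for example (a sketch; the logger and the exact 
error mapping are assumptions):
{code:java}
try {
  OzoneBucket bucket = getBucket(volumeName, bucketName);
} catch (IOException ex) {
  LOG.error("Failed to look up bucket {} in volume {}",
      bucketName, volumeName, ex);
  // map to an S3-style 404/NoSuchBucket instead of a generic 500
  throw ex;
}
{code}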

3. A partial name of the object should not cause a delete. I am not able to 
explain what happened here.
{code:java}
aengineer@HW11767 ~/a/h/h/d/t/o/c/ozones3> aws s3 --endpoint-url 
http://localhost:9878/vol2 rm  s3://bucket2/dir2/
delete: s3://bucket2/dir2/{code}
The object was created with {{aws s3 --endpoint-url http://localhost:9878/vol1 
cp docker-config s3://bucket2/dir1/file1}}, i.e. its name is 
*s3://bucket2/dir1/file1*.

So I was expecting this command to fail rather than succeed.
{code:java}
aengineer@HW11767 ~/a/h/h/d/t/o/c/ozones3> aws s3 --endpoint-url 
http://localhost:9878/vol2 ls  s3://bucket1/dir1/

An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The 
specified bucket does not exist{code}
The ls call after the delete fails, saying the bucket itself was deleted. I am 
guessing we have a bug in the delete bucket path and this call landed there.

> Implement DeleteObject REST endpoint
> 
>
> Key: HDDS-523
> URL: https://issues.apache.org/jira/browse/HDDS-523
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-523.001.patch, HDDS-523.002.patch
>
>
> Simple delete Object call.
> Implemented by HDDS-444 without the acceptance tests.
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644202#comment-16644202
 ] 

Hadoop QA commented on HDFS-11396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-11396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943101/HDFS-11396-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 818d1ebc5d4e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf04f19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25243/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25243/testReport/ |
| Max. process+thread count | 3398 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644195#comment-16644195
 ] 

Hudson commented on HDDS-583:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15158 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15158/])
HDDS-583. SCM returns zero as the return code, even when invalid options 
(bharat: rev 6a39739316795a4828833e99d78aadc684270f98)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java


> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-583.001.patch
>
>
> While doing testing for HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so by default, even if wrong options 
> are passed, the parseArguments method returns HELP. This causes the exit code 
> to be 0.
> Ideally, startOpt should be set to null, which will enable it to return a 
> non-zero exit code if the options are invalid.
> {code:java}
> StartupOption startOpt = null{code}
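
A minimal sketch of the proposed change (simplified from the snippet above):
{code:java}
private static StartupOption parseArguments(String[] args) {
  int argsLen = (args == null) ? 0 : args.length;
  StartupOption startOpt = null;  // was StartupOption.HELP
  // ... match known options, leaving startOpt null when nothing matches ...
  return startOpt;  // callers treat null as invalid usage and exit non-zero
}
{code}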



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644185#comment-16644185
 ] 

Namit Maheshwari commented on HDDS-490:
---

Hi [~arpitagarwal] - I have tested both of the above scenarios.

They work fine. Thanks!

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.
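
A rough sketch of check #3 (the field and helper names are illustrative, not 
the actual OM code):
{code:java}
// Refuse to format when version metadata already exists on disk.
if (omStorage.getState() == StorageState.INITIALIZED) {
  System.err.println("Object store already exists; refusing to format it.");
  terminate(1);  // exit non-zero instead of silently reformatting
}
{code}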



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644165#comment-16644165
 ] 

Bharat Viswanadham commented on HDDS-583:
-

Thank You [~nmaheshwari] for bringing up that this has not been committed.

I have committed this change to trunk and ozone-0.3 branch. 

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-583.001.patch
>
>
> While doing testing for HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so by default, even if wrong options 
> are passed, the parseArguments method returns HELP. This causes the exit code 
> to be 0.
> Ideally, startOpt should be set to null, which will enable it to return a 
> non-zero exit code if the options are invalid.
> {code:java}
> StartupOption startOpt = null{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644167#comment-16644167
 ] 

Hrishikesh Gadre commented on HDFS-13926:
-

[~xiaochen], please find the attached patch for branch-3.0. Note that since 
HDFS-13468 and HADOOP-15507 are not available in branch-3.0, I removed the 
logic to populate the ecBytesRead metric. Please take a look and let me know 
your feedback.

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926-branch-3.0-001.patch, HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-583:

Fix Version/s: 0.4.0

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-583.001.patch
>
>
> While doing testing for HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so by default, even if wrong options 
> are passed, the parseArguments method returns HELP. This causes the exit code 
> to be 0.
> Ideally, startOpt should be set to null, which will enable it to return a 
> non-zero exit code if the options are invalid.
> {code:java}
> StartupOption startOpt = null{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-09 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13926:

Attachment: HDFS-13926-branch-3.0-001.patch

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926-branch-3.0-001.patch, HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-10-09 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644131#comment-16644131
 ] 

Xiao Chen commented on HDFS-13697:
--

I had some more discussions with Zsolt about this jira. Our impression is that 
the fix here makes the KMS handling better without worsening the 
DelegationTokenAuthenticatedURL issue Daryn mentioned. Zsolt also mentioned 
having tested this with downstream components in test clusters before.

[~zvenczel], could you comment on what downstream scenarios you have tested? We 
probably need [~xyao]'s help reviewing whether any HDP usages fall outside that 
coverage.

But assuming the above goes well, it seems reasonable to move this forward 
unless [~daryn] strongly objects.

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.09.patch, HDFS-13697.10.patch, HDFS-13697.11.patch, 
> HDFS-13697.12.patch, HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack 
> might not have a doAs privileged execution call (in the DFSClient, for 
> example). This results in losing the proxy user from the UGI, as 
> UGI.getCurrentUser finds no AccessControllerContext and does a re-login for 
> the login user only.
> This can cause the following for example: if we have set up the oozie user to 
> be entitled to perform actions on behalf of example_user but oozie is 
> forbidden to decrypt any EDEK (for security reasons), due to the above issue, 
> example_user entitlements are lost from UGI and the following error is 
> reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
> [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
> [encrypted_key]!!
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>  at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565)
>  at 
> 

[jira] [Comment Edited] (HDDS-490) Improve om and scm start up options

2018-10-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644120#comment-16644120
 ] 

Arpit Agarwal edited comment on HDDS-490 at 10/9/18 9:33 PM:
-

Looks like the docker image is already updated with the HDDS-564 changes.


was (Author: arpitagarwal):
Looks like the docker image is already updated with the HDDS-490 changes.

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644124#comment-16644124
 ] 

Namit Maheshwari commented on HDDS-564:
---

[~arpitagarwal] - I am also not able to change it from a sub-task to an issue.

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644123#comment-16644123
 ] 

Namit Maheshwari commented on HDDS-600:
---

Thanks [~hanishakoneru]. With the correct URL, the Mapreduce job fails as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# su - hdfs
Last login: Tue Oct 9 07:11:08 UTC 2018
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket1.volume1/mr_job_dir/output
18/10/09 20:24:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 20:24:09 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 20:24:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539069219098_0001
18/10/09 20:24:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 20:24:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 20:24:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539069219098_0001
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 20:24:11 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:11 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 20:24:11 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:11 INFO impl.YarnClientImpl: Submitted application 
application_1539069219098_0001
18/10/09 20:24:11 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539069219098_0001/
18/10/09 20:24:11 INFO mapreduce.Job: Running job: job_1539069219098_0001
18/10/09 20:25:04 INFO mapreduce.Job: Job job_1539069219098_0001 running in 
uber mode : false
18/10/09 20:25:04 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 20:25:04 INFO mapreduce.Job: Job job_1539069219098_0001 failed with 
state FAILED due to: Application application_1539069219098_0001 failed 20 times 
due to AM Container for appattempt_1539069219098_0001_20 exited with 
exitCode: 1
Failing this attempt.Diagnostics: [2018-10-09 20:25:04.763]Exception from 
container-launch.
Container id: container_e03_1539069219098_0001_20_03
Exit code: 1

[2018-10-09 20:25:04.765]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.


[2018-10-09 20:25:04.765]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.


For more detailed output, check the application tracking page: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/cluster/app/application_1539069219098_0001
 Then click on links to logs of each attempt.
. Failing the application.
18/10/09 20:25:05 INFO mapreduce.Job: Counters: 0
18/10/09 20:25:05 INFO conf.Configuration: Removed undeclared tags:
{code}
Yarn container logs:
{code:java}
Log Type: directory.info

Log Upload Time: Tue Oct 09 20:25:06 + 2018

Log Length: 20398

Showing 4096 bytes of 20398 total.

06:50 ./mr-framework/hadoop/lib/native/libsnappy.so.1
8651115 3324 -r-xr-xr-x   2 yarn hadoop3402313 Oct  8 06:38 
./mr-framework/hadoop/lib/native/libnativetask.so
86510584 drwxr-xr-x   3 yarn hadoop   4096 Oct  8 06:32 
./mr-framework/hadoop/sbin
86510914 -r-xr-xr-x   1 yarn hadoop   3898 Oct  8 06:33 
./mr-framework/hadoop/sbin/stop-dfs.sh
86510844 -r-xr-xr-x   1 yarn hadoop   1756 Oct  8 06:33 
./mr-framework/hadoop/sbin/stop-secure-dns.sh
86510814 -r-xr-xr-x   1 yarn hadoop   2166 Oct  8 06:32 
./mr-framework/hadoop/sbin/stop-all.sh
86510964 -r-xr-xr-x   1 yarn hadoop   1779 

[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644121#comment-16644121
 ] 

Arpit Agarwal commented on HDDS-564:


[~nmaheshwari] can you convert this from sub-task to an issue? For some reason 
I am unable to do so.

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-10-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644120#comment-16644120
 ] 

Arpit Agarwal commented on HDDS-490:


Looks like the docker image is already updated with the HDDS-490 changes.

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-607) Support S3 testing via MiniOzoneCluster

2018-10-09 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-607:
-

 Summary: Support S3 testing via MiniOzoneCluster
 Key: HDDS-607
 URL: https://issues.apache.org/jira/browse/HDDS-607
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


To write normal unit tests, we need S3Gateway support along with 
MiniOzoneCluster. This Jira proposes to add that. It will allow us to write 
simple unit tests using the AWS S3 Java SDK.
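
The kind of test this would enable, roughly (the endpoint wiring is an 
assumption until MiniOzoneCluster actually exposes the gateway):
{code:java}
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        "http://localhost:9878", "us-east-1"))  // placeholder gateway address
    .withPathStyleAccessEnabled(true)
    .build();
s3.createBucket("test-bucket");
{code}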



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-10-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644112#comment-16644112
 ] 

Arpit Agarwal commented on HDDS-490:


[~nmaheshwari] let's wait until the docker image gets updated. I believe that 
happens automagically in a few hours:
https://hub.docker.com/r/apache/hadoop-runner/builds/b5zvwpodxpwpp5e65zfb86u/

Then let's test the following:
# Smoke tests pass on trunk without your patch
# Smoke tests pass with your patch


> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-11396:
--

Assignee: Ayush Saxena

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: John Zhuge
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-09 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-564:
---
  Resolution: Fixed
Target Version/s:   (was: 0.3.0)
  Status: Resolved  (was: Patch Available)

+1 I've committed this. Thanks [~nmaheshwari].

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644103#comment-16644103
 ] 

Hudson commented on HDDS-577:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15157 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15157/])
HDDS-577. Support S3 buckets as first class objects in Ozone Manager - 
(aengineer: rev 5b7ba48cedb0d70ca154771fec48e5c4129cf29a)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java


> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-606) Create delete s3Bucket

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-606:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> We should have a new API to delete buckets created via S3.
> This delete should actually remove the bucket from the bucket table and also 
> its mapping in the S3 table in Ozone Manager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-606) Create delete s3Bucket

2018-10-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-606:
---

 Summary: Create delete s3Bucket
 Key: HDDS-606
 URL: https://issues.apache.org/jira/browse/HDDS-606
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


We should have a new API to delete buckets created via S3.

This delete should actually remove the bucket from the bucket table and also 
its mapping in the S3 table in Ozone Manager.
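
A toy sketch of the proposed flow, using plain maps as stand-ins for the OM 
tables (all names and types here are illustrative, not Ozone Manager code):

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustration only: an S3 bucket name maps to an ozone volume/bucket, and
 * delete must remove both the bucket entry and the S3 name mapping.
 */
public class S3BucketDeleteSketch {
  // Stand-ins for the OM bucket table and the S3 mapping table.
  private final Map<String, String> s3Table = new ConcurrentHashMap<>();
  private final Map<String, Object> bucketTable = new ConcurrentHashMap<>();

  public void deleteS3Bucket(String s3BucketName) throws IOException {
    String ozoneBucket = s3Table.get(s3BucketName); // e.g. "s3volume/bucket1"
    if (ozoneBucket == null) {
      throw new IOException("S3 bucket not found: " + s3BucketName);
    }
    bucketTable.remove(ozoneBucket); // drop the bucket itself
    s3Table.remove(s3BucketName);    // drop the S3 name -> bucket mapping
  }
}
{code}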



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644069#comment-16644069
 ] 

Bharat Viswanadham edited comment on HDDS-517 at 10/9/18 8:54 PM:
--

Hi [~GeLiXin]

Thank you for the updated patch.

A few comments:
 # This patch needs to be rebased on top of trunk.
 # Setting x-amz-request-id is not required, as that is handled as a common 
concern for all requests as part of HDDS-585.
 # Why do we need the check .header("Content-Length", body == null ? 0 : 
length), and why do we need an OutputStream for HeadObject? (Was this added 
for when Range support comes into the picture?)
 # During my testing (with some local changes to make the code compile), I see 
a slight difference between the output of Amazon S3 and our S3Gateway: the 
content length we return is zero when it should be the key length, and the 
content type differs. For a missing key, we now return 404.
 # I also think there is no need to add x-amz-version-id by default; when I 
copied a key and ran head-object, I did not see a version id in the default 
response. (I verified this after setting up mitmproxy.)

 
{code:java}
S3 Gateway:
HW13865:hadoop bviswanadham$ aws s3api 
--endpoint-url=http://localhost:9878/volume1 head-object --bucket bucket1 --key 
dt
{
    "ContentType": "text/plain", 
    "ContentLength": 0, 
    "Expires": "Tue, 09 Oct 2018 20:38:09 GMT", 
    "ETag": "1539117326439", 
    "CacheControl": "no-cache", 
    "Metadata": {}
}
Amazon S3
HW13865:hadoop bviswanadham$ aws s3api --no-verify-ssl head-object --bucket 
my-bucket-09-09 --key tmp/dt

{

    "AcceptRanges": "bytes",

    "ContentType": "binary/octet-stream",

    "LastModified": "Tue, 09 Oct 2018 20:05:13 GMT",

    "ContentLength": 5,

    "ETag": "\"a9564ebc3289b7a14551baf8ad5ec60a\"",

    "Metadata": {}

}


{code}
 


was (Author: bharatviswa):
Hi [~GeLiXin]

Thank you for the updated patch.

A few comments:
 # This patch needs to be rebased on top of trunk.
 # Setting x-amz-request-id is not required, as that is handled as a common 
concern for all requests as part of HDDS-585.
 # Why do we need the check .header("Content-Length", body == null ? 0 : 
length), and why do we need an OutputStream for HeadObject? (Was this added 
for when Range support comes into the picture?)
 # During my testing (with some local changes to make the code compile), I see 
a slight difference between the output of Amazon S3 and our S3Gateway: the 
content length we return is zero when it should be the key length, and the 
content type differs. For a missing key, we now return 404.
 # I also think there is no need to add x-amz-version-id by default; when I 
copied a key and ran head-object, I did not see a version id in the default 
response. (I verified this after setting up mitmproxy.)

 
{code:java}
S3 Gateway:
HW13865:hadoop bviswanadham$ aws s3api 
--endpoint-url=http://localhost:9878/volume1 head-object --bucket bucket1 --key 
dt
{
    "ContentType": "text/plain", 
    "ContentLength": 0, 
    "Expires": "Tue, 09 Oct 2018 20:38:09 GMT", 
    "ETag": "1539117326439", 
    "CacheControl": "no-cache", 
    "Metadata": {}
}
Amazon S3
HW13865:hadoop bviswanadham$ aws s3api --no-verify-ssl head-object --bucket 
my-bucket-09-09 --key tmp/dt --range 124bytes
/usr/local/aws/lib/python2.7/site-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
{
    "AcceptRanges": "bytes", 
    "ContentType": "binary/octet-stream", 
    "LastModified": "Tue, 09 Oct 2018 20:05:13 GMT", 
    "ContentLength": 5, 
    "ETag": "\"a9564ebc3289b7a14551baf8ad5ec60a\"", 
    "Metadata": {}
}
{code}
 

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Comment Edited] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644069#comment-16644069
 ] 

Bharat Viswanadham edited comment on HDDS-517 at 10/9/18 8:50 PM:
--

Hi [~GeLiXin]

Thank you for the updated patch.

A few comments:
 # This patch needs to be rebased on top of trunk.
 # Setting x-amz-request-id is not required, as that is handled as a common 
concern for all requests as part of HDDS-585.
 # Why do we need the check .header("Content-Length", body == null ? 0 : 
length), and why do we need an OutputStream for HeadObject? (Was this added 
for when Range support comes into the picture?)
 # During my testing (with some local changes to make the code compile), I see 
a slight difference between the output of Amazon S3 and our S3Gateway: the 
content length we return is zero when it should be the key length, and the 
content type differs. For a missing key, we now return 404.
 # I also think there is no need to add x-amz-version-id by default; when I 
copied a key and ran head-object, I did not see a version id in the default 
response. (I verified this after setting up mitmproxy.)

 
{code:java}
S3 Gateway:
HW13865:hadoop bviswanadham$ aws s3api 
--endpoint-url=http://localhost:9878/volume1 head-object --bucket bucket1 --key 
dt
{
    "ContentType": "text/plain", 
    "ContentLength": 0, 
    "Expires": "Tue, 09 Oct 2018 20:38:09 GMT", 
    "ETag": "1539117326439", 
    "CacheControl": "no-cache", 
    "Metadata": {}
}
Amazon S3
HW13865:hadoop bviswanadham$ aws s3api --no-verify-ssl head-object --bucket 
my-bucket-09-09 --key tmp/dt --range 124bytes
/usr/local/aws/lib/python2.7/site-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
{
    "AcceptRanges": "bytes", 
    "ContentType": "binary/octet-stream", 
    "LastModified": "Tue, 09 Oct 2018 20:05:13 GMT", 
    "ContentLength": 5, 
    "ETag": "\"a9564ebc3289b7a14551baf8ad5ec60a\"", 
    "Metadata": {}
}
{code}
 


was (Author: bharatviswa):
Hi [~GeLiXin]

Thank you for the updated patch.

A few comments:
 # This patch needs to be rebased on top of trunk.
 # Setting x-amz-request-id is not required, as that is handled as a common 
concern for all requests as part of HDDS-585.
 # Why do we need the check .header("Content-Length", body == null ? 0 : 
length), and why do we need an OutputStream for HeadObject? (Was this added 
for when Range support comes into the picture?)
 # During my testing (with some local changes to make the code compile), I see 
a slight difference between the output of Amazon S3 and our S3Gateway: the 
content length we return is zero when it should be the key length, and the 
content type differs. For a missing key, we now return 404.

 
{code:java}
S3 Gateway:
HW13865:hadoop bviswanadham$ aws s3api 
--endpoint-url=http://localhost:9878/volume1 head-object --bucket bucket1 --key 
dt
{
    "ContentType": "text/plain", 
    "ContentLength": 0, 
    "Expires": "Tue, 09 Oct 2018 20:38:09 GMT", 
    "ETag": "1539117326439", 
    "CacheControl": "no-cache", 
    "Metadata": {}
}
Amazon S3
HW13865:hadoop bviswanadham$ aws s3api --no-verify-ssl head-object --bucket 
my-bucket-09-09 --key tmp/dt --range 124bytes
/usr/local/aws/lib/python2.7/site-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
{
    "AcceptRanges": "bytes", 
    "ContentType": "binary/octet-stream", 
    "LastModified": "Tue, 09 Oct 2018 20:05:13 GMT", 
    "ContentLength": 5, 
    "ETag": "\"a9564ebc3289b7a14551baf8ad5ec60a\"", 
    "Metadata": {}
}
{code}
 

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-577:
--
   Resolution: Fixed
Fix Version/s: 0.3.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa] Thanks for the contribution. I have committed this patch to 
trunk. [~elek] Thanks for the comments.

 

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-517) Implement HeadObject REST endpoint

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644069#comment-16644069
 ] 

Bharat Viswanadham commented on HDDS-517:
-

Hi [~GeLiXin]

Thank you for the updated patch.

A few comments:
 # This patch needs to be rebased on top of trunk.
 # Setting x-amz-request-id is not required, as that is handled as a common 
concern for all requests as part of HDDS-585.
 # Why do we need the check .header("Content-Length", body == null ? 0 : 
length), and why do we need an OutputStream for HeadObject? (Was this added 
for when Range support comes into the picture?)
 # During my testing (with some local changes to make the code compile), I see 
a slight difference between the output of Amazon S3 and our S3Gateway: the 
content length we return is zero when it should be the key length, and the 
content type differs. For a missing key, we now return 404.

 
{code:java}
S3 Gateway:
HW13865:hadoop bviswanadham$ aws s3api 
--endpoint-url=http://localhost:9878/volume1 head-object --bucket bucket1 --key 
dt
{
    "ContentType": "text/plain", 
    "ContentLength": 0, 
    "Expires": "Tue, 09 Oct 2018 20:38:09 GMT", 
    "ETag": "1539117326439", 
    "CacheControl": "no-cache", 
    "Metadata": {}
}
Amazon S3
HW13865:hadoop bviswanadham$ aws s3api --no-verify-ssl head-object --bucket 
my-bucket-09-09 --key tmp/dt --range 124bytes
/usr/local/aws/lib/python2.7/site-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
{
    "AcceptRanges": "bytes", 
    "ContentType": "binary/octet-stream", 
    "LastModified": "Tue, 09 Oct 2018 20:05:13 GMT", 
    "ContentLength": 5, 
    "ETag": "\"a9564ebc3289b7a14551baf8ad5ec60a\"", 
    "Metadata": {}
}
{code}
 

> Implement HeadObject REST endpoint
> --
>
> Key: HDDS-517
> URL: https://issues.apache.org/jira/browse/HDDS-517
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-517.000.patch, HDDS-517.001.patch
>
>
> The HEAD operation retrieves metadata from an object without returning the 
> object itself. This operation is useful if you are interested only in an 
> object's metadata. To use HEAD, you must have READ access to the object.
> Steps:
>  1. Look up the volume
>  2. Read the key and return to the user.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
> We have a simple version of this call in HDDS-444 but without Range support.
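
For context, a bare-bones JAX-RS shape of such an endpoint (a hand-written 
sketch, not the patch under review; the class and helper names are made up):

{code:java}
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

/**
 * Sketch of a HeadObject endpoint: return the object's metadata as headers
 * and no body. All names here are illustrative.
 */
@Path("/{volume}/{bucket}/{key:.*}")
public class HeadObjectSketch {

  @HEAD
  public Response head(@PathParam("volume") String volume,
      @PathParam("bucket") String bucket,
      @PathParam("key") String key) {
    ObjectMeta meta = lookup(volume, bucket, key); // hypothetical lookup
    if (meta == null) {
      return Response.status(404).build();         // no such key
    }
    return Response.ok()                           // 200 with an empty body
        .header("Content-Length", meta.length)
        .header("Content-Type", meta.contentType)
        .header("Last-Modified", meta.lastModified)
        .build();
  }

  // Stand-ins so the sketch compiles on its own.
  private static final class ObjectMeta {
    long length;
    String contentType;
    String lastModified;
  }

  private ObjectMeta lookup(String v, String b, String k) {
    return null; // real code would read the key metadata from Ozone
  }
}
{code}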



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644061#comment-16644061
 ] 

Hanisha Koneru commented on HDDS-600:
-

[~nmaheshwari],

The uri pattern for o3 should be {{o3://<bucket>.<volume>/<key>}}
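
Applied to the failing wordcount run quoted below, the output URI would then 
look roughly like this (bucket first, then volume; the exact host/port 
handling depends on the configured OM address, so treat this as illustrative):

{code:java}
/usr/hdp/current/hadoop-client/bin/hadoop jar \
  /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  wordcount /tmp/mr_jobs/input/ \
  o3://bucket1.volume1/mr_job_dir/output
{code}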

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Commented] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644045#comment-16644045
 ] 

Anu Engineer commented on HDDS-605:
---

Thanks, I had not pulled the latest trunk. Thanks for resolving this.

> TestOzoneConfigurationFields fails on Trunk
> ---
>
> Key: HDDS-605
> URL: https://issues.apache.org/jira/browse/HDDS-605
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Major
>  Labels: newbie
>
> HDDS-354 removed the following keys from code. 
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
> We need to remove the same from ozone-default.xml. Lines 1108 - 1129 need 
> to be removed for this test to pass.
>  
> cc: [~hanishakoneru], [~bharatviswa]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644043#comment-16644043
 ] 

Anu Engineer commented on HDDS-577:
---

+1. I will commit this shortly; I have filed a Jira for the test failure. There 
is a whitespace issue that I will fix while committing.

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-09 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-577:
-

Assignee: Bharat Viswanadham  (was: Anu Engineer)

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644041#comment-16644041
 ] 

Bharat Viswanadham commented on HDDS-605:
-

Hi [~anu]

This is fixed as part of HDDS-599.

> TestOzoneConfigurationFields fails on Trunk
> ---
>
> Key: HDDS-605
> URL: https://issues.apache.org/jira/browse/HDDS-605
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Major
>  Labels: newbie
>
> HDDS-354 removed the following keys from code. 
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
> We need to remove the same from ozone-default.xml. Lines 1108 - 1129 need 
> to be removed for this test to pass.
>  
> cc: [~hanishakoneru], [~bharatviswa]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-605.
-
Resolution: Duplicate

> TestOzoneConfigurationFields fails on Trunk
> ---
>
> Key: HDDS-605
> URL: https://issues.apache.org/jira/browse/HDDS-605
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Major
>  Labels: newbie
>
> HDDS-354 removed the following keys from code. 
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
> We need to remove the same from ozone-default.xml. Lines 1108 - 1129 need 
> to be removed for this test to pass.
>  
> cc: [~hanishakoneru], [~bharatviswa]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-605:
-

 Summary: TestOzoneConfigurationFields fails on Trunk
 Key: HDDS-605
 URL: https://issues.apache.org/jira/browse/HDDS-605
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Anu Engineer


HDDS-354 removed the following keys from code. 
 * {{"hdds.lock.suppress.warning.interval.ms";}}
 * {{"hdds.lock.suppress.warning.interval.ms";}}

We need to remove the same from ozone-default.xml. Lines 1108 - 1129 need to 
be removed for this test to pass.

 

cc: [~hanishakoneru], [~bharatviswa]
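
For reference, each entry to delete in ozone-default.xml has roughly the 
following shape (the value and description below are made up; check the 
actual lines 1108 - 1129):

{code:xml}
<property>
  <name>hdds.lock.suppress.warning.interval.ms</name>
  <value>10000</value>
  <description>Illustrative placeholder; the real value and text differ.
    The key itself was removed from the code by HDDS-354.</description>
</property>
{code}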

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-585) Handle common request identifiers in a transparent way

2018-10-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644038#comment-16644038
 ] 

Hudson commented on HDDS-585:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15155/])
HDDS-585. Handle common request identifiers in a transparent way. (bharat: rev 
d5dd6f31fc35b890cfa241d5fce404d6774e98c6)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/exception/TestOS3Exception.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/HeadBucket.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/EndpointBase.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestHeadBucket.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestDeleteBucket.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/CommonHeadersContainerResponseFilter.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/DeleteBucket.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/RequestIdentifier.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java


> Handle common request identifiers in a transparent way
> --
>
> Key: HDDS-585
> URL: https://issues.apache.org/jira/browse/HDDS-585
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-585.001.patch, HDDS-585.003.patch, 
> HDDS-585.004.patch
>
>
> As of now multiple endpoints contain the code to handle the Amazon-specific 
> request ids.
> {code}
> setRequestId(OzoneUtils.getRequestID());
> ...
> return Response.ok().status(HttpStatus.SC_NO_CONTENT).header(
> "x-amz-request-id", getRequestId()).header("x-amz-id-2",
> RandomStringUtils.randomAlphanumeric(8, 16)).build();
> {code}
> I propose to handle the request id generation and the header insertion in 
> one location that is transparent to all the REST endpoints.
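
The commit above touches CommonHeadersContainerResponseFilter, which handles 
exactly this; as a rough illustration of the technique (a minimal sketch, not 
the committed class):

{code:java}
import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

import org.apache.commons.lang3.RandomStringUtils;

/**
 * Sketch of a JAX-RS response filter that stamps the common AWS-style
 * identifiers on every response, so individual endpoints no longer set them.
 */
@Provider
public class CommonHeadersFilterSketch implements ContainerResponseFilter {

  @Override
  public void filter(ContainerRequestContext request,
      ContainerResponseContext response) throws IOException {
    // Generated once per response, in one place, for all endpoints.
    response.getHeaders().add("x-amz-request-id",
        RandomStringUtils.randomAlphanumeric(16));
    response.getHeaders().add("x-amz-id-2",
        RandomStringUtils.randomAlphanumeric(8, 16));
  }
}
{code}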



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-11396:

Affects Version/s: (was: 3.0.0-alpha4)
   Status: Patch Available  (was: Open)

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-11396:

Attachment: HDFS-11396-01.patch

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HDFS-11396-01.patch, 
> patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


