[jira] [Commented] (HDDS-653) TestMetadataStore#testIterator fails on Windows

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648760#comment-16648760
 ] 

Hadoop QA commented on HDDS-653:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-653 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943745/HDDS-653.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c399d1192d5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 28ca5c9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1395/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1395/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestMetadataStore#testIterator fails on Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> 

[jira] [Updated] (HDDS-653) TestMetadataStore#testIterator fails on Windows

2018-10-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-653:
---
Summary: TestMetadataStore#testIterator fails on Windows  (was: 
TestMetadataStore#testIterator fails in Windows)

> TestMetadataStore#testIterator fails on Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-653.001.patch
>
>
> Running the unit tests for the hdds-common module, I found one failing unit 
> test on Windows.
> {noformat}
> java.io.IOException: Unable to delete file: 
> target\test\data\KbmK7CPN1M\MANIFEST-02
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
>   at 
> org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
> {noformat}
> Looking into this, we forgot to close the DB store, which causes the file 
> deletion to fail on Windows.
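
A minimal sketch of the kind of fix involved (names are illustrative, not a 
quote from the attached patch): close the store before deleting its directory, 
since Windows refuses to delete files that still have open handles.
{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.utils.MetadataStore;
import org.apache.hadoop.utils.MetadataStoreBuilder;

public class CloseBeforeDelete {
  // Hedged sketch: build a store the way the test does, use it, then close it
  // before cleanup so the directory delete also succeeds on Windows.
  static void run(File dbDir) throws IOException {
    MetadataStore dbStore = MetadataStoreBuilder.newBuilder()
        .setConf(new OzoneConfiguration())
        .setCreateIfMissing(true)
        .setDbFile(dbDir)
        .build();
    try {
      // ... exercise the iterator, as TestMetadataStore#testIterator does ...
    } finally {
      dbStore.close();                  // release the DB file handles first
      FileUtils.deleteDirectory(dbDir); // now succeeds on Windows as well
    }
  }
}
{code}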



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648744#comment-16648744
 ] 

Yiqun Lin edited comment on HDDS-653 at 10/13/18 4:46 AM:
--

Attaching the patch, which adds the close operation.
I have verified this test locally; it passes after this change.
{noformat}
[INFO] Running org.apache.hadoop.utils.TestMetadataStore
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 
3.77 s - in org.apache.hadoop.utils.TestMetadataStore
{noformat}


was (Author: linyiqun):
Attaching the patch, which adds the close operation.
I have verified this test locally; it passes after this change.
{noformat}
[INFO] Running org.apache.hadoop.utils.TestMetadataStore
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 
{noformat}

> TestMetadataStore#testIterator fails in Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-653.001.patch
>
>
> Running the unit tests for the hdds-common module, I found one failing unit 
> test on Windows.
> {noformat}
> java.io.IOException: Unable to delete file: 
> target\test\data\KbmK7CPN1M\MANIFEST-02
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
>   at 
> org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
> {noformat}
> Looking into this, we forgot to close the DB store, which causes the file 
> deletion to fail on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648744#comment-16648744
 ] 

Yiqun Lin edited comment on HDDS-653 at 10/13/18 4:46 AM:
--

Attaching the patch, which adds the close operation.
I have verified this test locally; it passes after this change.
{noformat}
[INFO] Running org.apache.hadoop.utils.TestMetadataStore
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 
{noformat}


was (Author: linyiqun):
Attaching the patch, which adds the close operation.
I have verified this test locally; it passes after this change.


> TestMetadataStore#testIterator fails in Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-653.001.patch
>
>
> Running the unit tests for the hdds-common module, I found one failing unit 
> test on Windows.
> {noformat}
> java.io.IOException: Unable to delete file: 
> target\test\data\KbmK7CPN1M\MANIFEST-02
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
>   at 
> org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
> {noformat}
> Looking into this, we forgot to close the DB store, which causes the file 
> deletion to fail on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13989) RBF: Add FSCK to the Router

2018-10-12 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648747#comment-16648747
 ] 

Fei Hui commented on HDFS-13989:


[~elgoiri] Thanks. Got it

> RBF: Add FSCK to the Router
> ---
>
> Key: HDFS-13989
> URL: https://issues.apache.org/jira/browse/HDFS-13989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13989.001.patch
>
>
> The namenode supports FSCK.
> The Router should be able to forward FSCK to the right Namenode and aggregate 
> the results.
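
As a hedged sketch of the idea only (not HDFS-13989's actual design; both 
helpers below are hypothetical stand-ins for mount-table resolution and the 
fsck call), the Router could fan the request out and concatenate the reports:
{code:java}
// resolveNamenodes() and runFsck() are hypothetical helpers, not real APIs.
StringBuilder aggregated = new StringBuilder();
for (String nnAddress : resolveNamenodes(path)) {
  aggregated.append("Namenode ").append(nnAddress).append(":\n")
            .append(runFsck(nnAddress, path));
}
return aggregated.toString();
{code}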



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-653:
---
Status: Patch Available  (was: Open)

Attaching the patch, which adds the close operation.
I have verified this test locally; it passes after this change.


> TestMetadataStore#testIterator fails in Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-653.001.patch
>
>
> Running the unit tests for the hdds-common module, I found one failing unit 
> test on Windows.
> {noformat}
> java.io.IOException: Unable to delete file: 
> target\test\data\KbmK7CPN1M\MANIFEST-02
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
>   at 
> org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
> {noformat}
> Looking into this, we forgot to close the DB store, which causes the file 
> deletion to fail on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-653:
---
Attachment: HDDS-653.001.patch

> TestMetadataStore#testIterator fails in Windows
> ---
>
> Key: HDDS-653
> URL: https://issues.apache.org/jira/browse/HDDS-653
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-653.001.patch
>
>
> Running the unit tests for the hdds-common module, I found one failing unit 
> test on Windows.
> {noformat}
> java.io.IOException: Unable to delete file: 
> target\test\data\KbmK7CPN1M\MANIFEST-02
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
>   at 
> org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
> {noformat}
> Looking into this, we forgot to close the DB store, which causes the file 
> deletion to fail on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDDS-653:
--

 Summary: TestMetadataStore#testIterator fails in Windows
 Key: HDDS-653
 URL: https://issues.apache.org/jira/browse/HDDS-653
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.2.1
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Running the unit tests for the hdds-common module, I found one failing unit 
test on Windows.
{noformat}
java.io.IOException: Unable to delete file: 
target\test\data\KbmK7CPN1M\MANIFEST-02
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
at 
org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
{noformat}

Looking into this, we forgot to close the DB store, which causes the file 
deletion to fail on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648740#comment-16648740
 ] 

Hadoop QA commented on HDDS-518:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-ozone/s3gateway: The patch generated 4 
new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943741/HDDS-518.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9bc64d53c3e8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 28ca5c9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1394/artifact/out/diff-checkstyle-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1394/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1394/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648714#comment-16648714
 ] 

chencan commented on HDDS-518:
--

Thanks for your review, [~bharatviswa]. I have removed the volume name from 
the path param and used the new APIs in the v2 patch.

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch, HDDS-518.002.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.
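
For orientation, a hedged JAX-RS sketch of the endpoint shape (the class name, 
the bucket lookup, and the omission of the replication parameters are 
assumptions, not the actual patch):
{code:java}
import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

@Path("/{bucket}/{path:.+}")
public class PutObjectSketch {

  @PUT
  public Response put(@PathParam("bucket") String bucketName,
                      @PathParam("path") String keyPath,
                      @HeaderParam("Content-Length") long length,
                      InputStream body) throws IOException {
    OzoneBucket bucket = getBucket(bucketName); // lookup elided in this sketch
    // Stream the payload into Ozone; replication parameters are omitted here.
    try (OzoneOutputStream out = bucket.createKey(keyPath, length)) {
      IOUtils.copy(body, out);
    }
    return Response.ok().build();
  }

  private OzoneBucket getBucket(String name) {
    throw new UnsupportedOperationException("bucket resolution elided");
  }
}
{code}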



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-518:
-
Attachment: HDDS-518.002.patch

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch, HDDS-518.002.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-518:
-
Status: Patch Available  (was: Open)

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch, HDDS-518.002.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-518:
-
Status: Open  (was: Patch Available)

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-518:
-
Attachment: (was: hdds-518.002.patch)

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-518:
-
Attachment: hdds-518.002.patch

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (as in the upload from the 'ozone sh' command) and 
> support the replication and replication-type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648650#comment-16648650
 ] 

Bharat Viswanadham edited comment on HDDS-615 at 10/13/18 12:46 AM:


[~elek]

But I still see the Jenkins job hitting the same error. 


was (Author: bharatviswa):
[~elek]

But I still see the Jenkins job hitting the same error. 

> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-615.001.patch
>
>
> In the Yetus build of HDDS-523, the build of the dist project failed:
> {code:java}
> Mon Oct  8 14:16:06 UTC 2018
> cd /testptch/hadoop/hadoop-ozone/dist
> /usr/bin/mvn -Phdds 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
> -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dcheckstyle.skip=true -Dfindbugs.skip=true
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Apache Hadoop Ozone Distribution 0.3.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-ozone-dist 
> ---
> [INFO] Deleting /testptch/hadoop/hadoop-ozone/dist (includes = 
> [dependency-reduced-pom.xml], excludes = [])
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-ozone-dist 
> ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: /testptch/hadoop/hadoop-ozone/dist/target/test-dir
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-ozone-dist ---
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
> cp: cannot stat 
> '/testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar':
>  No such file or directory
> Current directory /testptch/hadoop/hadoop-ozone/dist/target
> $ rm -rf ozone-0.3.0-SNAPSHOT
> $ mkdir ozone-0.3.0-SNAPSHOT
> $ cd ozone-0.3.0-SNAPSHOT
> $ cp -p /testptch/hadoop/LICENSE.txt .
> $ cp -p /testptch/hadoop/NOTICE.txt .
> $ cp -p /testptch/hadoop/README.txt .
> $ mkdir -p ./share/hadoop/mapreduce
> $ mkdir -p ./share/hadoop/ozone
> $ mkdir -p ./share/hadoop/hdds
> $ mkdir -p ./share/hadoop/yarn
> $ mkdir -p ./share/hadoop/hdfs
> $ mkdir -p ./share/hadoop/common
> $ mkdir -p ./share/ozone/web
> $ mkdir -p ./bin
> $ mkdir -p ./sbin
> $ mkdir -p ./etc
> $ mkdir -p ./libexec
> $ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
> etc/hadoop
> $ cp 
> /testptch/hadoop/hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties 
> etc/hadoop
> $ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
> bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
> bin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
>  libexec/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh 
> libexec/
> $ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
>  sbin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
> sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
> $ mkdir -p ./share/hadoop/ozonefs
> $ cp 
> /testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
>  ./share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Failed!
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 7.832 s
> [INFO] Finished at: 2018-10-08T14:16:16+00:00
> [INFO] Final Memory: 33M/625M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (dist) on project hadoop-ozone-dist: Command execution 

[jira] [Commented] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648650#comment-16648650
 ] 

Bharat Viswanadham commented on HDDS-615:
-

[~elek]

But I still see the Jenkins job hitting the same error. 

> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-615.001.patch
>
>
> In the Yetus build of HDDS-523, the build of the dist project failed:
> {code:java}
> Mon Oct  8 14:16:06 UTC 2018
> cd /testptch/hadoop/hadoop-ozone/dist
> /usr/bin/mvn -Phdds 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
> -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dcheckstyle.skip=true -Dfindbugs.skip=true
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Apache Hadoop Ozone Distribution 0.3.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-ozone-dist 
> ---
> [INFO] Deleting /testptch/hadoop/hadoop-ozone/dist (includes = 
> [dependency-reduced-pom.xml], excludes = [])
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-ozone-dist 
> ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: /testptch/hadoop/hadoop-ozone/dist/target/test-dir
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-ozone-dist ---
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
> cp: cannot stat 
> '/testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar':
>  No such file or directory
> Current directory /testptch/hadoop/hadoop-ozone/dist/target
> $ rm -rf ozone-0.3.0-SNAPSHOT
> $ mkdir ozone-0.3.0-SNAPSHOT
> $ cd ozone-0.3.0-SNAPSHOT
> $ cp -p /testptch/hadoop/LICENSE.txt .
> $ cp -p /testptch/hadoop/NOTICE.txt .
> $ cp -p /testptch/hadoop/README.txt .
> $ mkdir -p ./share/hadoop/mapreduce
> $ mkdir -p ./share/hadoop/ozone
> $ mkdir -p ./share/hadoop/hdds
> $ mkdir -p ./share/hadoop/yarn
> $ mkdir -p ./share/hadoop/hdfs
> $ mkdir -p ./share/hadoop/common
> $ mkdir -p ./share/ozone/web
> $ mkdir -p ./bin
> $ mkdir -p ./sbin
> $ mkdir -p ./etc
> $ mkdir -p ./libexec
> $ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
> etc/hadoop
> $ cp 
> /testptch/hadoop/hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties 
> etc/hadoop
> $ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
> bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
> bin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
>  libexec/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh 
> libexec/
> $ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
>  sbin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
> sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
> $ mkdir -p ./share/hadoop/ozonefs
> $ cp 
> /testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
>  ./share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Failed!
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 7.832 s
> [INFO] Finished at: 2018-10-08T14:16:16+00:00
> [INFO] Final Memory: 33M/625M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (dist) on project hadoop-ozone-dist: Command execution failed. Process exited 
> with an error: 1 (Exit value: 1) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, 

[jira] [Work started] (HDDS-563) Support hybrid VirtualHost style URL

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-563 started by Bharat Viswanadham.
---
> Support hybrid VirtualHost style URL
> 
>
> Key: HDDS-563
> URL: https://issues.apache.org/jira/browse/HDDS-563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> "I found that we need to support an url scheme where the volume comes from 
> the domain ([http://vol1.s3g/]...) but the bucket is used as path style 
> ([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a 
> unit tests (not sure, but it seems) requires this schema."
> So hybrid means that the volume is identified from the host name, but the 
> bucket name comes from the URL path.
> This Jira was created from [~elek]'s comments on the HDDS-525 Jira.
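
A toy parser for such a hybrid scheme (the names below are assumptions, not 
the eventual implementation): the volume is taken from the Host header and 
the bucket from the first path segment.
{code:java}
// Given Host "vol1.s3g" and path "/bucket1/key", returns {"vol1", "bucket1"}.
// "gatewayDomain" stands in for the configured s3g domain name.
static String[] parseHybrid(String host, String path, String gatewayDomain) {
  String volume = host.endsWith("." + gatewayDomain)
      ? host.substring(0, host.length() - gatewayDomain.length() - 1)
      : null;                                   // not virtual-host style
  String[] segments = path.replaceFirst("^/", "").split("/", 2);
  String bucket = segments[0].isEmpty() ? null : segments[0];
  return new String[] {volume, bucket};
}
{code}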



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648632#comment-16648632
 ] 

Hudson commented on HDDS-613:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15202 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15202/])
HDDS-613. Update HeadBucket, DeleteBucket to not to have volume in path. 
(bharat: rev 28ca5c9d1647837a1b2480d8935deffc6f68d807)
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/HeadBucket.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestHeadBucket.java
* (edit) hadoop-ozone/dist/src/main/smoketest/commonlib.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/bucketv2.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/bucketv4.robot
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/DeleteBucket.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestDeleteBucket.java


> Update  HeadBucket, DeleteBucket to not to have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests so that they do not have the volume in their path 
> param.
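
Illustratively (the annotations below sketch the endpoint shape and are not 
quoted from the patch), the change drops the volume segment from the bucket 
endpoints, e.g. HEAD /bucket1 instead of HEAD /vol1/bucket1:
{code:java}
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}")                // previously: @Path("/{volume}/{bucket}")
public class HeadBucketSketch {
  @HEAD
  public Response head(@PathParam("bucket") String bucketName) {
    // The owning volume is resolved internally, not parsed from the URL.
    return Response.ok().build();
  }
}
{code}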



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-12 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648620#comment-16648620
 ] 

Xiaoyu Yao commented on HDDS-637:
-

Thanks [~nmaheshwari] for reporting the issue. Did you restart the DN while 
the MR result file was being written? 
I tried a few times but could not reproduce the issue. 

Reviewing the code and the log message, it seems this can happen when pipeline 
creation and pipeline closing (the stale-DN handler), executed by different 
threads, race, leading to an invalid state transition. This will be improved 
with HDDS-587 and follow-up JIRAs. 
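
To illustrate the shape of that race (the types and method names below are 
hypothetical, not HDDS-587's design), the guard has to serialize the two 
transitions and reject the one that is no longer valid:
{code:java}
// The stale-DN handler may try to close a pipeline another thread is still
// creating. Serializing transitions and checking the current state avoids
// the invalid CREATING -> CLOSED move described above.
enum PipelineState { CREATING, OPEN, CLOSED }

final class PipelineStateGuard {
  private PipelineState state = PipelineState.CREATING;

  synchronized void finishCreate() {
    if (state == PipelineState.CREATING) {
      state = PipelineState.OPEN;
    }
  }

  synchronized void tryClose() {
    if (state == PipelineState.CREATING) {
      return; // defer: creation has not finished yet
    }
    state = PipelineState.CLOSED;
  }
}
{code}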

> Not able to access the part-r-00000 file after the MR job succeeds
> --
>
> Key: HDDS-637
> URL: https://issues.apache.org/jira/browse/HDDS-637
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Run a MR job
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
> 18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539295307098_0003
> 18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539295307098_0003
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
> application_1539295307098_0003
> 18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
> 18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
> 18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
> uber mode : false
> 18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
> 18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
> successfully
> 18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
> File System Counters
> FILE: Number of bytes read=6332
> FILE: Number of bytes written=532585
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=215876
> HDFS: Number of bytes written=0
> HDFS: Number of read operations=2
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=0
> O3: Number of bytes read=0
> O3: Number of bytes written=0
> O3: Number of read operations=0
> O3: Number of large read operations=0
> O3: Number of write operations=0
> Job Counters
> Launched map tasks=1
> Launched reduce tasks=1
> Rack-local map tasks=1
> Total time spent by all maps in occupied slots (ms)=25392
> Total time spent by all reduces in occupied slots (ms)=103584
> Total time spent by all map tasks (ms)=6348
> Total time spent by all reduce tasks (ms)=12948
> Total vcore-milliseconds taken by all map tasks=6348
> Total vcore-milliseconds taken by all reduce tasks=12948
> Total megabyte-milliseconds taken by all map tasks=26001408
> Total megabyte-milliseconds taken by all reduce tasks=106070016
> Map-Reduce Framework
> Map input records=716
> Map output records=32019
> Map output bytes=343475
> Map output materialized bytes=6332
> Input split bytes=121
> Combine input records=32019
> Combine output records=461
> Reduce input groups=461
> Reduce shuffle bytes=6332
> Reduce input 

[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

  Resolution: Fixed
Target Version/s: 0.3.0, 0.4.0
  Status: Resolved  (was: Patch Available)

Thank you, [~anu], for the review.

I have committed this to trunk and ozone-0.3.

> Update  HeadBucket, DeleteBucket to not to have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests so that they do not have the volume in their path 
> param.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

Fix Version/s: 0.4.0
   0.3.0

> Update  HeadBucket, DeleteBucket to not to have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests so that they do not have the volume in their path 
> param.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

Target Version/s: 0.3.0  (was: 0.3.0, 0.4.0)

> Update  HeadBucket, DeleteBucket to not to have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests so that they do not have the volume in their path 
> param.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648601#comment-16648601
 ] 

Anu Engineer commented on HDDS-650:
---

Moving the stack trace to a comment.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at <console>:25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 

[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-650:
--
Description: 
The Spark job is not able to pick up the Ozone configuration.
Copying ozone-site.xml to the /etc/spark2/conf directory does not work either.

This works fine, however, when ozone.om.address and ozone.scm.client.address 
are specified in core-site.xml.

 

  was:
The Spark job is not able to pick up the Ozone configuration.
{code:java}
 {code}
Copying ozone-site.xml to the /etc/spark2/conf directory does not work either.

This works fine, however, when ozone.om.address and ozone.scm.client.address 
are specified in core-site.xml.

 


> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> The Spark job is not able to pick up the Ozone configuration.
> Copying ozone-site.xml to the /etc/spark2/conf directory does not work 
> either.
> This works fine, however, when ozone.om.address and ozone.scm.client.address 
> are specified in core-site.xml.
>  
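
As a workaround sketch (the two property names come from the report; the 
host:port values are placeholders), the keys the RpcClient requires can be 
made visible on the Hadoop Configuration that Spark builds:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Spark's Hadoop Configuration loads core-site.xml but not ozone-site.xml,
// so set (or define in core-site.xml) the keys the stack trace asks for.
Configuration conf = new Configuration();               // picks up core-site.xml
conf.set("ozone.om.address", "om-host:9862");           // placeholder host:port
conf.set("ozone.scm.client.address", "scm-host:9860");  // placeholder host:port
{code}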



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-650:
--
Description: 
The Spark job is not able to pick up the Ozone configuration.
{code:java}
 {code}
Copying ozone-site.xml to the /etc/spark2/conf directory does not work either.

This works fine, however, when ozone.om.address and ozone.scm.client.address 
are specified in core-site.xml.

 

  was:
The Spark job is not able to pick up the Ozone configuration.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at <console>:25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:114)
at 

[jira] [Commented] (HDDS-445) Create a logger to print out all of the incoming requests

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648598#comment-16648598
 ] 

Hudson commented on HDDS-445:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15201/])
HDDS-445. Create a logger to print out all of the incoming requests. 
(aengineer: rev 3c1fe073d2fef76676660144e7dce2050761ae64)
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
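
For context, Hadoop's stock log4j.properties already ships commented samples of 
this kind of Jetty request-log wiring. A sketch of the pattern follows; the 
s3gateway logger name and file name are illustrative assumptions, not 
necessarily the exact lines this patch adds:
{code}
# illustrative only: turn on a Jetty request log for the S3 gateway
log4j.logger.http.requests.s3gateway=INFO,s3grequestlog
log4j.appender.s3grequestlog=org.apache.hadoop.http.HttpRequestLogAppender
log4j.appender.s3grequestlog.Filename=${hadoop.log.dir}/jetty-s3gateway-yyyy_mm_dd.log
log4j.appender.s3grequestlog.RetainDays=3
{code}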


> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-445.00.patch, HDDS-445.01.patch, HDDS-445.02.patch
>
>
> For the HTTP server of HDDS-444 we need an option to print out all the 
> HttpRequests (header + body).
> To create a 100% s3-compatible interface, we need to test it with multiple 
> external tools (such as s3cli). While mitmproxy is always our best friend, to 
> make it easier to identify problems we need a way to log all the 
> incoming requests with a logger that can be turned on.
> Most probably we already have such a filter in hadoop/jetty; the only 
> thing we need is to configure it.






[jira] [Comment Edited] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648593#comment-16648593
 ] 

Arpit Agarwal edited comment on HDDS-651 at 10/12/18 11:46 PM:
---

This is a good idea [~nmaheshwari].

Making it a blocker for 0.3.0, as it would be good to get the change done 
sooner rather than later.


was (Author: arpitagarwal):
This is a good idea [~nmaheshwari].

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Namit Maheshwari
>Priority: Blocker
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-651:
---
Priority: Blocker  (was: Major)

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Namit Maheshwari
>Priority: Blocker
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-651:
---
Issue Type: Improvement  (was: Task)

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Commented] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648593#comment-16648593
 ] 

Arpit Agarwal commented on HDDS-651:


This is a good idea [~nmaheshwari].

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Updated] (HDDS-445) Create a logger to print out all of the incoming requests

2018-10-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-445:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the reviews. [~bharatviswa] Thanks for the 
contribution. I have committed this patch to the trunk and ozone-0.3 branch.

> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-445.00.patch, HDDS-445.01.patch, HDDS-445.02.patch
>
>
> For the HTTP server of HDDS-444 we need an option to print out all the 
> HttpRequests (header + body).
> To create a 100% s3-compatible interface, we need to test it with multiple 
> external tools (such as s3cli). While mitmproxy is always our best friend, to 
> make it easier to identify problems we need a way to log all the 
> incoming requests with a logger that can be turned on.
> Most probably we already have such a filter in hadoop/jetty; the only 
> thing we need is to configure it.






[jira] [Commented] (HDDS-613) Update HeadBucket, DeleteBucket to not have volume in path

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648578#comment-16648578
 ] 

Hadoop QA commented on HDDS-613:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-613 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943724/HDDS-613.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 00aef6a94982 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HDDS-652) Properties in ozone-site.xml do not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-652:
-

 Summary: Properties in ozone-site.xml do not work well with IP 
 Key: HDDS-652
 URL: https://issues.apache.org/jira/browse/HDDS-652
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


There have been cases where properties in ozone-site.xml do not work well 
with IPs.

If properties like ozone.om.address are changed to use hostnames, they work 
well.

Ideally this should work fine with both IPs and hostnames.
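
A sketch of the two forms of the same property in ozone-site.xml; the IP, host 
name, and port are placeholders. The first form is the one reported to 
misbehave, the second is the one that works:
{code:xml}
<!-- reported not to work: IP form -->
<property>
  <name>ozone.om.address</name>
  <value>10.0.0.5:9862</value>
</property>

<!-- works: hostname form -->
<property>
  <name>ozone.om.address</name>
  <value>om-host.example.com:9862</value>
</property>
{code}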






[jira] [Updated] (HDDS-652) Properties in ozone-site.xml do not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-652:
--
Target Version/s: 0.3.0
   Fix Version/s: 0.3.0

> Properties in ozone-site.xml do not work well with IP 
> 
>
> Key: HDDS-652
> URL: https://issues.apache.org/jira/browse/HDDS-652
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> There have been cases where properties in ozone-site.xml do not work well 
> with IPs.
> If properties like ozone.om.address are changed to use hostnames, they 
> work well.
> Ideally this should work fine with both IPs and hostnames.






[jira] [Commented] (HDDS-445) Create a logger to print out all of the incoming requests

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648576#comment-16648576
 ] 

Anu Engineer commented on HDDS-445:
---

+1, I will commit this shortly.

> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-445.00.patch, HDDS-445.01.patch, HDDS-445.02.patch
>
>
> For the HTTP server of HDDS-444 we need an option to print out all the 
> HttpRequests (header + body).
> To create a 100% s3-compatible interface, we need to test it with multiple 
> external tools (such as s3cli). While mitmproxy is always our best friend, to 
> make it easier to identify problems we need a way to log all the 
> incoming requests with a logger that can be turned on.
> Most probably we already have such a filter in hadoop/jetty; the only 
> thing we need is to configure it.






[jira] [Commented] (HDDS-613) Update HeadBucket, DeleteBucket to not have volume in path

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648570#comment-16648570
 ] 

Anu Engineer commented on HDDS-613:
---

+1, pending Jenkins. For now, let us go with the approach where volumes are 
removed from the code; it is less confusing for new contributors.

> Update HeadBucket, DeleteBucket to not have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests so they do not have the volume in their path param.
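
Illustratively, with a hypothetical volume1/bucket1, the change drops the 
volume segment from the request paths:
{code}
# before: volume in the path
HEAD /volume1/bucket1
DELETE /volume1/bucket1

# after: bucket only
HEAD /bucket1
DELETE /bucket1
{code}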






[jira] [Commented] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648567#comment-16648567
 ] 

Hadoop QA commented on HDDS-615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
52m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
50s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-615 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943200/HDDS-615.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 7902d171a89e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddc9649 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1392/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1392/testReport/ |
| Max. process+thread count | 365 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1392/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> 

[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-651:
--
Target Version/s: 0.3.0

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Created] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-651:
-

 Summary: Rename o3 to o3fs for Filesystem
 Key: HDDS-651
 URL: https://issues.apache.org/jira/browse/HDDS-651
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


I propose that we rename o3 to o3fs for Filesystem.

It creates a lot of confusion while using the same name o3 for different 
purposes.
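
For illustration, the proposed scheme change as it would look in a Filesystem 
URI (path borrowed from the HDDS-650 example):
{code}
# today: same scheme as the Ozone shell
o3://bucket2.volume2/passwd
# proposed: distinct scheme for the Hadoop-compatible filesystem
o3fs://bucket2.volume2/passwd
{code}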






[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-651:
--
Fix Version/s: 0.3.0

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> It creates a lot of confusion while using the same name o3 for different 
> purposes.






[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-650:
--
Target Version/s: 0.3.0

> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> Spark job is not able to pick up Ozone configuration.
> {code:java}
> -bash-4.2$ spark-shell --master yarn-client --jars 
> /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> Spark context Web UI available at 
> http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
> Spark context available as 'sc' (master = yarn, app id = 
> application_1539295307098_0011).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
>       /_/
> Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala>
> scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
> input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
> MapPartitionsRDD[1] at textFile at <console>:24
> scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
> 1)).reduceByKey(_+_);
> count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at 
> reduceByKey at <console>:25
> scala> count.cache()
> res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25
> scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
> [Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 
> in stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, 
> executor 1): java.io.IOException: Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
> at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> at org.apache.spark.scheduler.Task.run(Task.scala:109)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 

[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-650:
--
Fix Version/s: 0.3.0

> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> Spark job is not able to pick up Ozone configuration.
> {code:java}
> -bash-4.2$ spark-shell --master yarn-client --jars 
> /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> Spark context Web UI available at 
> http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
> Spark context available as 'sc' (master = yarn, app id = 
> application_1539295307098_0011).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
>       /_/
> Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala>
> scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
> input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
> MapPartitionsRDD[1] at textFile at <console>:24
> scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
> 1)).reduceByKey(_+_);
> count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at 
> reduceByKey at <console>:25
> scala> count.cache()
> res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25
> scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
> [Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 
> in stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, 
> executor 1): java.io.IOException: Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
> at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> at org.apache.spark.scheduler.Task.run(Task.scala:109)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 

[jira] [Created] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-650:
-

 Summary: Spark job is not able to pick up Ozone configuration
 Key: HDDS-650
 URL: https://issues.apache.org/jira/browse/HDDS-650
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Spark job is not able to pick up Ozone configuration.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at <console>:25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 

[jira] [Created] (HDDS-649) Parallel test execution is broken

2018-10-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-649:
--

 Summary: Parallel test execution is broken
 Key: HDDS-649
 URL: https://issues.apache.org/jira/browse/HDDS-649
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal


Parallel tests (with mvn test -Pparallel-tests) give unpredictable results, 
likely because surefire is parallelizing test cases within a class.

It looks like surefire has options to parallelize at the class level.
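
A minimal sketch of class-level parallelism in the maven-surefire-plugin 
configuration; the thread count is a placeholder and would need tuning:
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- run whole test classes in parallel, never methods within a class -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
{code}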






[jira] [Commented] (HDDS-613) Update HeadBucket, DeleteBucket to not to have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648547#comment-16648547
 ] 

Bharat Viswanadham commented on HDDS-613:
-

Ran the acceptance tests with the s3 gateway and the aws endpoint.

 
{code:java}
HW13865:smoketest bviswanadham$ ./test.sh --env ozones3 s3
Waiting 30s for cluster start up...
==
S3                                                                            
==
S3.Awscli :: S3 gateway test with aws cli                                     
==
Create volume and bucket for the tests                                | PASS |
--
Install aws s3 cli                                                    | PASS |
--
File upload and directory list                                        | PASS |
--
S3.Awscli :: S3 gateway test with aws cli                             | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
S3.Bucketv2 :: S3 gateway test with aws cli for bucket operations             
==
Setup s3 Tests                                                        | PASS |
--
Create Bucket                                                         | PASS |
--
Head Bucket                                                           | PASS |
--
Delete Bucket                                                         | PASS |
--
S3.Bucketv2 :: S3 gateway test with aws cli for bucket operations     | PASS |
4 critical tests, 4 passed, 0 failed
4 tests total, 4 passed, 0 failed
==
S3.Bucketv4 :: S3 gateway test with aws cli for bucket operations             
==
Setup s3 Tests                                                        | PASS |
--
Create Bucket                                                         | PASS |
--
Head Bucket                                                           | PASS |
--
Delete Bucket                                                         | PASS |
--
S3.Bucketv4 :: S3 gateway test with aws cli for bucket operations     | PASS |
4 critical tests, 4 passed, 0 failed
4 tests total, 4 passed, 0 failed
==
S3                                                                    | PASS |
11 critical tests, 11 passed, 0 failed
11 tests total, 11 passed, 0 failed
==
Output:  /opt/hadoop/output.xml
Log:     /opt/hadoop/log.html
Report:  /opt/hadoop/report.html
Stopping ozones3_ozoneManager_1 ... done
Stopping ozones3_s3g_1          ... done
Stopping ozones3_datanode_1     ... done
Stopping ozones3_scm_1          ... done
Removing ozones3_ozoneManager_1 ... done
Removing ozones3_s3g_1          ... done
Removing ozones3_datanode_1     ... done
Removing ozones3_scm_1          ... done
Removing network ozones3_default
HW13865:smoketest bviswanadham$ robot -v 
ENDPOINT_URL:https://s3.us-east-1.amazonaws.com -v OZONE_TEST:false -v 
BUCKET:bh-9098 -v NONEXIST-BUCKET:bh-909 s3/bucketv4.robot
==
Bucketv4 :: S3 gateway test with aws cli for bucket operations                
==
Setup s3 Tests                                                        | PASS |
--
Create Bucket                                                         | PASS |
--
Head Bucket                        

[jira] [Commented] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648546#comment-16648546
 ] 

Hadoop QA commented on HDDS-646:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestKeyDeletingService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-646 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943719/HDDS-646.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 38ff9b15967d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5c8e023 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1391/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1391/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1391/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically 

[jira] [Work started] (HDDS-516) Implement CopyObject REST endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-516 started by Bharat Viswanadham.
---
> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager. This API can only be done 
> after the PUT OBJECT call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename.
> Work Items or JIRAs:
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)
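
For illustration, the kind of client call that would exercise this endpoint 
once implemented; the bucket and key names are made up, and the default s3 
gateway port is assumed:
{code}
# the aws cli sets the x-amz-copy-source header for us
aws s3api copy-object --endpoint-url http://localhost:9878 \
  --copy-source bucket1/source.txt \
  --bucket bucket1 --key copy-of-source.txt
{code}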






[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

Status: Open  (was: Patch Available)

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch
>
>
> The Copy object is a simple call to Ozone Manager. This API can only be done 
> after the PUT OBJECT call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename.
> Work Items or JIRAs:
> Detect the presence of the extra header - x-amz-copy-source
> Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)






[jira] [Comment Edited] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648543#comment-16648543
 ] 

Bharat Viswanadham edited comment on HDDS-518 at 10/12/18 10:41 PM:


Hi [~candychencan],

Thank you for working on this.

A few comments:
 # We can remove the volume name from the path param.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 


was (Author: bharatviswa):
Hi [~candychencan] 
 # We can remove volume name from path param.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below using new API's.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The aws reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add a 
> configurable chunk size (what we have in the upload from the 'ozone sh' 
> command) and support replication and replication-type parameters.
> We also need to support the Content-MD5 header.






[jira] [Comment Edited] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648543#comment-16648543
 ] 

Bharat Viswanadham edited comment on HDDS-518 at 10/12/18 10:40 PM:


Hi [~candychencan] 
 # We can remove volume name from path param.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below using new API's.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 
 {code}


was (Author: bharatviswa):
Hi [~candychencan] 
# We can remove the volume name from the path param.
# We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 
 

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The aws reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need add the 
> configurable chunk size (what we have in the upload from 'ozone sh' command), 
> and support replication, replication type parameters.
> We also need to support Content-MD5 header



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648543#comment-16648543
 ] 

Bharat Viswanadham edited comment on HDDS-518 at 10/12/18 10:40 PM:


Hi [~candychencan] 
 # We can remove the volume name from the path param.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 


was (Author: bharatviswa):
Hi [~candychencan] 
 # We can remove the volume name from the path param.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The aws reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add the 
> configurable chunk size (what we have in the upload from the 'ozone sh' 
> command) and support the replication and replication type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-518) Implement PutObject Rest endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648543#comment-16648543
 ] 

Bharat Viswanadham commented on HDDS-518:
-

Hi [~candychencan] 
# We can remove the volume name from the path param.
# We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.
 
{code:java}
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}
 
 

> Implement PutObject Rest endpoint
> -
>
> Key: HDDS-518
> URL: https://issues.apache.org/jira/browse/HDDS-518
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-518.001.patch
>
>
> The Put Object call allows users to add an Object to an S3 bucket.
> The aws reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
> We have an initial implementation in HDDS-444, but we need to add the 
> configurable chunk size (what we have in the upload from the 'ozone sh' 
> command) and support the replication and replication type parameters.
> We also need to support the Content-MD5 header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-587) Add new classes for pipeline management

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648540#comment-16648540
 ] 

Hudson commented on HDDS-587:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15200/])
HDDS-587. Add new classes for pipeline management. Contributed by Lokesh Jain. 
(nanda: rev 5c8e023ba32da3e65193f6ced354efe830dba75d)
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineFactory.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineID.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/package-info.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineProvider.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSimplePipelineProvider.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineStateManager.java


> Add new classes for pipeline management
> ---
>
> Key: HDDS-587
> URL: https://issues.apache.org/jira/browse/HDDS-587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: recovery
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-587.001.patch, HDDS-587.002.patch, 
> HDDS-587.003.patch
>
>
> This Jira adds new classes and corresponding unit tests for pipeline 
> management in SCM. The old classes will be removed in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648541#comment-16648541
 ] 

Hudson commented on HDDS-646:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15200/])
HDDS-646. TestChunkStreams.testErrorReadGroupInputStream fails. (arp: rev 
ddc964932817b4c4e4f4dc848dae764d5285e875)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java


> TestChunkStreams.testErrorReadGroupInputStream fails
> 
>
> Key: HDDS-646
> URL: https://issues.apache.org/jira/browse/HDDS-646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-646.000.patch
>
>
> After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648539#comment-16648539
 ] 

Hudson commented on HDDS-645:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15200/])
HDDS-645. Enable OzoneFS contract tests by default. Contributed by Arpit 
Agarwal. (aengineer: rev c07b95bdfcd410aa82acdb1fed28e84981ff06f9)
* (edit) hadoop-ozone/ozonefs/pom.xml


> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-644) Rename dfs.container.ratis.num.container.op.threads

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648538#comment-16648538
 ] 

Hudson commented on HDDS-644:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15200/])
HDDS-644. Rename dfs.container.ratis.num.container.op.threads. (arp: rev 
3a684a2b23517df996a8c30731d9b5cb3282cc2a)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java


> Rename dfs.container.ratis.num.container.op.threads
> ---
>
> Key: HDDS-644
> URL: https://issues.apache.org/jira/browse/HDDS-644
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-644.00.patch
>
>
> public static final String DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_KEY
>  = "dfs.container.ratis.num.container.op.threads";
> This should be changed to dfs.container.ratis.num.container.op.executors
>  
> HDDS-550 added this in OzoneConfigKeys.java, but it is named differently in 
> ozone-default.xml and ScmConfigKeys.java.
>  
> Because of this, TestOzoneConfigurationFields.java is failing:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1378/testReport/]
>  
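A sketch of the fix, assuming the constant lives in ScmConfigKeys as the 
description says; only the string value changes, since the field name already 
says "executors":
{code:java}
// Illustrative fragment, not the full ScmConfigKeys class.
public final class ScmConfigKeysSketch {
  public static final String DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_KEY
      = "dfs.container.ratis.num.container.op.executors"; // was ...op.threads
}
{code}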



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-523) Implement DeleteObject REST endpoint

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648535#comment-16648535
 ] 

Bharat Viswanadham commented on HDDS-523:
-

Hi [~elek]
 # We can remove the volume name from the path param, now that HDDS-577, 
HDDS-522, and HDDS-606 are checked in.
 # We can change OzoneBucket bucket = getBucket(volumeName, bucketName); as 
below, using the new APIs.

{code:java}
// Resolve the S3 bucket to its backing Ozone volume, then delete the key.
OzoneBucket bucket = 
getVolume(getOzoneVolumeName(bucketName)).getBucket(bucketName);
bucket.deleteKey(keyPath);{code}

> Implement DeleteObject REST endpoint
> 
>
> Key: HDDS-523
> URL: https://issues.apache.org/jira/browse/HDDS-523
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-523.001.patch, HDDS-523.002.patch, 
> HDDS-523.003.patch
>
>
> Simple delete Object call.
> Implemented by HDDS-444 without the acceptance tests.
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648534#comment-16648534
 ] 

Hadoop QA commented on HDDS-645:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m  4s{color} 
| {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.ozone.contract.ITestOzoneContractCreate |
|   | hadoop.fs.ozone.contract.ITestOzoneContractMkdir |
|   | hadoop.fs.ozone.contract.ITestOzoneContractDelete |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-645 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943717/HDDS-645.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 2407feb52b96 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 02e1ef5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1390/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1390/artifact/out/patch-unit-hadoop-ozone_ozonefs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1390/testReport/ |
| Max. process+thread count | 3164 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1390/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

Status: Patch Available  (was: In Progress)

> Update HeadBucket, DeleteBucket to not have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests not to have volume in their path param.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-613) Update HeadBucket, DeleteBucket to not have volume in path

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-613:

Attachment: HDDS-613.00.patch

> Update HeadBucket, DeleteBucket to not have volume in path
> --
>
> Key: HDDS-613
> URL: https://issues.apache.org/jira/browse/HDDS-613
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-613.00.patch
>
>
> Update these API requests not to have volume in their path param.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648526#comment-16648526
 ] 

Bharat Viswanadham commented on HDDS-615:
-

+1 LGTM, pending Jenkins.

I have re-triggered the Jenkins run; hadoop-dist also followed a similar 
approach.

https://builds.apache.org/job/PreCommit-HDDS-Build/1392/console

> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-615.001.patch
>
>
> In the Yetus build of HDDS-523 the build of the dist project was failed:
> {code:java}
> Mon Oct  8 14:16:06 UTC 2018
> cd /testptch/hadoop/hadoop-ozone/dist
> /usr/bin/mvn -Phdds 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
> -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dcheckstyle.skip=true -Dfindbugs.skip=true
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Apache Hadoop Ozone Distribution 0.3.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-ozone-dist 
> ---
> [INFO] Deleting /testptch/hadoop/hadoop-ozone/dist (includes = 
> [dependency-reduced-pom.xml], excludes = [])
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-ozone-dist 
> ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: /testptch/hadoop/hadoop-ozone/dist/target/test-dir
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-ozone-dist ---
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
> cp: cannot stat 
> '/testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar':
>  No such file or directory
> Current directory /testptch/hadoop/hadoop-ozone/dist/target
> $ rm -rf ozone-0.3.0-SNAPSHOT
> $ mkdir ozone-0.3.0-SNAPSHOT
> $ cd ozone-0.3.0-SNAPSHOT
> $ cp -p /testptch/hadoop/LICENSE.txt .
> $ cp -p /testptch/hadoop/NOTICE.txt .
> $ cp -p /testptch/hadoop/README.txt .
> $ mkdir -p ./share/hadoop/mapreduce
> $ mkdir -p ./share/hadoop/ozone
> $ mkdir -p ./share/hadoop/hdds
> $ mkdir -p ./share/hadoop/yarn
> $ mkdir -p ./share/hadoop/hdfs
> $ mkdir -p ./share/hadoop/common
> $ mkdir -p ./share/ozone/web
> $ mkdir -p ./bin
> $ mkdir -p ./sbin
> $ mkdir -p ./etc
> $ mkdir -p ./libexec
> $ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
> etc/hadoop
> $ cp 
> /testptch/hadoop/hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties 
> etc/hadoop
> $ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
> bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
> bin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
>  libexec/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh 
> libexec/
> $ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
>  sbin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
> sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
> $ mkdir -p ./share/hadoop/ozonefs
> $ cp 
> /testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
>  ./share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Failed!
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 7.832 s
> [INFO] Finished at: 2018-10-08T14:16:16+00:00
> [INFO] Final Memory: 33M/625M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (dist) on project hadoop-ozone-dist: Command execution failed. Process 

[jira] [Updated] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-646:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks for the quick fix [~nandakumar131]!

> TestChunkStreams.testErrorReadGroupInputStream fails
> 
>
> Key: HDDS-646
> URL: https://issues.apache.org/jira/browse/HDDS-646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-646.000.patch
>
>
> After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648521#comment-16648521
 ] 

Arpit Agarwal commented on HDDS-646:


+1

Verified that the UT passes with this patch and fails without. I'm going to 
commit this without waiting for Jenkins since it's a UT-only fix.

> TestChunkStreams.testErrorReadGroupInputStream fails
> 
>
> Key: HDDS-646
> URL: https://issues.apache.org/jira/browse/HDDS-646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-646.000.patch
>
>
> After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-648) hadoop-hdds and its sub modules have undefined hadoop component

2018-10-12 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-648:
--

 Summary: hadoop-hdds and its sub modules have undefined hadoop 
component
 Key: HDDS-648
 URL: https://issues.apache.org/jira/browse/HDDS-648
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Similar to HDDS-409, hadoop-hdds and its submodules have an undefined hadoop 
component folder.

When building the package, it creates an UNDEF hadoop component in the share 
folder:
 * 
./hadoop-hdds/sub-module/target/sub-module-X.Y.Z-SNAPSHOT/share/hadoop/UNDEF/lib



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648517#comment-16648517
 ] 

Hudson commented on HDDS-624:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15199 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15199/])
HDDS-624. PutBlock fails with Unexpected Storage Container Exception. 
(aengineer: rev 02e1ef5e0779369f1363df2bb0437945fe9a271c)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/BlockUtils.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRocksDBStoreMBean.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStoreBuilder.java


> PutBlock fails with Unexpected Storage Container Exception
> --
>
> Key: HDDS-624
> URL: https://issues.apache.org/jira/browse/HDDS-624
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-624.01.patch
>
>
> As per HDDS-622, Datanodes were shutting down while running MR jobs due to 
> issue in RocksDBStore. To avoid that failure set the property 
> _ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml
> Now running Mapreduce job fails with below error
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
> 18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539208750583_0005
> 18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539208750583_0005
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
> application_1539208750583_0005
> 18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
> 18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
> 18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
> uber mode : false
> 18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
> attempt_1539208750583_0005_r_00_0, Status : FAILED
> Error: java.io.IOException: Unexpected Storage Container Exception: 
> java.io.IOException: Failed to command cmdType: PutBlock
> traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
> containerID: 2
> datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
> putBlock {
> blockData {
> blockID {
> containerID: 2
> localID: 100874119214399488
> }
> metadata {
> key: "TYPE"
> value: "KEY"
> }
> chunks {
> chunkName: 
> "f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
> offset: 0
> len: 5017
> }
> }
> }
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
> at 
> 

[jira] [Commented] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648511#comment-16648511
 ] 

Jitendra Nath Pandey commented on HDDS-645:
---

This was really important!

> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-587) Add new classes for pipeline management

2018-10-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-587:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

> Add new classes for pipeline management
> ---
>
> Key: HDDS-587
> URL: https://issues.apache.org/jira/browse/HDDS-587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: recovery
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-587.001.patch, HDDS-587.002.patch, 
> HDDS-587.003.patch
>
>
> This Jira adds new classes and corresponding unit tests for pipeline 
> management in SCM. The old classes will be removed in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648508#comment-16648508
 ] 

Arpit Agarwal commented on HDDS-645:


Thanks [~anu]!

> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-587) Add new classes for pipeline management

2018-10-12 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648506#comment-16648506
 ] 

Nanda kumar commented on HDDS-587:
--

[~ljain], thanks for the contribution and thanks to [~anu] for review. I have 
committed this to trunk and ozone-0.3 branch.

> Add new classes for pipeline management
> ---
>
> Key: HDDS-587
> URL: https://issues.apache.org/jira/browse/HDDS-587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: recovery
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-587.001.patch, HDDS-587.002.patch, 
> HDDS-587.003.patch
>
>
> This Jira adds new classes and corresponding unit tests for pipeline 
> management in SCM. The old classes will be removed in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13991) Review of DiskBalancerCluster.java

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648502#comment-16648502
 ] 

Hadoop QA commented on HDFS-13991:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943699/HDFS-13991.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f90327c38fbe 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85ccab7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDDS-616) Collect all the robot test output and return with the right exit code

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648500#comment-16648500
 ] 

Anu Engineer commented on HDDS-616:
---

Local tests are fully functional now, please let me know if you need me to 
commit this, thx

> Collect all the robot test output and return with the right exit code
> 
>
> Key: HDDS-616
> URL: https://issues.apache.org/jira/browse/HDDS-616
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-616.001.patch
>
>
> In the current acceptance test runner bash script 
> (hadoop-ozone/dist/src/main/smoketest/test.sh) the output of each test 
> execution is overwritten by the following test executions.
> Another problem is that the exit code is always 0. In case of a failing test 
> the exit code should be non-zero at the end of the execution.
> The easiest way to fix these issues is to use the rebot tool from the Robot 
> Framework distribution. rebot is similar to robot, but instead of executing 
> tests it just renders the HTML report from previous test output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-645:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the contribution. I have committed this to trunk 
and the ozone-0.3 branch. I have also verified that the tests run when mvn 
test is run.

> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-644) Rename dfs.container.ratis.num.container.op.threads

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-644:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

+1 thanks [~bharatviswa].

> Rename dfs.container.ratis.num.container.op.threads
> ---
>
> Key: HDDS-644
> URL: https://issues.apache.org/jira/browse/HDDS-644
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-644.00.patch
>
>
> public static final String DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_KEY
>  = "dfs.container.ratis.num.container.op.threads";
> This should be changed to dfs.container.ratis.num.container.op.executors
>  
> HDDS-550 added this in OzoneConfigKeys.java, but it is named differently in 
> ozone-default.xml and ScmConfigKeys.java.
>  
> Because of this, TestOzoneConfigurationFields.java is failing:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1378/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-647) TestOzoneConfigurationFields is failing for dfs.container.ratis.num.container.op.executors

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-647:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Sorry [~nandakumar131]. Dup of HDDS-644.

> TestOzoneConfigurationFields is failing for 
> dfs.container.ratis.num.container.op.executors
> --
>
> Key: HDDS-647
> URL: https://issues.apache.org/jira/browse/HDDS-647
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-647.000.patch
>
>
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.ozone.TestOzoneConfigurationFields)
>   Time elapsed: 0.155 s  <<< FAILURE!
> java.lang.AssertionError: ozone-default.xml has 1 properties missing in  
> class org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys  class 
> org.apache.hadoop.hdds.HddsConfigKeys  class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries:   
> dfs.container.ratis.num.container.op.executors expected:<0> but was:<1>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-647) TestOzoneConfigurationFields is failing for dfs.container.ratis.num.container.op.executors

2018-10-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-647:
-
Status: Patch Available  (was: Open)

> TestOzoneConfigurationFields is failing for 
> dfs.container.ratis.num.container.op.executors
> --
>
> Key: HDDS-647
> URL: https://issues.apache.org/jira/browse/HDDS-647
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-647.000.patch
>
>
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.ozone.TestOzoneConfigurationFields)
>   Time elapsed: 0.155 s  <<< FAILURE!
> java.lang.AssertionError: ozone-default.xml has 1 properties missing in  
> class org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys  class 
> org.apache.hadoop.hdds.HddsConfigKeys  class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries:   
> dfs.container.ratis.num.container.op.executors expected:<0> but was:<1>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-647) TestOzoneConfigurationFields is failing for dfs.container.ratis.num.container.op.executors

2018-10-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-647:
-
Attachment: HDDS-647.000.patch

> TestOzoneConfigurationFields is failing for 
> dfs.container.ratis.num.container.op.executors
> --
>
> Key: HDDS-647
> URL: https://issues.apache.org/jira/browse/HDDS-647
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-647.000.patch
>
>
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.ozone.TestOzoneConfigurationFields)
>   Time elapsed: 0.155 s  <<< FAILURE!
> java.lang.AssertionError: ozone-default.xml has 1 properties missing in  
> class org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys  class 
> org.apache.hadoop.hdds.HddsConfigKeys  class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries:   
> dfs.container.ratis.num.container.op.executors expected:<0> but was:<1>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-647) TestOzoneConfigurationFields is failing for dfs.container.ratis.num.container.op.executors

2018-10-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-647:


 Summary: TestOzoneConfigurationFields is failing for 
dfs.container.ratis.num.container.op.executors
 Key: HDDS-647
 URL: https://issues.apache.org/jira/browse/HDDS-647
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


testCompareXmlAgainstConfigurationClass(org.apache.hadoop.ozone.TestOzoneConfigurationFields)
  Time elapsed: 0.155 s  <<< FAILURE!
java.lang.AssertionError: ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  class 
org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys  class 
org.apache.hadoop.hdds.HddsConfigKeys  class 
org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries:   
dfs.container.ratis.num.container.op.executors expected:<0> but was:<1>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648484#comment-16648484
 ] 

Konstantin Shvachko commented on HDFS-13566:


# {{MiniDFSCluster}} should use *_auxiliary_* ports rather than *_additional_* 
ones.
# Check for white-space changes like in {{NameNode}} line 1708
# I recommend going over all comments and revising them for better clarity
 ** E.g. in {{hdfs-default.xml}} the description should say
{code:java}

A comma separated list of auxiliary ports for the NameNode to listen on.
This allows exposing multiple NN addresses to clients.
Particularly, it is used to enforce different SASL levels on different ports.
Empty list indicates that auxiliary ports are disabled.
{code}
# I don't think we need {{dfs.namenode.rpc-address.use.auxiliary-port}}. An 
empty list should indicate that such ports are disabled (see the sketch right 
after this list).
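A sketch of that convention; the key name below is assumed for illustration 
and may differ from the one in the patch:
{code:java}
import org.apache.hadoop.conf.Configuration;

public final class AuxiliaryPortsSketch {

  // getTrimmedStrings returns an empty array when the key is unset, so an
  // empty or absent list naturally means "auxiliary ports disabled" and no
  // separate boolean key is needed.
  static boolean auxiliaryPortsEnabled(Configuration conf) {
    String[] ports =
        conf.getTrimmedStrings("dfs.namenode.rpc-address.auxiliary-ports");
    return ports.length > 0;
  }
}
{code}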

> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, 
> HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch, 
> HDFS-13566.006.patch
>
>
> This Jira aims to add the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, this Jira tries to extend ipc.Server to allow it to be 
> configured with more listeners, binding to different ports but sharing the 
> same call queue and handlers. This is useful when different clients are only 
> allowed to access certain ports. Combined with HDFS-13547, this also allows 
> different ports to have different SASL security levels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648481#comment-16648481
 ] 

Arpit Agarwal commented on HDDS-624:


bq. we should shade the RocksDB files. 
I am not sure shading RocksDB files will fix this issue. The problem is with 
the _org.apache.hadoop.metrics2.util.MBeans_ class.

> PutBlock fails with Unexpected Storage Container Exception
> --
>
> Key: HDDS-624
> URL: https://issues.apache.org/jira/browse/HDDS-624
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-624.01.patch
>
>
> As per HDDS-622, Datanodes were shutting down while running MR jobs due to 
> issue in RocksDBStore. To avoid that failure set the property 
> _ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml
> Now running Mapreduce job fails with below error
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
> 18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539208750583_0005
> 18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539208750583_0005
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
> application_1539208750583_0005
> 18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
> 18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
> 18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
> uber mode : false
> 18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
> attempt_1539208750583_0005_r_00_0, Status : FAILED
> Error: java.io.IOException: Unexpected Storage Container Exception: 
> java.io.IOException: Failed to command cmdType: PutBlock
> traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
> containerID: 2
> datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
> putBlock {
> blockData {
> blockID {
> containerID: 2
> localID: 100874119214399488
> }
> metadata {
> key: "TYPE"
> value: "KEY"
> }
> chunks {
> chunkName: 
> "f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
> offset: 0
> len: 5017
> }
> }
> }
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
> at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
> at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
> at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
> at 

[jira] [Commented] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648476#comment-16648476
 ] 

Arpit Agarwal commented on HDDS-624:


Thank you [~anu].

> PutBlock fails with Unexpected Storage Container Exception
> --
>
> Key: HDDS-624
> URL: https://issues.apache.org/jira/browse/HDDS-624
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-624.01.patch
>
>
> As per HDDS-622, Datanodes were shutting down while running MR jobs due to an 
> issue in RocksDBStore. To avoid that failure, set the property 
> _ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml.
> Now the MapReduce job fails with the error below:
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
> 18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539208750583_0005
> 18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539208750583_0005
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
> application_1539208750583_0005
> 18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
> 18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
> 18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
> uber mode : false
> 18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
> attempt_1539208750583_0005_r_00_0, Status : FAILED
> Error: java.io.IOException: Unexpected Storage Container Exception: 
> java.io.IOException: Failed to command cmdType: PutBlock
> traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
> containerID: 2
> datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
> putBlock {
> blockData {
> blockID {
> containerID: 2
> localID: 100874119214399488
> }
> metadata {
> key: "TYPE"
> value: "KEY"
> }
> chunks {
> chunkName: 
> "f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
> offset: 0
> len: 5017
> }
> }
> }
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
> at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
> at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
> at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at 

[jira] [Commented] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648475#comment-16648475
 ] 

Anu Engineer commented on HDDS-645:
---

+1, I will commit this shortly.

> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-624:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa], [~elek], [~xyao] Thanks for the reviews and comments. 
[~arpitagarwal] Thanks for the contribution. I have committed this change to 
the trunk and ozone-0.3 branches.

> PutBlock fails with Unexpected Storage Container Exception
> --
>
> Key: HDDS-624
> URL: https://issues.apache.org/jira/browse/HDDS-624
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-624.01.patch
>
>
> As per HDDS-622, Datanodes were shutting down while running MR jobs due to an 
> issue in RocksDBStore. To avoid that failure, set the property 
> _ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml.
> Now the MapReduce job fails with the error below:
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
> 18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539208750583_0005
> 18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539208750583_0005
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
> application_1539208750583_0005
> 18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
> 18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
> 18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
> uber mode : false
> 18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
> attempt_1539208750583_0005_r_00_0, Status : FAILED
> Error: java.io.IOException: Unexpected Storage Container Exception: 
> java.io.IOException: Failed to command cmdType: PutBlock
> traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
> containerID: 2
> datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
> putBlock {
> blockData {
> blockID {
> containerID: 2
> localID: 100874119214399488
> }
> metadata {
> key: "TYPE"
> value: "KEY"
> }
> chunks {
> chunkName: 
> "f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
> offset: 0
> len: 5017
> }
> }
> }
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
> at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
> at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
> at 

[jira] [Updated] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-646:
-
Attachment: HDDS-646.000.patch

> TestChunkStreams.testErrorReadGroupInputStream fails
> 
>
> Key: HDDS-646
> URL: https://issues.apache.org/jira/browse/HDDS-646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-646.000.patch
>
>
> After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-646:
-
Status: Patch Available  (was: Open)

> TestChunkStreams.testErrorReadGroupInputStream fails
> 
>
> Key: HDDS-646
> URL: https://issues.apache.org/jira/browse/HDDS-646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-646.000.patch
>
>
> After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-606) Create delete s3Bucket

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648466#comment-16648466
 ] 

Hudson commented on HDDS-606:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15198/])
HDDS-606. Create delete s3Bucket. Contributed by Bharat Viswanadham. (bharat: 
rev 8ae8a5004f8fbe2638a3e27ab8efbe0e2e27cb8c)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestS3BucketManager.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/EndpointBase.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java


> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-606.00.patch, HDDS-606.01.patch, HDDS-606.02.patch, 
> HDDS-606.03.patch
>
>
> We should have a new API to delete buckets created via S3.
> This delete should remove the bucket from the bucket table and also the 
> mapping from the S3 table in Ozone Manager.
> This Jira shall have:
>  # OM changes
>  # Rpc Client and proto changes
>  # EndPointBase changes to add the new S3Bucket APIs.
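
A hypothetical sketch of the two-table cleanup the description above calls for; 
the field and method names are assumptions for illustration, not the API added by 
the HDDS-606 patches:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical: deleting an S3 bucket must remove both the OM bucket entry
// and the s3-name -> ozone-path mapping, otherwise the mapping leaks.
public class S3BucketDeleteSketch {
  // s3BucketName -> "volume/bucket" (stands in for OM's S3 table)
  private final Map<String, String> s3Table = new ConcurrentHashMap<>();
  // "volume/bucket" -> bucket metadata (stands in for OM's bucket table)
  private final Map<String, Object> bucketTable = new ConcurrentHashMap<>();

  public void deleteS3Bucket(String s3BucketName) throws IOException {
    String ozoneMapping = s3Table.get(s3BucketName);
    if (ozoneMapping == null) {
      throw new IOException("No such S3 bucket: " + s3BucketName);
    }
    bucketTable.remove(ozoneMapping);  // drop the bucket itself
    s3Table.remove(s3BucketName);      // drop the S3 mapping as well
  }

  public static void main(String[] args) throws IOException {
    S3BucketDeleteSketch om = new S3BucketDeleteSketch();
    om.s3Table.put("bucket2", "volume2/bucket2");
    om.bucketTable.put("volume2/bucket2", new Object());
    om.deleteS3Bucket("bucket2");
    System.out.println(om.s3Table.isEmpty() && om.bucketTable.isEmpty()); // true
  }
}
{code}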



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-639) ChunkGroupInputStream gets into infinite loop after reading a block

2018-10-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648465#comment-16648465
 ] 

Hadoop QA commented on HDDS-639:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
10s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-639 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943694/HDDS-639.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 943606bc987b 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85ccab7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1389/testReport/ |
| Max. process+thread count | 2102 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/client 

[jira] [Created] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-646:


 Summary: TestChunkStreams.testErrorReadGroupInputStream fails
 Key: HDDS-646
 URL: https://issues.apache.org/jira/browse/HDDS-646
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648454#comment-16648454
 ] 

Anu Engineer commented on HDDS-624:
---

+1, I will commit this shortly. Irrespective of this Jira, I agree with 
[~xyao] and [~elek] that we should shade the RocksDB files. That will avoid issues 
when it gets deployed in the field.

> PutBlock fails with Unexpected Storage Container Exception
> --
>
> Key: HDDS-624
> URL: https://issues.apache.org/jira/browse/HDDS-624
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-624.01.patch
>
>
> As per HDDS-622, Datanodes were shutting down while running MR jobs due to an 
> issue in RocksDBStore. To avoid that failure, set the property 
> _ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml.
> Now the MapReduce job fails with the error below:
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
> 18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539208750583_0005
> 18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539208750583_0005
> 18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
> 18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
> application_1539208750583_0005
> 18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
> 18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
> 18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
> uber mode : false
> 18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
> attempt_1539208750583_0005_r_00_0, Status : FAILED
> Error: java.io.IOException: Unexpected Storage Container Exception: 
> java.io.IOException: Failed to command cmdType: PutBlock
> traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
> containerID: 2
> datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
> putBlock {
> blockData {
> blockID {
> containerID: 2
> localID: 100874119214399488
> }
> metadata {
> key: "TYPE"
> value: "KEY"
> }
> chunks {
> chunkName: 
> "f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
> offset: 0
> len: 5017
> }
> }
> }
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
> at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
> at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
> at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
> at 

[jira] [Updated] (HDDS-645) Enable OzoneFS contract tests by default

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-645:
---
Summary: Enable OzoneFS contract tests by default  (was: Enable OzoneFS 
contract test by default)

> Enable OzoneFS contract tests by default
> 
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-645) Enable OzoneFS contract test by default

2018-10-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648449#comment-16648449
 ] 

Arpit Agarwal commented on HDDS-645:


v01:
- Add surefire inclusion. The test class names do not follow one of the 
[standard naming 
conventions|https://maven.apache.org/surefire/maven-surefire-plugin/examples/inclusion-exclusion.html]
 so they must be included explicitly.
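
As a hypothetical sketch only, such an inclusion in the ozonefs module's pom.xml 
could look like the following; the <include> patterns are assumptions for 
illustration, not the contents of HDDS-645.01.patch:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- Surefire's default patterns only match classes named Test*, *Test,
           *Tests, or *TestCase, so contract test classes with other names
           (hypothetical pattern below) must be listed explicitly. -->
      <include>**/Test*.java</include>
      <include>**/ITestOzoneContract*.java</include>
    </includes>
  </configuration>
</plugin>
{code}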

> Enable OzoneFS contract test by default
> ---
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-645) Enable OzoneFS contract test by default

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-645:
---
Status: Patch Available  (was: Open)

> Enable OzoneFS contract test by default
> ---
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-645) Enable OzoneFS contract test by default

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-645:
---
Attachment: HDDS-645.01.patch

> Enable OzoneFS contract test by default
> ---
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-645.01.patch
>
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-645) Enable OzoneFS contract test by default

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-645:
--

Assignee: Arpit Agarwal

> Enable OzoneFS contract test by default
> ---
>
> Key: HDDS-645
> URL: https://issues.apache.org/jira/browse/HDDS-645
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> [~msingh] pointed out that OzoneFS contract tests are not running by default 
> and must be run manually. Let's fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-606) Create delete s3Bucket

2018-10-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648440#comment-16648440
 ] 

Bharat Viswanadham commented on HDDS-606:
-

Thank you [~jnp] for the review.

I have committed this to trunk and ozone-0.3.

> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-606.00.patch, HDDS-606.01.patch, HDDS-606.02.patch, 
> HDDS-606.03.patch
>
>
> We should have a new API to delete buckets created via S3.
> This delete should remove the bucket from the bucket table and also the 
> mapping from the S3 table in Ozone Manager.
> This Jira shall have:
>  # OM changes
>  # Rpc Client and proto changes
>  # EndPointBase changes to add the new S3Bucket APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-606) Create delete s3Bucket

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-606:

Fix Version/s: 0.4.0
   0.3.0

> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-606.00.patch, HDDS-606.01.patch, HDDS-606.02.patch, 
> HDDS-606.03.patch
>
>
> We should have a new API to delete buckets created via S3.
> This delete should remove the bucket from the bucket table and also the 
> mapping from the S3 table in Ozone Manager.
> This Jira shall have:
>  # OM changes
>  # Rpc Client and proto changes
>  # EndPointBase changes to add the new S3Bucket APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-606) Create delete s3Bucket

2018-10-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-606:

  Resolution: Fixed
Target Version/s: 0.3.0, 0.4.0  (was: 0.3.0)
  Status: Resolved  (was: Patch Available)

> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-606.00.patch, HDDS-606.01.patch, HDDS-606.02.patch, 
> HDDS-606.03.patch
>
>
> We should have a new API to delete buckets created via S3.
> This delete should remove the bucket from the bucket table and also the 
> mapping from the S3 table in Ozone Manager.
> This Jira shall have:
>  # OM changes
>  # Rpc Client and proto changes
>  # EndPointBase changes to add the new S3Bucket APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-639) ChunkGroupInputStream gets into infinite loop after reading a block

2018-10-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648438#comment-16648438
 ] 

Hudson commented on HDDS-639:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15197 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15197/])
HDDS-639. ChunkGroupInputStream gets into infinite loop after reading a (arp: 
rev 56b18b9df14b91c02cf3b7f548ab58755475b374)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java


> ChunkGroupInputStream gets into infinite loop after reading a block
> ---
>
> Key: HDDS-639
> URL: https://issues.apache.org/jira/browse/HDDS-639
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-639.000.patch, HDDS-639.002.patch, 
> HDDS-639.003.patch
>
>
> {{ChunkGroupInputStream}} doesn't exit the while loop even after reading all 
> the chunks of the corresponding block.
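
A minimal, self-contained sketch of this failure mode, assuming the read loop must 
advance to the next block's stream on EOF; this is illustrative code, not the 
actual ChunkGroupInputStream implementation:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: a grouped block-stream read loop can spin forever if it
// never advances past a fully-consumed sub-stream.
public class GroupedReadLoopSketch {
  private final InputStream[] blockStreams;
  private int current = 0;

  GroupedReadLoopSketch(InputStream... blockStreams) {
    this.blockStreams = blockStreams;
  }

  int read(byte[] buf) throws IOException {
    int total = 0;
    while (total < buf.length) {
      int n = blockStreams[current].read(buf, total, buf.length - total);
      if (n == -1) {                       // current block exhausted
        if (current == blockStreams.length - 1) {
          return total == 0 ? -1 : total;  // last block: report EOF upward
        }
        current++;                         // without this advance, the while
        continue;                          // loop re-reads the same stream forever
      }
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    GroupedReadLoopSketch s = new GroupedReadLoopSketch(
        new ByteArrayInputStream("block1".getBytes()),
        new ByteArrayInputStream("block2".getBytes()));
    System.out.println(s.read(new byte[16]));  // 12, then -1 on the next call
  }
}
{code}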



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-606) Create delete s3Bucket

2018-10-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648436#comment-16648436
 ] 

Jitendra Nath Pandey commented on HDDS-606:
---

+1, LGTM

> Create delete s3Bucket
> --
>
> Key: HDDS-606
> URL: https://issues.apache.org/jira/browse/HDDS-606
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-606.00.patch, HDDS-606.01.patch, HDDS-606.02.patch, 
> HDDS-606.03.patch
>
>
> We should have a new API to delete buckets created via S3.
> This delete should remove the bucket from the bucket table and also the 
> mapping from the S3 table in Ozone Manager.
> This Jira shall have:
>  # OM changes
>  # Rpc Client and proto changes
>  # EndPointBase changes to add the new S3Bucket APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2018-10-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-622:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError
> -
>
> Key: HDDS-622
> URL: https://issues.apache.org/jira/browse/HDDS-622
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Critical
>
> Datanodes are registered fine on a Hadoop + Ozone cluster.
> While running jobs against Ozone, the Datanode shuts down as below:
> {code:java}
> 2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
> (RaftLogWorker.java:rollLogSegment(263)) - Rolling 
> segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
> 2018-10-10 21:50:42,714 INFO impl.RaftServerImpl 
> (ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: 
> set configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, 
> b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
> 2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
> org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
> org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl 
> (RaftServerImpl.java:applyLogToStateMachine(1153)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 
> proto:(t:2, i:1)SMLOGENTRY, client-894EC0846FDF, cid=0
> 2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater 
> (ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
> StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the 
> StateMachineUpdater hits Throwable
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
> at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:74)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
> at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
> at java.lang.Thread.run(Thread.java:748)
> 2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DataNode at 
> ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
> /
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


