[jira] [Created] (HDDS-1834) ozone fs -mkdir -p does not create parent directories

2019-07-19 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1834:
---

 Summary: ozone fs -mkdir -p does not create parent directories
 Key: HDDS-1834
 URL: https://issues.apache.org/jira/browse/HDDS-1834
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Doroszlai, Attila


The ozonesecure-ozonefs acceptance test is failing because {{ozone fs -mkdir -p}} 
only creates a key for the leaf directory, not for its parents.

{noformat}
ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
{noformat}

Previous result:

{noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
$ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
testdir/
testdir/deep/
{noformat}

Current result:

{noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
$ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
testdir/deep/
{noformat}

The failure happens on the first operation that tries to use {{testdir/}} directly:

{noformat}
$ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
ls: `o3fs://bucket1.fstest/testdir': No such file or directory
{noformat}
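For illustration, the set of directory keys that a recursive mkdir would need to create can be enumerated from the path. This is a minimal, hypothetical Java sketch (the class and method names are made up; this is not the actual {{OzoneFileSystem}} code):

```java
import java.util.ArrayList;
import java.util.List;

public class ParentKeys {

  // Enumerate the "directory keys" implied by a path: one key per level,
  // each ending in '/'. For "testdir/deep" this yields both "testdir/"
  // and "testdir/deep/", matching the "Previous result" listing above.
  static List<String> parentKeys(String path) {
    List<String> keys = new ArrayList<>();
    StringBuilder prefix = new StringBuilder();
    for (String part : path.split("/")) {
      prefix.append(part).append('/');
      keys.add(prefix.toString());
    }
    return keys;
  }

  public static void main(String[] args) {
    System.out.println(parentKeys("testdir/deep")); // [testdir/, testdir/deep/]
  }
}
```

The reported regression amounts to only the last element of this list being created.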



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1834) ozone fs -mkdir -p does not create parent directories

2019-07-19 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888718#comment-16888718
 ] 

Doroszlai, Attila commented on HDDS-1834:
-

Hi [~ljain], can you please check?  This seems to be caused by HDDS-1481.

> ozone fs -mkdir -p does not create parent directories
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Priority: Major
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Commented] (HDDS-1834) ozone fs -mkdir -p does not create parent directories

2019-07-19 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1649#comment-1649
 ] 

Doroszlai, Attila commented on HDDS-1834:
-

Thanks [~ljain] for checking.  It seems to be specific to the secure cluster (or 
the {{ozonesecure}} smoke test environment).  Running the same {{ozonefs.robot}} 
test in {{ozone}} works fine.

> ozone fs -mkdir -p does not create parent directories
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Lokesh Jain
>Priority: Major
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Resolved] (HDDS-1835) Improve metric name for CSM Metrics

2019-07-19 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved HDDS-1835.
-
Resolution: Duplicate

> Improve metric name for CSM Metrics
> ---
>
> Key: HDDS-1835
> URL: https://issues.apache.org/jira/browse/HDDS-1835
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> CSMMetrics currently uses the fully qualified class name as the metric name. 
> This should be shortened.






[jira] [Updated] (HDDS-1834) ozone fs -mkdir -p does not create parent directories in ozonesecure

2019-07-19 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1834:

Summary: ozone fs -mkdir -p does not create parent directories in 
ozonesecure  (was: ozone fs -mkdir -p does not create parent directories)

> ozone fs -mkdir -p does not create parent directories in ozonesecure
> 
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Lokesh Jain
>Priority: Major
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Commented] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-22 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890702#comment-16890702
 ] 

Doroszlai, Attila commented on HDDS-1803:
-

Thank you [~anu] for reviewing and committing it.

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Commented] (HDDS-1811) Prometheus metrics are broken for datanodes due to an invalid metric

2019-07-22 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890701#comment-16890701
 ] 

Doroszlai, Attila commented on HDDS-1811:
-

Thanks [~anu] for reviewing and committing it.

> Prometheus metrics are broken for datanodes due to an invalid metric
> 
>
> Key: HDDS-1811
> URL: https://issues.apache.org/jira/browse/HDDS-1811
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Datanodes can't be monitored with prometheus any more:
> {code}
> level=warn ts=2019-07-16T16:29:55.876Z caller=scrape.go:937 component="scrape 
> manager" scrape_pool=pods target=http://192.168.69.76:9882/prom msg="append 
> failed" err="invalid metric type 
> \"apache.hadoop.ozone.container.common.transport.server.ratis._csm_metrics_delete_container_avg_time
>  gauge\""
> {code}






[jira] [Assigned] (HDDS-1834) ozone fs -mkdir -p does not create parent directories in ozonesecure

2019-07-25 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1834:
---

Assignee: Doroszlai, Attila

> ozone fs -mkdir -p does not create parent directories in ozonesecure
> 
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Commented] (HDDS-1834) ozone fs -mkdir -p does not create parent directories in ozonesecure

2019-07-25 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892764#comment-16892764
 ] 

Doroszlai, Attila commented on HDDS-1834:
-

[~ljain], correct, the ACL check for {{testdir}} fails because it does not exist as 
a "key".  I think the ACL check should handle it as a "prefix".  I'll give it a try.

> ozone fs -mkdir -p does not create parent directories in ozonesecure
> 
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Priority: Blocker
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Updated] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-25 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1834:

Summary: parent directories not found in secure setup due to ACL check  
(was: ozone fs -mkdir -p does not create parent directories in ozonesecure)

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Created] (HDDS-1867) Invalid Prometheus metric name from JvmMetrics

2019-07-26 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1867:
---

 Summary: Invalid Prometheus metric name from JvmMetrics
 Key: HDDS-1867
 URL: https://issues.apache.org/jira/browse/HDDS-1867
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{noformat}
target=http://scm:9876/prom msg="append failed" err="invalid metric type \"_old 
_generation counter\""
{noformat}
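The exported name contains spaces ("_old _generation"), which violates the Prometheus metric-name grammar {{[a-zA-Z_:][a-zA-Z0-9_:]*}}. A hedged sketch of the kind of sanitization needed (hypothetical helper, not the actual {{PrometheusMetricsSink}} fix):

```java
public class MetricNames {

  // Hypothetical sanitizer: camelCase -> snake_case, then squash any
  // character outside the Prometheus name alphabet (e.g. the spaces in
  // a GC name like "G1 Old Generation") to '_'.
  static String toPrometheusName(String recordName, String metricName) {
    String raw = recordName + "_" + metricName;
    String snake = raw.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    return snake.replaceAll("[^a-z0-9_:]", "_");
  }
}
```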






[jira] [Work started] (HDDS-1867) Invalid Prometheus metric name from JvmMetrics

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1867 started by Doroszlai, Attila.
---
> Invalid Prometheus metric name from JvmMetrics
> --
>
> Key: HDDS-1867
> URL: https://issues.apache.org/jira/browse/HDDS-1867
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {noformat}
> target=http://scm:9876/prom msg="append failed" err="invalid metric type 
> \"_old _generation counter\""
> {noformat}






[jira] [Updated] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1834:

Status: Patch Available  (was: Open)

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Updated] (HDDS-1867) Invalid Prometheus metric name from JvmMetrics

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1867:

Status: Patch Available  (was: In Progress)

> Invalid Prometheus metric name from JvmMetrics
> --
>
> Key: HDDS-1867
> URL: https://issues.apache.org/jira/browse/HDDS-1867
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> target=http://scm:9876/prom msg="append failed" err="invalid metric type 
> \"_old _generation counter\""
> {noformat}






[jira] [Work started] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1852 started by Doroszlai, Attila.
---
> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}






[jira] [Assigned] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1852:
---

Assignee: Doroszlai, Attila

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}






[jira] [Updated] (HDDS-1852) Fix typo in TestOmAcls

2019-07-26 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1852:

Status: Patch Available  (was: In Progress)

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}






[jira] [Updated] (HDDS-1852) Fix typo in TestOmAcls

2019-07-29 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1852:

Fix Version/s: 0.5.0

> Fix typo in TestOmAcls
> --
>
> Key: HDDS-1852
> URL: https://issues.apache.org/jira/browse/HDDS-1852
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Dinesh Chitlangia
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In test class TestOmAcls.java, correct the typo 
> {code}OzoneAccessAuthrizerTest{code}
> {code:java}
> class OzoneAccessAuthrizerTest implements IAccessAuthorizer {
>   @Override
>   public boolean checkAccess(IOzoneObj ozoneObject, RequestContext context)
>   throws OMException {
> return false;
>   }
> {code}
> Change {code}OzoneAccessAuthrizerTest{code} to 
> {code}OzoneAccessAuthorizerTest{code}






[jira] [Created] (HDDS-1870) ConcurrentModification at PrometheusMetricsSink

2019-07-29 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1870:
---

 Summary: ConcurrentModification at PrometheusMetricsSink
 Key: HDDS-1870
 URL: https://issues.apache.org/jira/browse/HDDS-1870
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Encountered in the {{ozoneperf}} compose environment when running low on CPU:

{code}
om_1  | java.util.ConcurrentModificationException
om_1  | at 
java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1493)
om_1  | at 
java.base/java.util.HashMap$ValueIterator.next(HashMap.java:1521)
om_1  | at 
org.apache.hadoop.hdds.server.PrometheusMetricsSink.writeMetrics(PrometheusMetricsSink.java:123)
om_1  | at 
org.apache.hadoop.hdds.server.PrometheusServlet.doGet(PrometheusServlet.java:43)
{code}
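The race is between the metrics writer updating the sink's map and the servlet iterating it; {{HashMap}} iterators are fail-fast, so any structural modification during iteration throws. A minimal single-threaded reproduction of that fail-fast behavior (a deterministic stand-in for the cross-thread race in the stack trace above):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {

  // Structurally modifying a HashMap while iterating it trips the
  // fail-fast iterator, just as concurrent writes do across threads.
  static boolean triggersCme() {
    Map<String, Integer> metrics = new HashMap<>();
    metrics.put("gauge_a", 1);
    metrics.put("gauge_b", 2);
    try {
      for (Integer value : metrics.values()) {
        metrics.put("gauge_c", 3);  // structural modification mid-iteration
      }
    } catch (ConcurrentModificationException e) {
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println("CME triggered: " + triggersCme());  // true
  }
}
```

Typical fixes are iterating over a snapshot copy of the map, or switching the sink's map to {{ConcurrentHashMap}}, whose iterators are weakly consistent and never throw this exception.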






[jira] [Updated] (HDDS-1870) ConcurrentModification at PrometheusMetricsSink

2019-07-29 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1870:

Status: Patch Available  (was: Open)

> ConcurrentModification at PrometheusMetricsSink
> ---
>
> Key: HDDS-1870
> URL: https://issues.apache.org/jira/browse/HDDS-1870
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Encountered on {{ozoneperf}} compose env when running low on CPU:
> {code}
> om_1  | java.util.ConcurrentModificationException
> om_1  |   at 
> java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1493)
> om_1  |   at 
> java.base/java.util.HashMap$ValueIterator.next(HashMap.java:1521)
> om_1  |   at 
> org.apache.hadoop.hdds.server.PrometheusMetricsSink.writeMetrics(PrometheusMetricsSink.java:123)
> om_1  |   at 
> org.apache.hadoop.hdds.server.PrometheusServlet.doGet(PrometheusServlet.java:43)
> {code}






[jira] [Commented] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-30 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896796#comment-16896796
 ] 

Doroszlai, Attila commented on HDDS-1834:
-

Thanks [~xyao] for committing it.  Can you please double-check 
[ozone-0.4.1|https://github.com/apache/hadoop/commits/ozone-0.4.1]?  I don't 
see the commit there.

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> ozonesecure-ozonefs acceptance test is failing, because {{ozone fs -mkdir 
> -p}} only creates key for the specific directory, not its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Created] (HDDS-1876) hadoop27 acceptance test cannot be run

2019-07-31 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1876:
---

 Summary: hadoop27 acceptance test cannot be run
 Key: HDDS-1876
 URL: https://issues.apache.org/jira/browse/HDDS-1876
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
Executing test in 
/workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
Removing network hadoop27_default
Network hadoop27_default not found.
The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
Creating network "hadoop27_default" with the default driver
no such image: apache/ozone-runner:: invalid reference format
ERROR: Test execution of 
/workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
 is FAILED
cp: cannot stat 
'/workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27/result/robot-*.xml':
 No such file or directory
{noformat}
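The invalid reference can be reproduced with plain shell expansion, under the assumption that the compose file interpolates {{HADOOP_RUNNER_VERSION}} into the image tag:

```shell
# Sketch (assumption): the compose file builds the image reference as
# apache/ozone-runner:${HADOOP_RUNNER_VERSION}. With the variable unset,
# the tag collapses to a bare colon, which Docker rejects as an
# invalid reference format.
unset HADOOP_RUNNER_VERSION
image="apache/ozone-runner:${HADOOP_RUNNER_VERSION}"
echo "$image"
```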






[jira] [Work started] (HDDS-1876) hadoop27 acceptance test cannot be run

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1876 started by Doroszlai, Attila.
---
> hadoop27 acceptance test cannot be run
> --
>
> Key: HDDS-1876
> URL: https://issues.apache.org/jira/browse/HDDS-1876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> {noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
> Executing test in 
> /workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
> The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
> The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
> Removing network hadoop27_default
> Network hadoop27_default not found.
> The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
> The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
> Creating network "hadoop27_default" with the default driver
> no such image: apache/ozone-runner:: invalid reference format
> ERROR: Test execution of 
> /workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
>  is FAILED
> cp: cannot stat 
> '/workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27/result/robot-*.xml':
>  No such file or directory
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1877) hadoop31-mapreduce fails due to wrong HADOOP_VERSION

2019-07-31 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1877:
---

 Summary: hadoop31-mapreduce fails due to wrong HADOOP_VERSION
 Key: HDDS-1877
 URL: https://issues.apache.org/jira/browse/HDDS-1877
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0, 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


hadoop31-mapreduce fails with:

{noformat:title=https://elek.github.io/ozone-ci/byscane/byscane-nightly-gl52x/acceptance/smokeresult/log.html#s1-s2-t2-k2-k2}
JAR does not exist or is not a normal file: 
/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
{noformat}

because the 3.1 test is being run with {{HADOOP_VERSION=3}}:

{noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
Creating network "hadoop31_default" with the default driver
Pulling nm (flokkr/hadoop:3)...
3: Pulling from flokkr/hadoop
Digest: sha256:62e3488e64ff8c0406752fc4f263ae2549e04fedf02534469913c496c6a89d78
Status: Downloaded newer image for flokkr/hadoop:3
{noformat}

which has Hadoop 3.2.0 instead of 3.1.2:

{noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3 -c 'ls 
-la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
-rw-r--r--1 hadoop   flokkr  316570 Jan  8  2019 
/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar
{noformat}

{noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3.1.2 -c 
'ls -la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
-rw-r--r--1 hadoop   flokkr  316380 Jan 29  2019 
/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
{noformat}

This only happens with {{acceptance.sh}}, not when directly using 
{{test-all.sh}}, because the former explicitly defines {{HADOOP_VERSION}}:

{noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dev-support/checks/acceptance.sh#L19}
export HADOOP_VERSION=3
{noformat}

so the correct value from the {{.env}} file is ignored:

{noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env#L21}
HADOOP_VERSION=3.1.2
{noformat}
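The precedence can be illustrated with a shell emulation (an assumption-level sketch of docker-compose's documented behavior: a variable exported in the shell wins over the same variable in {{.env}}; the file path below is a stand-in):

```shell
# Sketch: docker-compose prefers an exported shell variable over .env.
# /tmp/.env.example stands in for compose/ozone-mr/hadoop31/.env.
printf 'HADOOP_VERSION=3.1.2\n' > /tmp/.env.example
export HADOOP_VERSION=3                      # what acceptance.sh exports
env_file_value=$(sed -n 's/^HADOOP_VERSION=//p' /tmp/.env.example)
# compose-style lookup: prefer the environment, fall back to the .env file
effective="${HADOOP_VERSION:-$env_file_value}"
echo "$effective"
```

The shell export wins, so {{flokkr/hadoop:3}} (Hadoop 3.2.0) is pulled instead of {{flokkr/hadoop:3.1.2}}.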






[jira] [Updated] (HDDS-1876) hadoop27 acceptance test cannot be run

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1876:

Status: Patch Available  (was: In Progress)

> hadoop27 acceptance test cannot be run
> --
>
> Key: HDDS-1876
> URL: https://issues.apache.org/jira/browse/HDDS-1876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
> Executing test in 
> /workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
> The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
> The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
> Removing network hadoop27_default
> Network hadoop27_default not found.
> The HADOOP_RUNNER_VERSION variable is not set. Defaulting to a blank string.
> The HADOOP_IMAGE variable is not set. Defaulting to a blank string.
> Creating network "hadoop27_default" with the default driver
> no such image: apache/ozone-runner:: invalid reference format
> ERROR: Test execution of 
> /workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27
>  is FAILED
> cp: cannot stat 
> '/workdir/hadoop-ozone/dist/target/ozone-0.4.1-SNAPSHOT/compose/ozone-mr/hadoop27/result/robot-*.xml':
>  No such file or directory
> {noformat}






[jira] [Work started] (HDDS-1877) hadoop31-mapreduce fails due to wrong HADOOP_VERSION

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1877 started by Doroszlai, Attila.
---
> hadoop31-mapreduce fails due to wrong HADOOP_VERSION
> 
>
> Key: HDDS-1877
> URL: https://issues.apache.org/jira/browse/HDDS-1877
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0, 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> hadoop31-mapreduce fails with:
> {noformat:title=https://elek.github.io/ozone-ci/byscane/byscane-nightly-gl52x/acceptance/smokeresult/log.html#s1-s2-t2-k2-k2}
> JAR does not exist or is not a normal file: 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
> {noformat}
> because the 3.1 test is being run with {{HADOOP_VERSION=3}}:
> {noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
> Creating network "hadoop31_default" with the default driver
> Pulling nm (flokkr/hadoop:3)...
> 3: Pulling from flokkr/hadoop
> Digest: 
> sha256:62e3488e64ff8c0406752fc4f263ae2549e04fedf02534469913c496c6a89d78
> Status: Downloaded newer image for flokkr/hadoop:3
> {noformat}
> which has Hadoop 3.2.0 instead of 3.1.2:
> {noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3 -c 'ls 
> -la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
> -rw-r--r--1 hadoop   flokkr  316570 Jan  8  2019 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar
> {noformat}
> {noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3.1.2 -c 
> 'ls -la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
> -rw-r--r--1 hadoop   flokkr  316380 Jan 29  2019 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
> {noformat}
> This only happens with {{acceptance.sh}}, not when directly using 
> {{test-all.sh}}, because the former explicitly defines {{HADOOP_VERSION}}:
> {noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dev-support/checks/acceptance.sh#L19}
> export HADOOP_VERSION=3
> {noformat}
> so the correct value from the {{.env}} file is ignored:
> {noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env#L21}
> HADOOP_VERSION=3.1.2
> {noformat}






[jira] [Updated] (HDDS-1877) hadoop31-mapreduce fails due to wrong HADOOP_VERSION

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1877:

Status: Patch Available  (was: In Progress)

> hadoop31-mapreduce fails due to wrong HADOOP_VERSION
> 
>
> Key: HDDS-1877
> URL: https://issues.apache.org/jira/browse/HDDS-1877
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0, 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> hadoop31-mapreduce fails with:
> {noformat:title=https://elek.github.io/ozone-ci/byscane/byscane-nightly-gl52x/acceptance/smokeresult/log.html#s1-s2-t2-k2-k2}
> JAR does not exist or is not a normal file: 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
> {noformat}
> because the 3.1 test is being run with {{HADOOP_VERSION=3}}:
> {noformat:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-gl52x/acceptance/output.log}
> Creating network "hadoop31_default" with the default driver
> Pulling nm (flokkr/hadoop:3)...
> 3: Pulling from flokkr/hadoop
> Digest: 
> sha256:62e3488e64ff8c0406752fc4f263ae2549e04fedf02534469913c496c6a89d78
> Status: Downloaded newer image for flokkr/hadoop:3
> {noformat}
> which has Hadoop 3.2.0 instead of 3.1.2:
> {noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3 -c 'ls 
> -la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
> -rw-r--r--1 hadoop   flokkr  316570 Jan  8  2019 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar
> {noformat}
> {noformat:title=docker run -it --entrypoint /bin/bash flokkr/hadoop:3.1.2 -c 
> 'ls -la /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples*'}
> -rw-r--r--1 hadoop   flokkr  316380 Jan 29  2019 
> /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar
> {noformat}
> This only happens with {{acceptance.sh}}, not when directly using 
> {{test-all.sh}}, because the former explicitly defines {{HADOOP_VERSION}}:
> {noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dev-support/checks/acceptance.sh#L19}
> export HADOOP_VERSION=3
> {noformat}
> so the correct value from the {{.env}} file is ignored:
> {noformat:title=https://github.com/apache/hadoop/blob/d4ab9aea6f9cbcdcaf48b821e5be04b4e952b133/hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env#L21}
> HADOOP_VERSION=3.1.2
> {noformat}






[jira] [Work started] (HDDS-1878) checkstyle error in ContainerStateMachine

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1878 started by Doroszlai, Attila.
---
> checkstyle error in ContainerStateMachine
> -
>
> Key: HDDS-1878
> URL: https://issues.apache.org/jira/browse/HDDS-1878
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {noformat:title=https://ci.anzix.net/job/ozone/17488/artifact/build/checkstyle.out}
> [ERROR] 
> src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java:[186]
>  (sizes) LineLength: Line is longer than 80 characters (found 85).
> {noformat}






[jira] [Created] (HDDS-1878) checkstyle error in ContainerStateMachine

2019-07-31 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1878:
---

 Summary: checkstyle error in ContainerStateMachine
 Key: HDDS-1878
 URL: https://issues.apache.org/jira/browse/HDDS-1878
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{noformat:title=https://ci.anzix.net/job/ozone/17488/artifact/build/checkstyle.out}
[ERROR] 
src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java:[186]
 (sizes) LineLength: Line is longer than 80 characters (found 85).
{noformat}






[jira] [Updated] (HDDS-1878) checkstyle error in ContainerStateMachine

2019-07-31 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1878:

Status: Patch Available  (was: In Progress)

> checkstyle error in ContainerStateMachine
> -
>
> Key: HDDS-1878
> URL: https://issues.apache.org/jira/browse/HDDS-1878
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17488/artifact/build/checkstyle.out}
> [ERROR] 
> src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java:[186]
>  (sizes) LineLength: Line is longer than 80 characters (found 85).
> {noformat}






[jira] [Commented] (HDDS-1902) Fix checkstyle issues in ContainerStateMachine

2019-08-03 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899557#comment-16899557
 ] 

Doroszlai, Attila commented on HDDS-1902:
-

Hi [~nandakumar131], I think this is already fixed in HDDS-1878.

> Fix checkstyle issues in ContainerStateMachine
> --
>
> Key: HDDS-1902
> URL: https://issues.apache.org/jira/browse/HDDS-1902
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>
> Fix checkstyle issues in ContainerStateMachine:
> Line is longer than 80 characters (found 85).






[jira] [Created] (HDDS-1910) Cannot build hadoop-hdds-config from scratch in IDEA

2019-08-05 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1910:
---

 Summary: Cannot build hadoop-hdds-config from scratch in IDEA
 Key: HDDS-1910
 URL: https://issues.apache.org/jira/browse/HDDS-1910
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Building {{hadoop-hdds-config}} from scratch (e.g. right after checkout or after 
{{mvn clean}}) in IDEA fails with the following error:

{code}
Error:java: Bad service configuration file, or exception thrown while 
constructing Processor object: javax.annotation.processing.Processor: Provider 
org.apache.hadoop.hdds.conf.ConfigFileGenerator not found
{code}






[jira] [Work started] (HDDS-1910) Cannot build hadoop-hdds-config from scratch in IDEA

2019-08-05 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1910 started by Doroszlai, Attila.
---
> Cannot build hadoop-hdds-config from scratch in IDEA
> 
>
> Key: HDDS-1910
> URL: https://issues.apache.org/jira/browse/HDDS-1910
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> Building {{hadoop-hdds-config}} from scratch (e.g. right after checkout or 
> after {{mvn clean}}) in IDEA fails with the following error:
> {code}
> Error:java: Bad service configuration file, or exception thrown while 
> constructing Processor object: javax.annotation.processing.Processor: 
> Provider org.apache.hadoop.hdds.conf.ConfigFileGenerator not found
> {code}






[jira] [Created] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-06 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1916:
---

 Summary: Only contract tests are run in ozonefs module
 Key: HDDS-1916
 URL: https://issues.apache.org/jira/browse/HDDS-1916
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{hadoop-ozone-filesystem}} has 6 test classes that are not being run:

{code}
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
{code}

{code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.245 
s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
[INFO] Running 
org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.042 
s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
[WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
[INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 s 
- in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
{code}
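One plausible cause (an assumption, not confirmed in this thread) is a {{maven-surefire-plugin}} {{<includes>}} override that only matches the contract test naming pattern, so the {{Test*}} classes never match:

```xml
<!-- Hypothetical pom.xml fragment: an <includes> override like this would
     run only the ITest* contract tests and silently skip Test* classes. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>ITest*.java</include>
    </includes>
  </configuration>
</plugin>
```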






[jira] [Updated] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1916:

Affects Version/s: 0.4.0

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}






[jira] [Updated] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1916:

Target Version/s: 0.4.1
  Status: Patch Available  (was: Open)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}






[jira] [Updated] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1916:

Affects Version/s: (was: 0.4.0)
   0.3.0

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}






[jira] [Created] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1918:
---

 Summary: hadoop-ozone-tools has integration tests run as unit
 Key: HDDS-1918
 URL: https://issues.apache.org/jira/browse/HDDS-1918
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: build, test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


HDDS-1735 created separate test runner scripts for unit and integration tests.

Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
{{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
integration tests, and should be run by {{integration.sh}} instead.  There are 
currently only 3 real unit tests:

{noformat}
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
{noformat}

{{hadoop-ozone-tools}} tests take ~6 minutes.

Possible solutions in order of increasing complexity:

# Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of {{unit.sh}} 
(This is similar to {{hadoop-ozone-filesystem}}, which is already run by 
{{integration.sh}} and has 2 real unit tests.)
# Move all integration test classes to the {{hadoop-ozone-integration-test}} 
module, and make it depend on {{hadoop-ozone-tools}} and 
{{hadoop-ozone-filesystem}} instead of the other way around.
# Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
filters for Surefire runs.
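The split behind options 2 and 3 can be derived mechanically: a test class that references {{MiniOzoneCluster}} is an integration test. A runnable sketch of that classification (the file names and contents below are hypothetical stand-ins, not the actual sources):

```shell
# Classify test sources as unit vs integration by whether they reference
# MiniOzoneCluster. The two sample files are hypothetical stand-ins for
# real test classes.
src=$(mktemp -d)
printf 'class TestProgressBar {}\n' > "$src/TestProgressBar.java"
printf 'class TestRandomKeyGenerator { MiniOzoneCluster cluster; }\n' \
  > "$src/TestRandomKeyGenerator.java"
# grep -l prints only the names of files containing the pattern
integration=$(grep -l 'MiniOzoneCluster' "$src"/Test*.java | wc -l)
echo "integration test classes: $integration"
rm -rf "$src"
```

The same grep, pointed at the module's real {{src/test}} tree, would enumerate the classes to move (option 2) or rename (option 3).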






[jira] [Commented] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901117#comment-16901117
 ] 

Doroszlai, Attila commented on HDDS-1918:
-

[~dineshchitlangia], yes, that's why I state these are the "real" unit tests.  
All the other test classes use MiniOzoneCluster, e.g. {{TestRandomKeyGenerator}}.

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.






[jira] [Issue Comment Deleted] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1918:

Comment: was deleted

(was: [~dineshchitlangia], yes, that's why I state these are the "real" unit 
tests.  All the other test classes use MiniOzoneCluster, e.g. 
{{TestRandomKeyGenerator}}.)

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.






[jira] [Commented] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901121#comment-16901121
 ] 

Doroszlai, Attila commented on HDDS-1918:
-

I originally preferred #2, but it seems a bit risky currently (shortly before 
release).  In the short run I'd like to go with #1.

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.






[jira] [Updated] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1918:

Status: Patch Available  (was: Open)

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.






[jira] [Created] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1921:
---

 Summary: TestOzoneManagerDoubleBufferWithOMResponse is flaky
 Key: HDDS-1921
 URL: https://issues.apache.org/jira/browse/HDDS-1921
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
java.lang.AssertionError: expected:<11> but was:<9>
...
at 
org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
at 
org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
{noformat}

{noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
java.lang.AssertionError: expected:<11> but was:<3>
...
at 
org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
at 
org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
{noformat}
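Flaky assertions like the two above are easiest to confirm by re-running the test repeatedly and counting failures. A small pure-shell helper for that (the {{mvn}} invocation in the comment is a hypothetical illustration, not a verified command line):

```shell
# Run a command N times and count how often it fails -- handy for
# estimating how often a flaky test trips, e.g. (illustrative only):
#   flake_count 10 mvn test -Dtest=TestOzoneManagerDoubleBufferWithOMResponse
flake_count() {
  n=$1; shift
  failures=0
  i=0
  while [ "$i" -lt "$n" ]; do
    "$@" || failures=$((failures + 1))
    i=$((i + 1))
  done
  echo "$failures"
}
# Demonstrate with a command that always fails:
failures=$(flake_count 3 false)
echo "failed $failures of 3 runs"
```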






[jira] [Updated] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1921:

Status: Patch Available  (was: Open)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}






[jira] [Updated] (HDDS-1924) ozone sh bucket path command does not exist

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1924:

Target Version/s: 0.4.1  (was: 0.4.0)
  Status: Patch Available  (was: In Progress)

> ozone sh bucket path command does not exist
> ---
>
> Key: HDDS-1924
> URL: https://issues.apache.org/jira/browse/HDDS-1924
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation, Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone sh bucket path command does not exist but it is mentioned in the 
> static/docs/interface/s3.html. The command should either be added back or 
> the documentation should be improved.






[jira] [Updated] (HDDS-1924) ozone sh bucket path command does not exist

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1924:

Component/s: documentation

> ozone sh bucket path command does not exist
> ---
>
> Key: HDDS-1924
> URL: https://issues.apache.org/jira/browse/HDDS-1924
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation, Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> ozone sh bucket path command does not exist but it is mentioned in the 
> static/docs/interface/s3.html. The command should either be added back or 
> the documentation should be improved.






[jira] [Assigned] (HDDS-1924) ozone sh bucket path command does not exist

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1924:
---

Assignee: Doroszlai, Attila

> ozone sh bucket path command does not exist
> ---
>
> Key: HDDS-1924
> URL: https://issues.apache.org/jira/browse/HDDS-1924
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> ozone sh bucket path command does not exist but it is mentioned in the 
> static/docs/interface/s3.html. The command should either be added back or 
> the documentation should be improved.






[jira] [Work started] (HDDS-1924) ozone sh bucket path command does not exist

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1924 started by Doroszlai, Attila.
---
> ozone sh bucket path command does not exist
> ---
>
> Key: HDDS-1924
> URL: https://issues.apache.org/jira/browse/HDDS-1924
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> ozone sh bucket path command does not exist but it is mentioned in the 
> static/docs/interface/s3.html. The command should either be added back or 
> the documentation should be improved.






[jira] [Created] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1925:
---

 Summary: ozonesecure acceptance test broken by HTTP auth 
requirement
 Key: HDDS-1925
 URL: https://issues.apache.org/jira/browse/HDDS-1925
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker, test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila


Acceptance test is failing at {{ozonesecure}} with the following error from 
{{jq}}:

{noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
parse error: Invalid numeric literal at line 2, column 0
{noformat}

Example compose environments wait for datanodes to be up:

{code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
  docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
  wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
{code}

The number of datanodes up is determined via HTTP query of JMX endpoint:

{code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
 #This line checks the number of HEALTHY datanodes registered in scm over the
 # jmx HTTP servlet
 datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
{code}

The problem is that no authentication is performed before or during the 
request; unauthenticated access is no longer allowed since HDDS-1901:

{code}
$ docker-compose exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'



Error 401 Authentication required

HTTP ERROR 401
Problem accessing /jmx. Reason:
Authentication required


{code}

{code}
$ docker-compose exec -T scm curl -s 
'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
 | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
parse error: Invalid numeric literal at line 2, column 0
{code}
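Whatever the eventual authentication fix looks like, the failure mode can be made explicit in the wait loop: the HTML 401 page is not JSON, so it should not be piped to {{jq}} at all. A minimal guard, sketched with a hypothetical sample response standing in for the real {{curl}} output:

```shell
# Guard the jq parse: an HTML error page (such as the 401 response above)
# starts with '<', not '{', so bail out with a clear message instead of
# letting jq fail with "Invalid numeric literal". The sample response is
# a hypothetical stand-in for the curl output.
response='<html><h2>HTTP ERROR 401</h2></html>'
case $response in
  '{'*)
    healthy=$(printf '%s' "$response" |
      jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
    ;;
  *)
    healthy=''
    echo 'non-JSON response from JMX endpoint; authentication may be required'
    ;;
esac
```

With Kerberos tickets in place, {{curl --negotiate -u :}} is the usual way to authenticate against SPNEGO-protected Hadoop HTTP endpoints; whether that is the fix chosen here is an assumption, not confirmed by this issue.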






[jira] [Assigned] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1925:
---

Assignee: Doroszlai, Attila

> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that no authentication is performed before or during the 
> request; unauthenticated access is no longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> 
> 
> 
> Error 401 Authentication required
> 
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> 
> 
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}






[jira] [Work started] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1925 started by Doroszlai, Attila.
---
> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that no authentication is performed before or during the 
> request; unauthenticated access is no longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> 
> 
> 
> Error 401 Authentication required
> 
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> 
> 
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}






[jira] [Created] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-07 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1928:
---

 Summary: Cannot run ozone-recon compose due to syntax error
 Key: HDDS-1928
 URL: https://issues.apache.org/jira/browse/HDDS-1928
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{noformat}
$ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
$ docker-compose up -d --scale datanode=3
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
  in "./docker-compose.yaml", line 20, column 33
{noformat}
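The scanner error "mapping values are not allowed here" typically points at an unquoted scalar that itself contains ": ", which YAML reads as the start of a nested mapping. A hypothetical illustration of the error class (not the actual line 20 of the compose file):

```yaml
environment:
  # Broken: the unquoted value contains ": ", so the parser reports
  # "mapping values are not allowed here" at the second colon
  OZONE_OPT: -Dkey: value
  # Fixed: quoting makes the whole value a single scalar
  OZONE_OPT: "-Dkey: value"
```

Running {{docker-compose config -q}} parses the file and fails fast on such errors, which makes it a cheap check before {{docker-compose up}}.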






[jira] [Work started] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1928 started by Doroszlai, Attila.
---
> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}






[jira] [Updated] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1928:

Status: Patch Available  (was: In Progress)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}






[jira] [Created] (HDDS-1929) OM started on recon host in ozonesecure compose

2019-08-07 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1929:
---

 Summary: OM started on recon host in ozonesecure compose 
 Key: HDDS-1929
 URL: https://issues.apache.org/jira/browse/HDDS-1929
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


OM is started temporarily on {{recon}} host in {{ozonesecure}} compose:

{noformat}
recon_1 | 2019-08-07 19:41:46 INFO  OzoneManagerStarter:51 - STARTUP_MSG:
recon_1 | /
recon_1 | STARTUP_MSG: Starting OzoneManager
recon_1 | STARTUP_MSG:   host = recon/192.168.16.4
recon_1 | STARTUP_MSG:   args = [--init]
...
recon_1 | SHUTDOWN_MSG: Shutting down OzoneManager at recon/192.168.16.4
...
recon_1 | 2019-08-07 19:41:52 INFO  ReconServer:81 - Initializing Recon 
server...
{noformat}






[jira] [Work started] (HDDS-1929) OM started on recon host in ozonesecure compose

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1929 started by Doroszlai, Attila.
---
> OM started on recon host in ozonesecure compose 
> 
>
> Key: HDDS-1929
> URL: https://issues.apache.org/jira/browse/HDDS-1929
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> OM is started temporarily on {{recon}} host in {{ozonesecure}} compose:
> {noformat}
> recon_1 | 2019-08-07 19:41:46 INFO  OzoneManagerStarter:51 - STARTUP_MSG:
> recon_1 | /
> recon_1 | STARTUP_MSG: Starting OzoneManager
> recon_1 | STARTUP_MSG:   host = recon/192.168.16.4
> recon_1 | STARTUP_MSG:   args = [--init]
> ...
> recon_1 | SHUTDOWN_MSG: Shutting down OzoneManager at recon/192.168.16.4
> ...
> recon_1 | 2019-08-07 19:41:52 INFO  ReconServer:81 - Initializing Recon 
> server...
> {noformat}






[jira] [Updated] (HDDS-1929) OM started on recon host in ozonesecure compose

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1929:

Status: Patch Available  (was: In Progress)

> OM started on recon host in ozonesecure compose 
> 
>
> Key: HDDS-1929
> URL: https://issues.apache.org/jira/browse/HDDS-1929
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> OM is started temporarily on {{recon}} host in {{ozonesecure}} compose:
> {noformat}
> recon_1 | 2019-08-07 19:41:46 INFO  OzoneManagerStarter:51 - STARTUP_MSG:
> recon_1 | /
> recon_1 | STARTUP_MSG: Starting OzoneManager
> recon_1 | STARTUP_MSG:   host = recon/192.168.16.4
> recon_1 | STARTUP_MSG:   args = [--init]
> ...
> recon_1 | SHUTDOWN_MSG: Shutting down OzoneManager at recon/192.168.16.4
> ...
> recon_1 | 2019-08-07 19:41:52 INFO  ReconServer:81 - Initializing Recon 
> server...
> {noformat}






[jira] [Updated] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1925:

Status: Patch Available  (was: In Progress)

> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over 
> the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that the request is made without authentication, which is no 
> longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> 
> 
> 
> Error 401 Authentication required
> 
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> 
> 
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}
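A defensive variant of the parsing step would let the retry loop keep polling instead of aborting on a non-JSON body. The sketch below (illustrative Python with a hypothetical {{healthy_count}} helper, not the Ozone test code) returns 0 for an HTML 401 page:

```python
import json

def healthy_count(response: str) -> int:
    """Count HEALTHY datanodes from the SCMNodeManagerInfo JMX payload.

    Returns 0 when the body is not JSON (e.g. an HTML 401 error page),
    instead of failing the way the raw jq pipeline does.
    """
    try:
        beans = json.loads(response)["beans"]
        counts = beans[0]["NodeCount"]
    except (ValueError, KeyError, IndexError, TypeError):
        return 0
    return sum(int(c["value"]) for c in counts if c.get("key") == "HEALTHY")

jmx = '{"beans":[{"NodeCount":[{"key":"HEALTHY","value":3}]}]}'
html = "<html><body>Error 401 Authentication required</body></html>"
print(healthy_count(jmx))   # 3
print(healthy_count(html))  # 0
```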






[jira] [Created] (HDDS-1931) Recon cannot download OM DB snapshot in ozonesecure

2019-08-07 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1931:
---

 Summary: Recon cannot download OM DB snapshot in ozonesecure 
 Key: HDDS-1931
 URL: https://issues.apache.org/jira/browse/HDDS-1931
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker, Ozone Recon
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila


{code}
recon_1 | 2019-08-07 22:09:40 ERROR OzoneManagerServiceProviderImpl:186 - 
Unable to obtain Ozone Manager DB Snapshot.
recon_1 | java.io.IOException: Unexpected exception when trying to reach 
Ozone Manager, 
recon_1 | 
recon_1 | 
recon_1 | Error 401 Authentication required
recon_1 | 
recon_1 | HTTP ERROR 401
recon_1 | Problem accessing /dbCheckpoint. Reason:
recon_1 | Authentication required
recon_1 | 
recon_1 | 
recon_1 |
recon_1 |   at 
org.apache.hadoop.ozone.recon.ReconUtils.makeHttpCall(ReconUtils.java:171)
recon_1 |   at 
org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.getOzoneManagerDBSnapshot(OzoneManagerServiceProviderImpl.java:170)
recon_1 |   at 
org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.updateReconOmDBWithNewSnapshot(OzoneManagerServiceProviderImpl.java:141)
recon_1 |   at 
org.apache.hadoop.ozone.recon.ReconServer.lambda$scheduleReconTasks$1(ReconServer.java:138)
{code}






[jira] [Created] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1934:
---

 Summary: TestSecureOzoneCluster may fail due to port conflict
 Key: HDDS-1934
 URL: https://issues.apache.org/jira/browse/HDDS-1934
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{TestSecureOzoneCluster}} fails if SCM is already running on the same host.

Steps to reproduce:

# Start {{ozone}} docker compose cluster
# Run {{TestSecureOzoneCluster}} test

{noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
[ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 49.821 
s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster) 
 Time elapsed: 6.59 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
...

[ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
Time elapsed: 5.312 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
...

[ERROR] testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster) 
 Time elapsed: 5.312 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
...
{noformat}
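The collision can be reproduced with two sockets, and the usual test-side remedy is to bind port 0 so the OS picks a free ephemeral port instead of a fixed one like SCM's 9876. A Python sketch (illustrative, not the Ozone test code):

```python
import errno
import socket

# Bind port 0 so the OS assigns a free ephemeral port (the usual way to
# avoid hardcoding well-known ports in tests).
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]

# A second bind to the same port fails, like java.net.BindException.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = False
except OSError as e:
    conflict = e.errno == errno.EADDRINUSE
finally:
    second.close()
    first.close()

print(port > 0, conflict)  # True True
```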






[jira] [Work started] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1934 started by Doroszlai, Attila.
---
> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on the same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}






[jira] [Updated] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1934:

Status: Patch Available  (was: In Progress)

> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on the same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}






[jira] [Created] (HDDS-1936) ozonesecure s3 test fails intermittently

2019-08-08 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1936:
---

 Summary: ozonesecure s3 test fails intermittently
 Key: HDDS-1936
 URL: https://issues.apache.org/jira/browse/HDDS-1936
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila


Acceptance tests sometimes fail in the {{ozonesecure}} s3 test, starting with:

{code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s1-t1-k3-k1-k2}
Completed 29 Bytes/29 Bytes (6 Bytes/s) with 1 file(s) remaining
upload failed: ../../tmp/testfile to s3://bucket-07853/testfile An error 
occurred (500) when calling the PutObject operation (reached max retries: 4): 
Internal Server Error
{code}

followed by:

{code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s5-t1}
('Connection aborted.', error(32, 'Broken pipe'))
{code}

in subsequent test cases.
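The {{aws s3 cp}} client already applies a bounded retry ("reached max retries: 4"); the client-side pattern looks roughly like the exponential-backoff loop below (illustrative Python with a hypothetical {{TransientServerError}}, not the actual aws cli code):

```python
import time

class TransientServerError(Exception):
    """Stand-in for an HTTP 500 from the S3 gateway (illustrative)."""

def with_retries(op, max_retries=4, base_delay=0.01):
    """Call op(), retrying transient failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return op()
        except TransientServerError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_put():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientServerError("500 Internal Server Error")
    return "ok"

print(with_retries(flaky_put), calls["n"])  # ok 3
```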






[jira] [Commented] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-08 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903018#comment-16903018
 ] 

Doroszlai, Attila commented on HDDS-1925:
-

Thanks [~nandakumar131].

> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over 
> the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that the request is made without authentication, which is no 
> longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> 
> 
> 
> Error 401 Authentication required
> 
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> 
> 
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}






[jira] [Commented] (HDDS-1937) Acceptance tests fail if scm webui shows invalid json

2019-08-08 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903097#comment-16903097
 ] 

Doroszlai, Attila commented on HDDS-1937:
-

This run didn't include the fix for HDDS-1925:
https://github.com/elek/ozone-ci/blob/master/byscane/byscane-nightly-5b87q/HEAD.txt

> Acceptance tests fail if scm webui shows invalid json
> -
>
> Key: HDDS-1937
> URL: https://issues.apache.org/jira/browse/HDDS-1937
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Acceptance test of a nightly build is failed with the following error:
> {code}
> Creating ozonesecure_datanode_3 ... 
> 
> Creating ozonesecure_kdc_1  ... done
> 
> Creating ozonesecure_om_1   ... done
> 
> Creating ozonesecure_scm_1  ... done
> 
> Creating ozonesecure_datanode_3 ... done
> 
> Creating ozonesecure_kms_1  ... done
> 
> Creating ozonesecure_s3g_1  ... done
> 
> Creating ozonesecure_datanode_2 ... done
> 
> Creating ozonesecure_datanode_1 ... done
> parse error: Invalid numeric literal at line 2, column 0
> {code}
> https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-5b87q/acceptance/output.log
> The problem is in the script which checks the number of available datanodes.
> If the SCM's HTTP endpoint is already started but not yet ready, it may 
> return a plain HTML error page instead of JSON, which cannot be parsed by 
> jq:
> In testlib.sh:
> {code}
>   37   │   if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
>   38   │ docker-compose -f "${compose_file}" exec -T scm bash -c "kinit 
> -k HTTP/scm@EXAMPL
>│ E.COM -t /etc/security/keytabs/HTTP.keytab && curl --negotiate -u : 
> -s '${jmx_url}'"
>   39   │   else
>   40   │ docker-compose -f "${compose_file}" exec -T scm curl -s 
> "${jmx_url}"
>   41   │   fi \
>   42   │ | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | 
> .value'
> {code}
> One possible fix is to adjust the error handling ({{set +e}} / {{set -e}}) per 
> method instead of using a generic {{set -e}} at the beginning, which would give 
> more predictable behavior. In our case count_datanode should never fail (its 
> caller, wait_for_datanodes, can retry anyway).






[jira] [Commented] (HDDS-1936) ozonesecure s3 test fails intermittently

2019-08-08 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903194#comment-16903194
 ] 

Doroszlai, Attila commented on HDDS-1936:
-

This may happen, e.g., when 2 of the 3 datanodes die:
{noformat}
scm_1   | 2019-08-08 15:50:48 INFO  BlockManagerImpl:198 - Could not find 
available pipeline of type:RATIS and factor:THREE even after retrying
scm_1   | 2019-08-08 15:50:48 ERROR BlockManagerImpl:222 - Unable to 
allocate a block for the size: 268435456, type: RATIS, factor: THREE
s3g_1   | 2019-08-08 15:50:48 ERROR ObjectEndpoint:182 - Exception occurred 
in PutObject
s3g_1   | INTERNAL_ERROR org.apache.hadoop.ozone.om.exceptions.OMException: 
Allocated 0 blocks. Requested 1 blocks
...
s3g_1   | 2019-08-08 15:50:48 WARN  HttpChannel:499 - 
//s3g:9878/bucket-74650/testfile
s3g_1   | javax.servlet.ServletException: javax.servlet.ServletException: 
org.glassfish.jersey.server.ContainerException: INTERNAL_ERROR 
org.apache.hadoop.ozone.om.exceptions.OMException: Allocated 0 blocks. 
Requested 1 blocks
{noformat}

> ozonesecure s3 test fails intermittently
> 
>
> Key: HDDS-1936
> URL: https://issues.apache.org/jira/browse/HDDS-1936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Priority: Major
>
> Acceptance tests sometimes fail in the {{ozonesecure}} s3 test, starting with:
> {code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s1-t1-k3-k1-k2}
> Completed 29 Bytes/29 Bytes (6 Bytes/s) with 1 file(s) remaining
> upload failed: ../../tmp/testfile to s3://bucket-07853/testfile An error 
> occurred (500) when calling the PutObject operation (reached max retries: 4): 
> Internal Server Error
> {code}
> followed by:
> {code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s5-t1}
> ('Connection aborted.', error(32, 'Broken pipe'))
> {code}
> in subsequent test cases.






[jira] [Created] (HDDS-1940) `scmcli close` gives false error message

2019-08-09 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1940:
---

 Summary: `scmcli close` gives false error message
 Key: HDDS-1940
 URL: https://issues.apache.org/jira/browse/HDDS-1940
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila


{{scmcli close}} prints an error message about an invalid state transition after 
it has successfully closed the container.

{code:title=CLI}
$ ozone scmcli info 2
...
Container State: OPEN
...

$ ozone scmcli close 2
...
client-09830A377AA9->f27bf787-8711-41d4-b0fd-3ef50b5c076f: receive 
RaftClientReply:client-09830A377AA9->f27bf787-8711-41d4-b0fd-3ef50b5c076f@group-7831D6F2EF1B,
 cid=0, SUCCESS, logIndex=11, commits[f27bf787-8711-41d4-b0fd-3ef50b5c076f:c12, 
37ba33fe-c9ed-4ac2-a6e5-57ce658168b4:c11, 
feb68ba4-0a8a-4eda-9915-7dc090e5f46c:c11]
Failed to update container state #2, reason: invalid state transition from 
state: CLOSED upon event: CLOSE.

$ ozone scmcli info 2
...
Container State: CLOSED
...
{code}

{code:title=logs}
scm_1  | 2019-08-09 15:15:01 [IPC Server handler 1 on 9860] INFO  
SCMClientProtocolServer:366 - Object type container id 1 op close new stage 
begin
dn3_1  | 2019-08-09 15:15:02 [RatisApplyTransactionExecutor 1] INFO  
Container:356 - Container 1 is closed with bcsId 3.
dn1_1  | 2019-08-09 15:15:02 [RatisApplyTransactionExecutor 1] INFO  
Container:356 - Container 1 is closed with bcsId 3.
scm_1  | 2019-08-09 15:15:02 
[EventQueue-IncrementalContainerReportForIncrementalContainerReportHandler] 
INFO  IncrementalContainerReportHandler:176 - Moving container #1 to CLOSED 
state, datanode feb68ba4-0a8a-4eda-9915-7dc090e5f46c{ip: 10.5.1.6, host: 
ozone-static_dn3_1.ozone-static_net, networkLocation: /default-rack, 
certSerialId: null} reported CLOSED replica.
dn2_1  | 2019-08-09 15:15:02 [RatisApplyTransactionExecutor 1] INFO  
Container:356 - Container 1 is closed with bcsId 3.
scm_1  | 2019-08-09 15:15:02 [IPC Server handler 3 on 9860] INFO  
SCMClientProtocolServer:366 - Object type container id 1 op close new stage 
complete
scm_1  | 2019-08-09 15:15:02 [IPC Server handler 3 on 9860] ERROR 
ContainerStateManager:335 - Failed to update container state #1, reason: 
invalid state transition from state: CLOSED upon event: CLOSE.
scm_1  | 2019-08-09 15:15:02 [IPC Server handler 3 on 9860] INFO  Server:2726 - 
IPC Server handler 3 on 9860, call Call#3 Retry#0 
org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.notifyObjectStageChange
 from 10.5.0.71:57746
scm_1  | org.apache.hadoop.hdds.scm.exceptions.SCMException: Failed to update 
container state #1, reason: invalid state transition from state: CLOSED upon 
event: CLOSE.
scm_1  |at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerState(ContainerStateManager.java:336)
scm_1  |at 
org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerState(SCMContainerManager.java:312)
scm_1  |at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.notifyObjectStageChange(SCMClientProtocolServer.java:379)
scm_1  |at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.notifyObjectStageChange(StorageContainerLocationProtocolServerSideTranslatorPB.java:219)
scm_1  |at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16398)
scm_1  |at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
scm_1  |at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
scm_1  |at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
scm_1  |at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
scm_1  |at java.base/java.security.AccessController.doPrivileged(Native 
Method)
scm_1  |at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
scm_1  |at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
scm_1  |at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}
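The logs suggest a race: the incremental container report already moved the container to CLOSED before the {{close ... stage complete}} call fired its CLOSE event, so that late transition is rejected even though the close itself succeeded. A minimal sketch of one possible remedy, with hypothetical types (this is not the real {{ContainerStateManager}}): treat CLOSE on an already-CLOSED container as a no-op instead of an error.

```java
/**
 * Sketch only: a tiny state machine with hypothetical types, illustrating an
 * idempotent CLOSE.  Firing CLOSE on an already-CLOSED container returns the
 * same state instead of raising "invalid state transition".
 */
public class ContainerStateSketch {
    public enum State { OPEN, CLOSING, CLOSED }

    public static State close(State current) {
        switch (current) {
            case OPEN:
            case CLOSING:
                return State.CLOSED;   // normal close path
            case CLOSED:
                return State.CLOSED;   // replica report got there first: no-op
            default:
                throw new IllegalStateException(
                    "invalid state transition from state: " + current + " upon event: CLOSE");
        }
    }

    public static void main(String[] args) {
        // The replica report has already moved the container to CLOSED...
        State afterReport = State.CLOSED;
        // ...so the late "stage complete" notification must not fail.
        System.out.println(close(afterReport));
    }
}
```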



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-09 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1941:
---

 Summary: Unused executor in SimpleContainerDownloader
 Key: HDDS-1941
 URL: https://issues.apache.org/jira/browse/HDDS-1941
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{SimpleContainerDownloader}} has an {{executor}} that's created and shut down, 
but never used.
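The anti-pattern, in a minimal hypothetical sketch (not the real class): an executor field that is created and later shut down, while all the work happens on the calling thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Hypothetical sketch of the defect, not the actual SimpleContainerDownloader:
 *  the pool is created and shut down but nothing is ever submitted to it, so it
 *  only costs idle threads and can simply be removed. */
public class DownloaderSketch implements AutoCloseable {
    private final ExecutorService executor = Executors.newFixedThreadPool(4); // never used

    public String download(String containerId) {
        // Work runs on the calling thread; executor.submit() is never called.
        return "downloaded-" + containerId;
    }

    @Override
    public void close() {
        executor.shutdown(); // dutifully shut down, though nothing was ever scheduled
    }
}
```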






[jira] [Assigned] (HDDS-1322) Hugo errors when building Ozone

2019-03-27 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1322:
---

Assignee: Doroszlai, Attila

> Hugo errors when building Ozone
> ---
>
> Key: HDDS-1322
> URL: https://issues.apache.org/jira/browse/HDDS-1322
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-1322.001.patch
>
>
> I see some odd hugo errors when building Ozone, even though I am not building 
> docs.
> {code}
> $ mvn -B -q clean compile install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dmaven.site.skip=true -DskipShade -Phdds
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> {code}






[jira] [Updated] (HDDS-1322) Hugo errors when building Ozone

2019-03-27 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1322:

Attachment: HDDS-1322.001.patch

> Hugo errors when building Ozone
> ---
>
> Key: HDDS-1322
> URL: https://issues.apache.org/jira/browse/HDDS-1322
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-1322.001.patch
>
>
> I see some odd hugo errors when building Ozone, even though I am not building 
> docs.
> {code}
> $ mvn -B -q clean compile install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dmaven.site.skip=true -DskipShade -Phdds
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> {code}






[jira] [Updated] (HDDS-1322) Hugo errors when building Ozone

2019-03-27 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1322:

Status: Patch Available  (was: Open)

{{generate-site.sh}} passes all of its arguments on to {{hugo}}, which does not 
expect the HDDS version number and target directory.  I think these two 
parameters are unnecessary, hence the patch removes them.

The problem is reproducible using:

{code:title=mvn -Phdds -pl :hadoop-hdds-docs clean compile}
...
[INFO] --- exec-maven-plugin:1.6.0:exec (default) @ hadoop-hdds-docs ---
Error: unknown command "0.5.0-SNAPSHOT" for "hugo"
Run 'hugo --help' for usage.
...
[INFO] BUILD SUCCESS
{code}

With the fix, the docs can be generated using Maven:

{code:title=mvn -Phdds -pl :hadoop-hdds-docs clean compile}
...
[INFO] --- exec-maven-plugin:1.6.0:exec (default) @ hadoop-hdds-docs ---
Building sites …
...
                   |  EN
+------------------+-----+
  Pages            |  27
  Paginator pages  |   0
  Non-page files   |   0
  Static files     |  18
  Processed images |   0
  Aliases          |   0
  Sitemaps         |   1
  Cleaned          |   0

Total in 49 ms
{code}

> Hugo errors when building Ozone
> ---
>
> Key: HDDS-1322
> URL: https://issues.apache.org/jira/browse/HDDS-1322
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-1322.001.patch
>
>
> I see some odd hugo errors when building Ozone, even though I am not building 
> docs.
> {code}
> $ mvn -B -q clean compile install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dmaven.site.skip=true -DskipShade -Phdds
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
> Run 'hugo --help' for usage.
> .../hadoop-hdds/docs/target
> {code}






[jira] [Created] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1351:
---

 Summary: NoClassDefFoundError when running ozone genconf
 Key: HDDS-1351
 URL: https://issues.apache.org/jira/browse/HDDS-1351
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{ozone genconf}} fails due to incomplete classpath.

Steps to reproduce:

# [build and run 
Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
# run {{ozone genconf}} in one of the containers:

{code}
$ ozone genconf /tmp
Exception in thread "main" java.lang.NoClassDefFoundError: 
com/sun/xml/bind/v2/model/annotation/AnnotationReader
  at java.lang.ClassLoader.defineClass1(Native Method)
...
  at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
  at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
  at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
  at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
  at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
  at 
org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
  at 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
  at 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
  at 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
  at picocli.CommandLine.execute(CommandLine.java:919)
...
  at 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
Caused by: java.lang.ClassNotFoundException: 
com.sun.xml.bind.v2.model.annotation.AnnotationReader
  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 36 more
{code}

{{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
{{hadoop-ozone-tools}} classpath.
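A small self-contained illustration (not part of the patch) of turning that late {{NoClassDefFoundError}} into an early, explicit check; the class name is the one from the stack trace above:

```java
/** Probes the classpath for a class without initializing it, so a missing jar
 *  (here jaxb-core) can be reported up front instead of failing deep inside
 *  JAXBContext.newInstance(). */
public class ClasspathProbe {
    public static boolean isPresent(String className) {
        try {
            // initialize=false: we only ask whether the class can be located
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String jaxbCore = "com.sun.xml.bind.v2.model.annotation.AnnotationReader";
        if (!isPresent(jaxbCore)) {
            System.err.println("jaxb-core missing from classpath: " + jaxbCore);
        }
    }
}
```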






[jira] [Work started] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1351 started by Doroszlai, Attila.
---
> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Affects Version/s: 0.4.0

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804238#comment-16804238
 ] 

Doroszlai, Attila commented on HDDS-1351:
-

[~xyao], try e.g. the {{ozones3}} JDK 8-based compose file to reproduce the issue.

However, you are right: JDK 11 makes it worse, because then {{jaxb-api}} is also 
missing, and it fails even earlier:

{code:title=docker exec ozone_datanode_1 ozone genconf /tmp}
Error: Unable to initialize main class 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
{code}

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Attachment: HDDS-1351.001.patch

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-1351.001.patch
>
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Affects Version/s: 0.5.0
 Target Version/s: 0.4.0, 0.5.0
   Status: Patch Available  (was: In Progress)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Attachment: HDDS-1351.002.patch

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-29 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805308#comment-16805308
 ] 

Doroszlai, Attila commented on HDDS-1351:
-

Thank you [~ajayydv] and [~xyao] for the review, and [~xyao] for committing it.

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1910) Cannot build hadoop-hdds-config from scratch in IDEA

2019-08-10 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1910:

Status: Patch Available  (was: In Progress)

> Cannot build hadoop-hdds-config from scratch in IDEA
> 
>
> Key: HDDS-1910
> URL: https://issues.apache.org/jira/browse/HDDS-1910
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Building {{hadoop-hdds-config}} from scratch (eg. right after checkout or 
> after {{mvn clean}}) in IDEA fails with the following error:
> {code}
> Error:java: Bad service configuration file, or exception thrown while 
> constructing Processor object: javax.annotation.processing.Processor: 
> Provider org.apache.hadoop.hdds.conf.ConfigFileGenerator not found
> {code}






[jira] [Created] (HDDS-1949) Missing or error-prone test cleanup

2019-08-10 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1949:
---

 Summary: Missing or error-prone test cleanup
 Key: HDDS-1949
 URL: https://issues.apache.org/jira/browse/HDDS-1949
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Some integration tests do not clean up after themselves.  Some only clean up if 
the test is successful.
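A minimal sketch (plain Java, no JUnit dependency, names made up for illustration) of why cleanup placed on the success path is error-prone: only a {{finally}} block, or the equivalent {{@After}} method, runs whether the test passes or fails.

```java
public class CleanupDemo {
    static boolean resourcesReleased;

    // Simulates a test body. Cleanup written after the assertions would be
    // skipped on failure; cleanup in finally runs on every exit path.
    static void runTest(boolean shouldFail) {
        resourcesReleased = false;
        try {
            if (shouldFail) {
                throw new AssertionError("simulated test failure");
            }
        } finally {
            resourcesReleased = true;  // equivalent of an @After tearDown
        }
    }

    public static void main(String[] args) {
        try {
            runTest(true);
        } catch (AssertionError expected) {
            // the "test" failed, but cleanup still happened
        }
        System.out.println("cleaned up after failure: " + resourcesReleased);
    }
}
```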






[jira] [Work started] (HDDS-1949) Missing or error-prone test cleanup

2019-08-10 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1949 started by Doroszlai, Attila.
---
> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.






[jira] [Created] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-10 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1952:
---

 Summary: TestMiniChaosOzoneCluster may run until OOME
 Key: HDDS-1952
 URL: https://issues.apache.org/jira/browse/HDDS-1952
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila


{{TestMiniChaosOzoneCluster}} is supposed to run its load generator on a cluster 
for 1 minute, but it may run indefinitely until the JVM crashes with an 
OutOfMemoryError.

In the 0.4.1 nightly build it crashed in 29 of 30 runs (and no tests were 
executed in the remaining run due to an unrelated error).

Latest:
https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662

When it crashes, it leaves GBs of data lying around.






[jira] [Created] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1954:
---

 Summary: StackOverflowError in OzoneClientInvocationHandler
 Key: HDDS-1954
 URL: https://issues.apache.org/jira/browse/HDDS-1954
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.

{code}
SLF4J: Failed toString() invocation on an object of type 
[com.sun.proxy.$Proxy85]
Reported exception:
java.lang.StackOverflowError
...
at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
at com.sun.proxy.$Proxy85.toString(Unknown Source)
at 
org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
at 
org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
at 
org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
at 
org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
at com.sun.proxy.$Proxy85.toString(Unknown Source)
...
{code}
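The recursion is visible in the trace: the TRACE statement passes the proxy itself as a log argument, SLF4J formats it by calling {{toString()}}, the call goes back through {{invoke}}, and so on. A minimal sketch of the pattern and a safe alternative (the {{Client}} interface and class names below are made up for illustration, not the actual Ozone client API):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyLoggingDemo {
    interface Client {
        String ping();
    }

    // Returns a dynamic proxy whose handler logs only the method name.
    // Logging the proxy instance itself, e.g.
    //   LOG.trace("Invoking {} on {}", method, proxy)
    // would call proxy.toString(), re-enter this handler, and recurse until
    // StackOverflowError -- the failure mode shown above.
    static Client loggingProxy(Client target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("TRACE invoke: " + method.getName());
            return method.invoke(target, args);
        };
        return (Client) Proxy.newProxyInstance(
            Client.class.getClassLoader(),
            new Class<?>[] {Client.class},
            handler);
    }

    public static void main(String[] args) {
        Client proxied = loggingProxy(() -> "pong");
        System.out.println(proxied.ping());
    }
}
```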






[jira] [Assigned] (HDDS-1908) TestMultiBlockWritesWithDnFailures is failing

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1908:
---

Assignee: Doroszlai, Attila

> TestMultiBlockWritesWithDnFailures is failing
> -
>
> Key: HDDS-1908
> URL: https://issues.apache.org/jira/browse/HDDS-1908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>
> TestMultiBlockWritesWithDnFailures is failing with the following exception
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 30.992 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures
> [ERROR] 
> testMultiBlockWritesWithDnFailures(org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures)
>   Time elapsed: 30.941 s  <<< ERROR!
> INTERNAL_ERROR org.apache.hadoop.ozone.om.exceptions.OMException: Allocated 0 
> blocks. Requested 1 blocks
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:720)
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:752)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateNewBlock(BlockOutputStreamEntryPool.java:248)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateBlockIfNeeded(BlockOutputStreamEntryPool.java:296)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:201)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleRetry(KeyOutputStream.java:376)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleException(KeyOutputStream.java:325)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:231)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
>   at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>   at java.io.OutputStream.write(OutputStream.java:75)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures.testMultiBlockWritesWithDnFailures(TestMultiBlockWritesWithDnFailures.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}




[jira] [Work started] (HDDS-1908) TestMultiBlockWritesWithDnFailures is failing

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1908 started by Doroszlai, Attila.
---
> TestMultiBlockWritesWithDnFailures is failing
> -
>
> Key: HDDS-1908
> URL: https://issues.apache.org/jira/browse/HDDS-1908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>
> TestMultiBlockWritesWithDnFailures is failing with the following exception
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 30.992 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures
> [ERROR] 
> testMultiBlockWritesWithDnFailures(org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures)
>   Time elapsed: 30.941 s  <<< ERROR!
> INTERNAL_ERROR org.apache.hadoop.ozone.om.exceptions.OMException: Allocated 0 
> blocks. Requested 1 blocks
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:720)
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:752)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateNewBlock(BlockOutputStreamEntryPool.java:248)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateBlockIfNeeded(BlockOutputStreamEntryPool.java:296)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:201)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleRetry(KeyOutputStream.java:376)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleException(KeyOutputStream.java:325)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:231)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
>   at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>   at java.io.OutputStream.write(OutputStream.java:75)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures.testMultiBlockWritesWithDnFailures(TestMultiBlockWritesWithDnFailures.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}




[jira] [Updated] (HDDS-1908) TestMultiBlockWritesWithDnFailures is failing

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1908:

Status: Patch Available  (was: In Progress)

> TestMultiBlockWritesWithDnFailures is failing
> -
>
> Key: HDDS-1908
> URL: https://issues.apache.org/jira/browse/HDDS-1908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestMultiBlockWritesWithDnFailures is failing with the following exception
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 30.992 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures
> [ERROR] 
> testMultiBlockWritesWithDnFailures(org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures)
>   Time elapsed: 30.941 s  <<< ERROR!
> INTERNAL_ERROR org.apache.hadoop.ozone.om.exceptions.OMException: Allocated 0 
> blocks. Requested 1 blocks
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:720)
>   at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:752)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateNewBlock(BlockOutputStreamEntryPool.java:248)
>   at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.allocateBlockIfNeeded(BlockOutputStreamEntryPool.java:296)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:201)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleRetry(KeyOutputStream.java:376)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleException(KeyOutputStream.java:325)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:231)
>   at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
>   at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>   at java.io.OutputStream.write(OutputStream.java:75)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures.testMultiBlockWritesWithDnFailures(TestMultiBlockWritesWithDnFailures.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}




[jira] [Work started] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1954 started by Doroszlai, Attila.
---
> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}






[jira] [Updated] (HDDS-1780) TestFailureHandlingByClient tests are flaky

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1780:

Fix Version/s: 0.4.1

> TestFailureHandlingByClient tests are flaky
> ---
>
> Key: HDDS-1780
> URL: https://issues.apache.org/jira/browse/HDDS-1780
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The tests seem to fail because, when a datanode goes down while the stale 
> node interval is set to a low value, containers may get closed early and 
> client writes might fail with a closed-container exception rather than the 
> pipeline failure/timeout exceptions expected by the tests. The fix made here 
> is to tune the stale node interval.
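As a sketch of the tuning described, a test cluster configuration fragment; both the property name and the value below are assumptions and should be checked against {{ozone-default.xml}}:

```xml
<!-- Hypothetical test configuration: raise the stale node interval so that
     containers are not closed before the client observes the pipeline failure -->
<property>
  <name>ozone.scm.stalenode.interval</name>
  <value>30s</value>
</property>
```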






[jira] [Updated] (HDDS-1954) StackOverflowError in OzoneClientInvocationHandler

2019-08-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1954:

Status: Patch Available  (was: In Progress)

> StackOverflowError in OzoneClientInvocationHandler
> --
>
> Key: HDDS-1954
> URL: https://issues.apache.org/jira/browse/HDDS-1954
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Happens if log level for {{org.apache.hadoop.ozone.client}} is set to TRACE.
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [com.sun.proxy.$Proxy85]
> Reported exception:
> java.lang.StackOverflowError
> ...
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.trace(Log4jLoggerAdapter.java:156)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:51)
>   at com.sun.proxy.$Proxy85.toString(Unknown Source)
> ...
> {code}






[jira] [Created] (HDDS-1956) Aged IO Thread exits on first write

2019-08-13 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1956:
---

 Summary: Aged IO Thread exits on first write 
 Key: HDDS-1956
 URL: https://issues.apache.org/jira/browse/HDDS-1956
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first write due to 
exception:

{code}
2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
Aged IO Thread:2139.
...
2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 Exiting 
due to exception
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
thread:2139.
{code}
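An {{ArrayIndexOutOfBoundsException: 1}} on a read path typically means an index computed over a buffer list can reach the list's size. A minimal sketch of that off-by-one and its fix; the names are illustrative, not the actual {{MiniOzoneLoadGenerator}} code:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class RandomIndexDemo {
    // Buggy: the "+ 1" lets the result equal buffers.size(). On a 1-element
    // list this yields index 1, matching the AIOOBE: 1 seen above.
    static int buggyIndex(List<?> buffers) {
        return ThreadLocalRandom.current().nextInt(buffers.size()) + 1;
    }

    // Fixed: nextInt(bound) already returns a value in [0, bound).
    static int safeIndex(List<?> buffers) {
        return ThreadLocalRandom.current().nextInt(buffers.size());
    }

    public static void main(String[] args) {
        List<String> buffers = List.of("only-buffer");
        // safeIndex never exceeds the valid range, even for one element
        System.out.println(buffers.get(safeIndex(buffers)));
    }
}
```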






[jira] [Work started] (HDDS-1956) Aged IO Thread exits on first write

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1956 started by Doroszlai, Attila.
---
> Aged IO Thread exits on first write 
> 
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first write due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}






[jira] [Updated] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1956:

Description: 
Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
exception:

{code}
2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
Aged IO Thread:2139.
...
2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 Exiting 
due to exception
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
thread:2139.
{code}

  was:
Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first write due to 
exception:

{code}
2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
Aged IO Thread:2139.
...
2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 Exiting 
due to exception
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
thread:2139.
{code}


> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}






[jira] [Updated] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1956:

Summary: Aged IO Thread exits on first read  (was: Aged IO Thread exits on 
first write )

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first write due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}
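The {{ArrayIndexOutOfBoundsException: 1}} raised inside {{readData}} is the typical symptom of indexing a parsed array at a fixed position without first checking its length. A minimal, hypothetical Java sketch of that failure pattern; the method names, the underscore delimiter, and the key-name format are illustrative assumptions, not the actual {{MiniOzoneLoadGenerator}} code:

```java
// Hypothetical sketch of the unchecked-index pattern; not the Ozone source.
public class SplitIndexDemo {

    // Unsafe: assumes every key name contains the delimiter.
    // "plainkey".split("_") yields a single-element array, so [1] throws
    // ArrayIndexOutOfBoundsException: 1 -- the same index seen in the log.
    static String unsafeSuffix(String keyName) {
        return keyName.split("_")[1];
    }

    // Guarded variant: check the token count before indexing.
    static String safeSuffix(String keyName) {
        String[] parts = keyName.split("_");
        return parts.length > 1 ? parts[1] : "";
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            unsafeSuffix("plainkey");
        } catch (ArrayIndexOutOfBoundsException e) {
            threw = true;
        }
        assert threw;                                // unchecked index fails
        assert "42".equals(safeSuffix("key_42"));    // guarded index succeeds
        assert "".equals(safeSuffix("plainkey"));    // and degrades gracefully
        System.out.println("ok");
    }
}
```

Whether the real fix takes this shape depends on how {{readData}} derives its index; the sketch only shows why the exception surfaces on the first read rather than at thread start.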






[jira] [Updated] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1956:

Status: Patch Available  (was: In Progress)

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}






[jira] [Commented] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-13 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906203#comment-16906203
 ] 

Doroszlai, Attila commented on HDDS-1952:
-

[~nandakumar131], please note that my pull request is just a "workaround".  I 
think it would be nice to investigate the underlying problem.  I'm fine with 
filing a new Jira issue for that.

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1952
> URL: https://issues.apache.org/jira/browse/HDDS-1952
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 
> 1 minute, but it may run indefinitely until JVM crashes due to 
> OutOfMemoryError.
> In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
> the remaining one run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.






[jira] [Created] (HDDS-1960) TestMiniChaosOzoneCluster may run until OOME

2019-08-13 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1960:
---

 Summary: TestMiniChaosOzoneCluster may run until OOME
 Key: HDDS-1960
 URL: https://issues.apache.org/jira/browse/HDDS-1960
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila
 Fix For: 0.4.1, 0.5.0


{{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 1 
minute, but it may run indefinitely until JVM crashes due to OutOfMemoryError.

In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
the remaining one run due to some other error).

Latest:
https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662

When it crashes, it leaves GBs of data lying around.
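A time bound like "1 minute" only holds if each worker loop checks a deadline (or honors interrupts); otherwise the controlling thread's timeout can expire while the workers keep generating load indefinitely. A minimal sketch of the deadline-check pattern, with hypothetical names; this is not the actual {{MiniOzoneLoadGenerator}} logic:

```java
import java.util.concurrent.TimeUnit;

public class BoundedLoadDemo {

    // Deadline-checked load loop: the time bound is enforced inside the
    // loop itself, so the run cannot continue past the deadline even if
    // no individual operation fails or gets interrupted.
    static long runFor(long millis) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(millis);
        long ops = 0;
        while (System.nanoTime() < deadline) {
            ops++; // placeholder for one read/write operation
        }
        return ops;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long ops = runFor(200);
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        assert ops > 0;           // the loop did some work
        assert elapsedMs < 5_000; // hard bound: it stopped at its deadline
        System.out.println("bounded run finished after ~" + elapsedMs + " ms");
    }
}
```

Without such an in-loop check, the only remaining backstop is the JVM itself, which matches the observed behavior of running until OutOfMemoryError.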






[jira] [Updated] (HDDS-1960) TestMiniChaosOzoneCluster may run until OOME

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1960:

Fix Version/s: (was: 0.4.1)
   (was: 0.5.0)

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1960
> URL: https://issues.apache.org/jira/browse/HDDS-1960
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>
> {{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 
> 1 minute, but it may run indefinitely until JVM crashes due to 
> OutOfMemoryError.
> In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
> the remaining one run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.






[jira] [Updated] (HDDS-1960) TestMiniChaosOzoneCluster may run until OOME

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1960:

Labels:   (was: pull-request-available)

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1960
> URL: https://issues.apache.org/jira/browse/HDDS-1960
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 0.4.1, 0.5.0
>
>
> {{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 
> 1 minute, but it may run indefinitely until JVM crashes due to 
> OutOfMemoryError.
> In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
> the remaining one run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.






[jira] [Assigned] (HDDS-1960) TestMiniChaosOzoneCluster may run until OOME

2019-08-13 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1960:
---

Assignee: (was: Doroszlai, Attila)

> TestMiniChaosOzoneCluster may run until OOME
> 
>
> Key: HDDS-1960
> URL: https://issues.apache.org/jira/browse/HDDS-1960
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Priority: Critical
>
> {{TestMiniChaosOzoneCluster}} runs load generator on a cluster for supposedly 
> 1 minute, but it may run indefinitely until JVM crashes due to 
> OutOfMemoryError.
> In 0.4.1 nightly build it crashed 29/30 times (and no tests were executed in 
> the remaining one run due to some other error).
> Latest:
> https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662
> When it crashes, it leaves GBs of data lying around.
> HDDS-1952 disabled this test in CI runs.  It can still be run manually (e.g. 
> {{mvn -Phdds -pl :hadoop-ozone-integration-test 
> -Dtest=TestMiniChaosOzoneCluster test}}).  The goal of this task is to 
> investigate the root cause of the runaway nature of this test.





