[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238616&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238616
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 07/May/19 15:56
Start Date: 07/May/19 15:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-490140291
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 30 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/726 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 238616)
Time Spent: 3h 20m  (was: 3h 10m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains Robot Framework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh, which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used. And the use of 
> bash lists requires additional complexity in the execute_tests function.
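The "additional complexity" can be seen in a tiny standalone sketch. This is not the real execute_tests from test.sh, only an illustration of the plumbing the array-passing convention forces on the callee: it must peel off the cluster name and rebuild the test list from the remaining positional parameters.

```shell
#!/usr/bin/env bash
# Illustrative sketch only: shows the argument plumbing required when a
# bash array is expanded into a function call after a leading argument.
execute_tests() {
  local compose_dir="$1"   # first argument: the compose environment name
  shift                    # remaining arguments: the test names
  local tests=("$@")       # rebuild the list from positional parameters
  echo "cluster=${compose_dir} tests=${tests[*]}"
}

TESTS=("basic" "s3")
execute_tests ozones3 "${TESTS[@]}"   # prints: cluster=ozones3 tests=basic s3
```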
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh to each of the compose directories. The separated test.sh 
> can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a cleaner and more flexible definition. It's easy to execute just 
> this test, as it's saved to the compose/ozones3 directory.
> Another example, where multiple containers are used to execute tests:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm ozonefs/ozonefs.robot
> export OZONE_HOME=/opt/ozone
> execute_robot_test hadoop32 ozonefs/hadoopo3fs.robot
> execute_robot_test hadoop31 ozonefs/hadoopo3fs.robot
> stop_docker_env
> generate_report
> {code}
> With this separation the definition of the helper methods (e.g. 
> execute_robot_test or stop_docker_env) would also be simplified.

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238617
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 07/May/19 15:56
Start Date: 07/May/19 15:56
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 238617)
Time Spent: 3.5h  (was: 3h 20m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238576
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 07/May/19 15:22
Start Date: 07/May/19 15:22
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281687356
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   OK, after some thinking I understand. It may not be required all the time 
if we have more advanced tests. For example, if the test plan contains a longer 
freon run, the basic test can be removed. 
   
   But it's fast and adds an additional safety level (we don't start any test if 
basic freon doesn't work), so it's not a big problem and we can improve it later.
 



Issue Time Tracking
---

Worklog Id: (was: 238576)
Time Spent: 3h 10m  (was: 3h)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238569
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 07/May/19 15:15
Start Date: 07/May/19 15:15
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281683399
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   Yes, it is. It is the most basic check that the compose folder is still 
usable. (The basic test only checks the availability of the web UI and does a 
freon test with 5*5*5 keys.) Maybe we can decrease the numbers to 1*1*5 
(1 vol, 1 bucket, 5 keys). If we can upload 5 keys, it should be fine.
 



Issue Time Tracking
---

Worklog Id: (was: 238569)
Time Spent: 2h 50m  (was: 2h 40m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238570&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238570
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 07/May/19 15:19
Start Date: 07/May/19 15:19
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-490125301
 
 
   Thanks @arp7 and @xiaoyuyao for the review. I will merge it with the typo 
fixed. 
   
   And this is just the improvement of the framework. As a next step, I would 
like to:
   
1. Remove the intermittency from the acceptance test runs (now it's easier, 
as it's very easy to find the report for a specific test). 

2. Fix ozonefs with the `hdfs dfs` command and enable the unit test
   
3. Enable tests for ozone + mapreduce (now it should be easy, based on the 
README)
 



Issue Time Tracking
---

Worklog Id: (was: 238570)
Time Spent: 3h  (was: 2h 50m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h






[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238079&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238079
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 06/May/19 19:50
Start Date: 06/May/19 19:50
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-489751889
 
 
   +1 from me too. This will allow more acceptance tests to be added easily. 
 



Issue Time Tracking
---

Worklog Id: (was: 238079)
Time Spent: 2h 40m  (was: 2.5h)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h






[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238032&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238032
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 06/May/19 19:22
Start Date: 06/May/19 19:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281307008
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonefs/test.sh
 ##
 @@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm ozonefs/ozonefs.robot
+
+
+## TODO: As of the hhe o3fs tests are unstable.
 
 Review comment:
   Minor: typo.
 



Issue Time Tracking
---

Worklog Id: (was: 238032)
Time Spent: 2h 10m  (was: 2h)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238033&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238033
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 06/May/19 19:22
Start Date: 06/May/19 19:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281307184
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   Is it deliberate to rerun the basic test within each sub-test?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 238033)
Time Spent: 2h 20m  (was: 2h 10m)


[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238034=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238034
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 06/May/19 19:24
Start Date: 06/May/19 19:24
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-489743634
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 238034)
Time Spent: 2.5h  (was: 2h 20m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh, which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts.
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used at all, and the 
> use of bash arrays requires additional complexity in the execute_tests function.
> I propose a very lightweight refactor here. Instead of including both the 
> test definitions AND the helper methods in test.sh, I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh 
> can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a cleaner and more flexible definition, and it's easy to execute just 
> this single test, as it's saved to the compose/ozones3 directory.
> Another example, where multiple containers are used to execute tests:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm ozonefs/ozonefs.robot
> export OZONE_HOME=/opt/ozone
> execute_robot_test hadoop32 ozonefs/hadoopo3fs.robot
> execute_robot_test hadoop31 ozonefs/hadoopo3fs.robot
> stop_docker_env
> generate_report
> {code}
> With this separation, the definition of the helper methods (e.g. 
> execute_robot_test or stop_docker_env) would also be simplified.
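To make that concrete, here is a minimal, hedged sketch of what such a shared helper could look like. Only the function name execute_robot_test comes from the proposal; the dispatch command and result paths are assumptions, and the docker-compose call is stubbed so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Hypothetical core of a shared execute_robot_test helper: run one robot
# suite inside a named docker-compose service and keep a per-suite result
# file for the final report. The real helper lives in the patch's testlib.sh.
execute_robot_test() {
  local container="$1" test="$2"
  # In the real helper this would be: docker-compose exec -T "$container" robot ...
  run_in_container "$container" \
    robot --output "result/robot-${test//\//_}.xml" "smoketest/$test"
}

# Stub standing in for "docker-compose exec", so the sketch is runnable
# without a running cluster.
run_in_container() {
  local container="$1"
  shift
  echo "[$container] $*"
}

execute_robot_test scm basic/basic.robot
```

Because each compose directory's test.sh only calls such helpers, adding a new environment is just a new directory with a short script.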



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236398=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236398
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r280499682
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# Test executor to test all the compose/*/test.sh test scripts.
+#
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*"
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
+for test in $(find $SCRIPT_DIR -name test.sh); do
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
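For reference, a hedged sketch of the quoting fix the bot is asking for. The `find`-over-`test.sh` loop is from the patch; the temp-directory scaffolding below is only there to make the sketch self-contained and runnable:

```shell
#!/usr/bin/env bash
# SC2086 fix: quote "$SCRIPT_DIR" so word splitting and globbing cannot
# break the loop. Using -print0 with read -d '' also survives whitespace in
# paths, which the original IFS=$'\n' workaround only partially handles.
SCRIPT_DIR=$(mktemp -d)                 # stand-in for the compose directory
mkdir -p "$SCRIPT_DIR/demo env"         # a path with a space, on purpose
touch "$SCRIPT_DIR/demo env/test.sh"

count=0
while IFS= read -r -d '' test_script; do
  echo "would run: $test_script"
  count=$((count + 1))
done < <(find "$SCRIPT_DIR" -name test.sh -print0)

echo "count=$count"
rm -rf "$SCRIPT_DIR"
```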
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236398)
Time Spent: 1h 40m  (was: 1.5h)


[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236400=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236400
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-488741108
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 80 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 2 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 613 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 977 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 565 | the patch passed |
   | +1 | compile | 276 | the patch passed |
   | +1 | javac | 276 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | shellcheck | 4 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 238 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1206 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 5635 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux 6f224256abb3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a42745 |
   | Default Java | 1.8.0_191 |
   | shellcheck | 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236399=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236399
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r280499696
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh  
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+
+fi
+
+# shellcheck source=testlib.sh
+source "$COMPOSE_DIR/../testlib.sh"
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
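SC1091 is only a "not following" note: shellcheck cannot resolve the sourced file at lint time. A sketch of the pattern under synthetic directory names (a `# shellcheck source=...` directive pointing at the real relative path, or `source=/dev/null`, silences the note):

```shell
#!/usr/bin/env bash
# Demo of the COMPOSE_DIR/../testlib.sh sourcing pattern, with a synthetic
# testlib.sh so the script runs anywhere. The shellcheck directive tells the
# linter which file the runtime 'source' resolves to (/dev/null = skip it).
base=$(mktemp -d)
mkdir -p "$base/compose/ozones3"
cat > "$base/compose/testlib.sh" <<'EOF'
execute_robot_test() { echo "would run robot test '$2' in container '$1'"; }
EOF

COMPOSE_DIR="$base/compose/ozones3"
export COMPOSE_DIR

# shellcheck source=/dev/null
source "$COMPOSE_DIR/../testlib.sh"

execute_robot_test scm basic/basic.robot
rm -rf "$base"
```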
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236399)
Time Spent: 1h 50m  (was: 1h 40m)


[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=232646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232646
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 25/Apr/19 08:43
Start Date: 25/Apr/19 08:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r278450356
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# Test executor to test all the compose/*/test.sh test scripts.
+#
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*"
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232646)
Time Spent: 1h 10m  (was: 1h)


[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=232647=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232647
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 25/Apr/19 08:43
Start Date: 25/Apr/19 08:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r278450368
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh  
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+
+fi
+
+# shellcheck source=testlib.sh
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232647)
Time Spent: 1h 20m  (was: 1h 10m)


[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=232648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232648
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 25/Apr/19 08:43
Start Date: 25/Apr/19 08:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-486576162
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1254 | trunk passed |
   | +1 | compile | 129 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 654 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 24 | dist in the patch failed. |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | +1 | mvnsite | 23 | the patch passed |
   | -1 | shellcheck | 2 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | shelldocs | 22 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 19 | dist in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3139 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
   | uname | Linux 8006a03f0cac 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b3d41b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/5/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/5/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/5/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232648)
Time Spent: 1.5h  (was: 1h 20m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=229021&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-229021
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 17/Apr/19 11:11
Start Date: 17/Apr/19 11:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r276189261
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# Test executor to test all the compose/*/test.sh test scripts.
+#
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*"
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
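For context, SC2086 warns that an unquoted variable expansion is subject to globbing and word splitting; the exact flagged line is not visible in the hunk above, so the snippet below is a generic, hypothetical illustration of the warning and of the related quoting pitfall visible in `rm "$ALL_RESULT_DIR/*"` (a glob inside quotes is passed literally, so rm looks for a file actually named `*`):

```shell
#!/usr/bin/env bash
# Hypothetical illustration of the SC2086 quoting rule: quote variable
# expansions, but keep an intentional glob OUTSIDE the quotes so it expands.
set -eu

RESULT_DIR="$(mktemp -d)"
touch "$RESULT_DIR/a.log" "$RESULT_DIR/b.log"

# Wrong (SC2086): an unquoted $RESULT_DIR would word-split on spaces.
# Also wrong: rm "$RESULT_DIR/*" quotes the glob, so it never expands.

# Correct form: variable quoted, glob unquoted, -f so an empty dir is no error.
rm -f "$RESULT_DIR"/*

# The directory is now empty.
ls -A "$RESULT_DIR"
rmdir "$RESULT_DIR"
```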
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 229021)
Time Spent: 50m  (was: 40m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used. And the usage of 
> bash lists requires additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh 
> can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a cleaner and more flexible definition. It's easy to execute just 
> this test, as it's saved to the compose/ozones3 directory.
> Other example, where multiple containers are used to execute tests:
> {code}
> 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=229022&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-229022
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 17/Apr/19 11:11
Start Date: 17/Apr/19 11:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-484040144
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1024 | trunk passed |
   | +1 | compile | 78 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 669 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | +1 | mvnsite | 20 | the patch passed |
   | -1 | shellcheck | 0 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | shelldocs | 16 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 23 | dist in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2866 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
   | uname | Linux 3ed4e60f0ff4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d608be6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/4/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/4/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/4/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 229022)
Time Spent: 1h  (was: 50m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND execute the 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=229020&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-229020
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 17/Apr/19 11:11
Start Date: 17/Apr/19 11:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r276189269
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh <container> <robot_test>
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+
+fi
+
+# shellcheck source=testlib.sh
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
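SC1091 here only means that shellcheck, run against the patch in isolation, could not open testlib.sh at lint time; it says nothing about runtime behaviour. A runnable sketch of the sibling-library sourcing pattern, with the directives that address the note (the temp layout and the `greet` function are invented for the demo):

```shell
#!/usr/bin/env bash
# Demo of sourcing a shared library one directory above the compose env,
# mirroring the compose/testlib.sh + compose/<env>/ layout of the patch.
set -eu

base="$(mktemp -d)"
mkdir -p "$base/compose/ozones3"
printf 'greet() { echo "from testlib"; }\n' > "$base/compose/testlib.sh"

COMPOSE_DIR="$base/compose/ozones3"

# shellcheck source=testlib.sh   # tell shellcheck which file is meant
# shellcheck disable=SC1091      # the file may not exist where lint runs
source "$COMPOSE_DIR/../testlib.sh"

greet
rm -rf "$base"
```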
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 229020)
Time Spent: 40m  (was: 0.5h)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used. And the usage of 
> bash lists requires additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh 
> can include common methods from a main shell 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=226676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-226676
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 12/Apr/19 14:05
Start Date: 12/Apr/19 14:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-482587051
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1035 | trunk passed |
   | +1 | compile | 28 | trunk passed |
   | +1 | mvnsite | 27 | trunk passed |
   | +1 | shadedclient | 650 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 18 | the patch passed |
   | +1 | javac | 18 | the patch passed |
   | +1 | mvnsite | 21 | the patch passed |
   | -1 | shellcheck | 2 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | shelldocs | 14 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2753 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
   | uname | Linux 3ac617fd7f4f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 226676)
Time Spent: 0.5h  (was: 20m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND execute 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=226674&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-226674
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 12/Apr/19 14:05
Start Date: 12/Apr/19 14:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r274920281
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,27 @@
+#!/usr/bin/env bash
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*"
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 226674)
Time Spent: 10m
Remaining Estimate: 0h

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used. And the usage of 
> bash lists requires additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh 
> can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a cleaner and more flexible definition. It's easy to execute just 
> this test, as it's saved to the compose/ozones3 directory.
> Other example, where multiple containers are used to execute tests:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm ozonefs/ozonefs.robot
> export OZONE_HOME=/opt/ozone
> execute_robot_test hadoop32 ozonefs/hadoopo3fs.robot
> execute_robot_test hadoop31 ozonefs/hadoopo3fs.robot
> stop_docker_env
> generate_report
> {code}
> With this separation the definition of the helper methods (eg. 
> execute_robot_test or stop_docker_env) would also be simplified.
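To make the proposal concrete, here is a hypothetical sketch of what the shared testlib.sh helpers could look like. The function names follow the examples in the description; the docker-compose invocations and the DRY_RUN switch are illustrative assumptions for the demo, not the actual implementation (generate_report is omitted for brevity):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the shared testlib.sh proposed in the issue.
set -eu

COMPOSE_DIR="${COMPOSE_DIR:-$PWD}"

run() {
  # DRY_RUN=1 prints the command instead of executing it, which keeps
  # the sketch demonstrable without a Docker daemon.
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

start_docker_env() {
  run docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" up -d
}

execute_robot_test() {
  # $1: container name, $2: robot test file or dir relative to smoketest/
  local container="$1" test_path="$2"
  run docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" \
    exec -T "$container" robot "smoketest/$test_path"
}

stop_docker_env() {
  run docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" down
}
```

With these helpers, each compose directory's test.sh reduces to roughly `source ../testlib.sh; start_docker_env; execute_robot_test scm basic/basic.robot; stop_docker_env`, matching the examples quoted above.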



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-04-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=226675=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-226675
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 12/Apr/19 14:05
Start Date: 12/Apr/19 14:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r274920290
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh <container> <robot_test>
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+
+fi
+
+# shellcheck source=testlib.sh
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 226675)
Time Spent: 20m  (was: 10m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used. And the usage of 
> bash lists requires additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh 
> can include common methods from a main shell