[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860284#comment-16860284
 ] 

Eric Yang commented on HDDS-1458:
-

[~arp] Do you have the ${user.name}/ozone:0.5.0-SNAPSHOT docker image locally?  
If the image doesn't exist, the Blockade test may become stuck.  Please make 
sure to run:

{code}
mvn -f pom.ozone.xml clean verify -DskipShade -DskipTests 
-Dmaven.javadoc.skip=true -Pdist,docker-build,it
{code}

The docker-build Maven profile is what ensures the required docker image 
exists, so that the system does not get stuck.  I would like to rename 
docker-build to docker to shorten the command, but that will happen in 
HDDS-1565.
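
Before starting the compose cluster, the missing-image case can be caught with a quick pre-flight check. This is only a sketch: the image name follows the ${user.name}/ozone:0.5.0-SNAPSHOT convention above and the helper names are hypothetical, not part of the Ozone scripts.

```shell
#!/usr/bin/env sh
# Sketch: fail fast when the Ozone image used by the Blockade compose
# file is missing, instead of letting docker-compose hang.  The image
# name convention is an assumption taken from the comment above.
image_present() {
  docker image inspect "$1" > /dev/null 2>&1
}

check_image() {
  if image_present "$1"; then
    echo "found $1"
  else
    echo "missing $1: build it with -Pdist,docker-build first"
    return 1
  fi
}
```

Running something like {{check_image "${USER}/ozone:0.5.0-SNAPSHOT"}} before {{mvn clean verify -Pit}} makes a missing image obvious up front.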

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch, HDDS-1458.014.patch, 
> HDDS-1458.015.patch, HDDS-1458.016.patch, HDDS-1458.017.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects.
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via exec-maven-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}
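
As a rough sketch, the profile described above could wire docker-compose into the integration-test lifecycle like this. This is a hypothetical plugin configuration for illustration only; the committed pom wiring may differ (the compose file path matches the target/docker-compose.yaml seen in the stuck-process listing later in this thread).

{code}
<profile>
  <id>it</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>start-cluster</id>
            <phase>pre-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>-f</argument>
                <argument>${project.build.directory}/docker-compose.yaml</argument>
                <argument>up</argument>
                <argument>-d</argument>
                <argument>--scale</argument>
                <argument>datanode=3</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
{code}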



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860227#comment-16860227
 ] 

Arpit Agarwal commented on HDDS-1458:
-

Looks like the docker-compose command is stuck:
{code}
  502 22487 21238   0 10:20AM ttys0010:00.13 docker-compose -f 
/hadoop/hadoop-ozone/fault-injection-test/network-tests/target/docker-compose.yaml
 up -d --scale datanode=3
  502 22488 22487   0 10:20AM ttys0010:00.36 docker-compose -f 
/hadoop/hadoop-ozone/fault-injection-test/network-tests/target/docker-compose.yaml
 up -d --scale datanode=3
{code}




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860169#comment-16860169
 ] 

Arpit Agarwal commented on HDDS-1458:
-

Unfortunately these tests don't work for me. The test hangs forever at:
{code}
[INFO] No dependencies found.
[INFO] Wrote classpath file 
'/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/network-tests/target/classpath'.
[INFO]
[INFO] --- build-helper-maven-plugin:1.9:attach-artifact 
(attach-classpath-artifact) @ hadoop-ozone-network-tests ---
[INFO]
[INFO] --- maven-jar-plugin:2.5:test-jar (default) @ hadoop-ozone-network-tests 
---
[INFO] Building jar: 
/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/network-tests/target/hadoop-ozone-network-tests-0.5.0-SNAPSHOT-tests.jar
[INFO]
[INFO] --- exec-maven-plugin:1.3.1:exec (default) @ hadoop-ozone-network-tests 
---
{code}

The docker daemon is running. This is a regression from earlier revisions of 
the patch.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857929#comment-16857929
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] Thanks for the commit.  I think you committed patch 17.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857540#comment-16857540
 ] 

Hudson commented on HDDS-1458:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16687 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16687/])
HDDS-1458. Create a maven profile to run fault injection tests. (elek: rev 
f7c77b395f6f01b4e687b9feb2e77b907d8e943f)
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/conftest.py
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (delete) hadoop-ozone/dist/src/main/blockade/README.md
* (delete) hadoop-ozone/dist/src/main/blockade/test_blockade_flaky.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_mixed_failure_three_nodes_isolate.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_scm_isolation.py
* (delete) 
hadoop-ozone/dist/src/main/blockade/test_blockade_datanode_isolation.py
* (delete) hadoop-ozone/dist/src/main/blockade/ozone/cluster.py
* (delete) 
hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure_two_nodes.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/cluster.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md
* (delete) hadoop-ozone/dist/src/main/blockade/conftest.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/__init__.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/compose/docker-compose.yaml
* (delete) hadoop-ozone/dist/src/main/blockade/ozone/__init__.py
* (edit) pom.ozone.xml
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_client_failure.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/blockadeUtils/blockade.py
* (delete) hadoop-ozone/dist/src/main/blockade/clusterUtils/__init__.py
* (add) hadoop-ozone/.gitignore
* (delete) hadoop-ozone/dist/src/main/blockade/test_blockade_client_failure.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/util.py
* (add) hadoop-ozone/fault-injection-test/pom.xml
* (delete) hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_mixed_failure_two_nodes.py
* (delete) hadoop-ozone/dist/src/main/blockade/util.py
* (delete) hadoop-ozone/dist/src/main/blockade/test_blockade_scm_isolation.py
* (edit) hadoop-ozone/pom.xml
* (delete) 
hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure_three_nodes_isolate.py
* (delete) hadoop-ozone/dist/src/main/blockade/blockadeUtils/__init__.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_datanode_isolation.py
* (delete) hadoop-ozone/dist/src/main/blockade/blockadeUtils/blockade.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_flaky.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/compose/docker-config
* (delete) hadoop-ozone/dist/src/main/blockade/clusterUtils/cluster_utils.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/test_blockade_mixed_failure.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/blockadeUtils/__init__.py
* (add) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/__init__.py
* (add) hadoop-ozone/fault-injection-test/network-tests/pom.xml



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857163#comment-16857163
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 27m  
9s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
8s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 43m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 23m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
14s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
15s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 10s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
34s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}299m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
|   | hadoop.ozone.om.TestOzoneManagerHA |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856940#comment-16856940
 ] 

Eric Yang commented on HDDS-1458:
-

[~nandakumar131] found an issue in the rebase for cluster.py.  Patch 17 fixed 
this issue.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856748#comment-16856748
 ] 

Elek, Marton commented on HDDS-1458:


+1 LGTM

Will commit it soon.

Thank you for the continuous improvements, [~eyang].




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855974#comment-16855974
 ] 

Eric Yang commented on HDDS-1458:
-

The failed unit test is not related to this patch.  Any more concerns, [~elek]?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855275#comment-16855275
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
56s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
6s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
12s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
13s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}173m 53s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}377m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855100#comment-16855100
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] 
{quote}1. please use _file_ instead of os.getcwd{quote}

Using a script to discover dependent file locations is a poor design choice.  
This type of design has been listed among [software bugs with significant 
consequences|https://en.wikipedia.org/wiki/List_of_software_bugs] in references 
38 and 49.  The risk increases over time as new developers refactor the code 
or call it differently than originally designed.  I have given more warnings 
than I should have in this area, and I encourage you to put some thought into 
removing this type of coding practice from the Ozone code base.  Patch 16 uses 
the script location, rather than the current directory, as the reference for 
finding the docker-compose file.
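
For illustration, resolving the compose file relative to the module rather than the working directory can look like this. This is a minimal sketch with a hypothetical ../compose layout; the actual blockade test layout may differ.

```python
import os

# Directory containing this script, resolved once at import time so the
# result does not change if the caller later chdir()s elsewhere.
_SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))

def compose_path():
    """Return an absolute path to docker-compose.yaml anchored at this
    script's location rather than os.getcwd(), so the lookup works no
    matter which directory the tests are invoked from."""
    return os.path.normpath(
        os.path.join(_SCRIPT_DIR, "..", "compose", "docker-compose.yaml"))
```

Anchoring on the resolved script directory is what makes the path stable across calling conventions (direct invocation, pytest, Maven exec).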

{quote}2. please don't add any new docker-compose files to the dist 
package{quote}

This is fixed in patch 16.

{quote}Can you please also remove this fragment from the 
dependencyManagement:{quote}

Patch 16 fixed this issue.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854492#comment-16854492
 ] 

Elek, Marton commented on HDDS-1458:


Can you please also remove this fragment from the dependencyManagement:

{code}
+  
+org.apache.hadoop
+hadoop-ozone-dist
+tar.gz
+${ozone.version}
+  
{code}

We don't have tar.gz artifact from hadoop-ozone-dist (yet!). I think it's 
confusing to add it (now).




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-06-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854490#comment-16854490
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the answer [~eyang]. Unfortunately my two concerns are still not 
addressed (but I may have missed something).

TLDR:

 1. please use __file__ instead of os.getcwd
 2. please don't add any new docker-compose files to the *dist* package (but 
feel free to move any existing one to another location inside the dist folder 
*without* modification).

Would you be so kind as to address these comments?

In more details:

(1) Before this patch:

{code}
#works
python -m pytest -s  tests/blockade/

#works
cd tests/blockade
python -m pytest -s .
{code}

After this patch:

{code}
#works
python -m pytest -s  tests/blockade/

#FAILED
cd tests/blockade
python -m pytest -s .
{code}

As the original version supports more use cases (not just the documented one 
but also others used during CI/CD), I would prefer to keep the original one.

And anyway: this issue is about "Create a maven profile to run fault 
injection tests".  I think this change is unrelated to that goal (for 
example, the goal can be achieved by my PR without this modification).

I tried to check it with symlinks but it worked well.  If you can help to 
reproduce the problem, please create a separate issue where the problem is 
clearly specified and the steps to reproduce it are defined.  It would be a 
great help.

2. After the build I can see the new docker-compose files under 
hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose.

They can't be used in any way from the dist package. Can you please remove 
them?






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-31 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853421#comment-16853421
 ] 

Eric Yang commented on HDDS-1458:
-

Rebased patch 015 to current trunk.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852376#comment-16852376
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] 

Patch 14 keeps the ozoneblockade files in dist.  There is a separate set of 
compose files which run during maven verify using the ozone image.  This 
helps to keep the hadoop-runner and apache/ozone images separated, and both 
can be tested.

{code}os.path.dirname(os.path.dirname(os.path.realpath(__file__))){code}

os.path.realpath has some limitations with [recursive 
symlinks|https://bugs.python.org/issue11397], directory prefixes starting 
with ~, and [changed directories|https://bugs.python.org/issue24670].  We 
cannot guarantee that users don't create a symlink to OZONE_HOME, nor can we 
guarantee that users don't expand the ozone tarball in a home directory 
containing symlinks.  Pytest uses os.chdir to create a temp directory for 
report generation.  The chance of running into problems is much higher when 
using os.path.realpath(__file__) in permutations that were not thought out.

Many of these issues are only addressed in Python 3.4+, and the fixes are not 
available in Python 2.7, which we are stuck on because of the pytest and 
blockade versions in use.  This is my reasoning for using os.getcwd() as the 
OZONE_HOME reference.  I admit that getcwd() may inconvenience individuals 
who have developed the habit of running the python code from the 
tests/blockade directory.  However, that was never in the documentation, and 
there is a shorter route: mvn clean verify -Pit.  May I suggest keeping 
getcwd() until we can move to newer versions of python, pytest and blockade?
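A small standalone demonstration (not Ozone code; the directory names are 
made up) of the symlink behavior underlying this concern: os.path.realpath 
resolves symlinks, so a tree reached through a symlinked OZONE_HOME is 
reported at its physical location rather than the path the user actually 
typed, while os.path.abspath preserves the typed path.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
real_dir = os.path.join(tmp, "ozone-0.5.0")  # the physical install dir
link_dir = os.path.join(tmp, "ozone")        # a user-created symlink to it
os.mkdir(real_dir)
os.symlink(real_dir, link_dir)

# realpath follows the symlink back to the physical path...
resolved = os.path.realpath(link_dir)
# ...while abspath keeps the path exactly as the user referenced it.
typed = os.path.abspath(link_dir)

print(resolved)  # ends with .../ozone-0.5.0
print(typed)     # ends with .../ozone
```

Any path computed from a realpath-resolved location therefore points into the 
physical tree, which may surprise users who manage installs through symlinks.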




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851944#comment-16851944
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the update [~eyang], and I apologize if my previous comment was 
not clear enough.

Please keep all the docker-compose files of the dist package in their current 
form. If you think it's better to *move* compose/ozoneblockade to 
test/blockade/compose, I am fine with that, but please don't modify the 
content (it was not modified at my previous review, but it was modified 
somewhere between patch 10 and patch 12).

And please keep the original method of finding the docker-compose files *as a 
default* option.

To be more precise, I would suggest using:

{code}
if "MAVEN_TEST" in os.environ:
  compose_dir = environ.get("MAVEN_TEST")
  FILE = os.path.join(compose_dir, "docker-compose.yaml")
elif "OZONE_HOME" in os.environ:
  compose_dir = environ.get("OZONE_HOME")
  FILE = os.path.join(compose_dir, "compose", "ozone", \
 "docker-compose.yaml")
else:
  parent_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
  FILE = os.path.join(parent_dir, "compose", "ozone", \
     "docker-compose.yaml")
{code}

Instead of

{code}
if "MAVEN_TEST" in os.environ:
  compose_dir = environ.get("MAVEN_TEST")
  FILE = os.path.join(compose_dir, "docker-compose.yaml")
elif "OZONE_HOME" in os.environ:
  compose_dir = environ.get("OZONE_HOME")
  FILE = os.path.join(compose_dir, "compose", "ozone", \
 "docker-compose.yaml")
else:
  compose_dir = os.getcwd()
  FILE = os.path.join(compose_dir, "compose", "ozone", \
 "docker-compose.yaml")
{code}

Because it's less error-prone. It's 100% compatible with the documentation, 
but it enables running the blockade tests from different directories, which 
helps us simplify CI jobs.
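The suggested precedence (MAVEN_TEST, then OZONE_HOME, then the script 
location) can also be condensed into a small helper; this is only a sketch 
with a hypothetical function name, not the actual patch:

```python
import os

def locate_compose_file(env, script_path):
    """Return the docker-compose.yaml path using the precedence above:
    MAVEN_TEST, then OZONE_HOME, then the directory two levels above
    the calling script (pass os.environ and __file__ in real use)."""
    if "MAVEN_TEST" in env:
        # Maven localizes the compose file directly into this directory.
        return os.path.join(env["MAVEN_TEST"], "docker-compose.yaml")
    if "OZONE_HOME" in env:
        base = env["OZONE_HOME"]
    else:
        # Fall back to the script's own location, resolved via realpath.
        base = os.path.dirname(os.path.dirname(os.path.realpath(script_path)))
    return os.path.join(base, "compose", "ozone", "docker-compose.yaml")
```

A caller would typically invoke it as `locate_compose_file(os.environ, 
__file__)`, so the fallback only depends on the script file when neither 
environment variable is set.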

Thanks a lot.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850312#comment-16850312
 ] 

Eric Yang commented on HDDS-1458:
-

The license errors are false positives, caused by HADOOP-16323.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850293#comment-16850293
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 23m  
4s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
9s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
10s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}180m 17s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
13s{color} | {color:red} The patch generated 17 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}400m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdds.scm.block.TestBlockManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850018#comment-16850018
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}I didn't say that os.getcwd should be removed. I asked to keep the more 
generic solution (which was used until now) as the default third option. 
Nothing more.{quote}

os.path.realpath is not a more generic solution.  It finds the script 
location, but the script can be moved around the tarball, which causes the 
relative location of the compose files to change.  MAVEN_TEST, OZONE_HOME, 
and os.getcwd() all make the same logical assertion to discover the base 
directory of the Ozone project, where the docker compose file is placed in a 
subdirectory.  os.getcwd() is more in line with the documentation, which says 
the script is to be executed from OZONE_HOME.  Environment-variable-assisted 
discovery of OZONE_HOME is icing on the cake for downstream developers who 
create RPMs or docker images: they can set environment variables in 
/etc/profile.d to locate OZONE_HOME, and the scripts can be moved to /usr/bin 
or a destination of their choice.  The scripts are also more flexible to 
relocate within the tarball later on, by changing the python command 
execution path in the documentation.

{quote}We also agreed that all of the tests should be supported in both 
ways.{quote}

I do not fully agree with this personally.  As I have said earlier, keeping 
the existing tests in the release tarball is an unfortunate choice that we 
have to live with.  I don't think steering more integration test artifacts 
into the release tarball is the right direction.  It only creates a messy 
situation and over-claims that the integration test code can be used for a 
different purpose, as a check-engine indicator for an actual distributed 
cluster.  This will only create frustration for Ozone admins who want to 
depend on the Ozone code, but can't.

{quote}I think it's a good consensus, let's modify the patch according to this. 
Can you please keep the docker-compose files in the dist in the original 
form?{quote}

The files were removed from the tarball based on your feedback.  They were 
restored again, based on your feedback, in patch 14.  From my point of view, 
the tests can work with compose files from either ozone or ozoneblockade.  I 
thought you wanted to remove the duplication, but I could be wrong.  What was 
the reason you suggested removing them in the first place, and what is the 
rationale for restoring them?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-28 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849397#comment-16849397
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much for explaining it [~eyang]

1. os.getcwd is the third option in your if condition. I think all of your use 
cases can be supported by OZONE_HOME or MAVEN_TEST. 

bq.  The patched code is written to support the most frequent scenarios rather 
than making only one assumption that the script is always fixated to the 
relative location of docker compose file in binary release tarball

I didn't say that os.getcwd should be *removed*. I asked to keep the more 
generic solution (which was used until now) as the default third option. 
Nothing more.

bq. I feel the conversation has not been fruitful, with you insisting on 
doing everything the old way, which has been identified as a non-scalable 
approach

Yes, I agree that it's a too-long conversation, and not the most effective 
one.  Fortunately we reached the following consensus:

* Keep the current approach of running tests and the current distribution 
practice as is.
* Support another method (proposed by you) as part of the maven build 
(approach 1: running the tests from the dist with hadoop-runner; approach 2: 
running tests from maven based on pre-built images).
* We also agreed that all of the tests should be supported in both ways.

I think it's a good consensus, let's modify the patch according to this. Can 
you please keep the docker-compose files in the dist in the original form?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849256#comment-16849256
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] 1. The purpose of this change is to support the frequent use cases 
where the script is moved to a location other than its relative position in 
the OZONE_HOME structure, for example the source code location, which differs 
from the binary package layout.  The patched code is written to support the 
most frequent scenarios rather than making only one assumption, that the 
script is always fixated at the relative location of the docker compose file 
in the binary release tarball.  Ozone 0.4.0 documented how to run the script 
in a particular way; therefore, the code has been corrected to match the 
documentation.  I do not agree that removing os.getcwd() is the right 
solution.  Your subconscious mind insisted on undocumented behavior:

{code}
cd $OZONE_HOME/blockade
python -m pytest -s .
{code}

However, this is not what is documented, even though the README is located in 
the blockade directory.  The documented procedure starts from OZONE_HOME, the 
top level of the Ozone tarball.  Therefore, the code more accurately reflects 
the documentation by using getcwd() as OZONE_HOME to locate the compose file.  
If the blockade tests are ever moved again within the tarball structure, 
getcwd() as OZONE_HOME still references the compose file location correctly.  
The script-path approach is actually less optimal, because any later decision 
to change the python script location will require code changes to discover 
the new relative location of the compose file.  It is also very common for 
package maintainers to move scripts into /usr/bin and use an environment 
variable to locate the rest of the binaries.  With the current code, a script 
location change is less work to maintain, IMHO.

2. {quote}You don't need to set both as setting MAVEN_TEST is enough. I would 
suggest to remove one of the environment variables.{quote}

I know that.  It is just a convenience for the reader that both are 
supported, and that MAVEN_TEST takes precedence over OZONE_HOME, for new 
developers.

3. {quote}BTW, I think it would better to use one environment variable: 
FAULT_INJECTION_COMPOSE_DIR. Using OZONE_HOME or MAVEN_TEST we don't have the 
context how is it used (in fact just to locate the compose files). Using more 
meaningful env variable name can help to understand what is the goal of the 
specific environment variables.{quote}

We don't want to define an extra environment variable for a single purpose.  
When one variable can solve multiple problems, it is worth the time to define 
it: for example, to figure out the location of a program, it is worth 
defining JAVA_HOME or OZONE_HOME.  If FAULT_INJECTION_COMPOSE_DIR points to 
the same compose files as the rest of the release tarball, then defining this 
unique name is a waste of time and labor.

4. {quote}The content of the docker-compose files in 
hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose are changed. As we 
agreed multiple times, we should keep the current content of the docker-compose 
files. Please put back the volumes to the docker-compose files and please use 
apache/hadoop-runner image.{quote}

The removal of the ozoneblockade compose file is based on point 1 of [your 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16845291].
  I also confirmed that the tests can run with the global docker compose file 
in the release tarball.  I do not understand why you insist on using a 
development-style docker image with the release tarball.  The current patch 
makes no change to the global docker compose file.  I am confused by your 
statement about putting volumes back into the docker-compose files or using 
the apache/hadoop-runner image.

I feel the conversation has not been fruitful, with you insisting on doing 
everything the old way, which has been identified as a non-scalable approach.  
Chasing each other in circles over these lengthy conversations has not been 
productive.  Please help with a breakthrough in the conversation rather than 
insisting on going back to the broken model, when I did nothing to break it 
and followed your words accurately.

5. {quote}hadoop-ozone/dist depends on network-tests, as it copies files from 
network-tests. This is not a hard dependency as of now, since we copy the files 
directly from the src folder (a build is not required), but I think it is 
clearer to add a provided dependency to hadoop-ozone/dist to show its 
dependency. (As I remember, you also suggested using maven instead of a direct 
copy from dist-layout stitching.){quote}

The conversation was dissected to move the docker build and maven assembly 
changes to HDDS-1495.  It seems that the assembly and dependency issue will 
not be addressed unless the code are 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-27 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848659#comment-16848659
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much for the update [~eyang]

1. 

bq. In maven environment, it will try to use docker compose file in 
${basedir}/target because it is localized with additional properties like 
version number. The script path does not tell us where localized version of 
docker compose file is relative to the script. This is the reason that the 
script computes the current working directory for maven environment.

As I wrote earlier, the current-working-directory approach is not as flexible 
as the earlier solution. For example you can do:

{code}
python -m pytest -s tests/blockade 
{code}

But you can't do

{code}
cd tests/blockade
python -m pytest -s .
{code}

Earlier this was supported, and you don't need to do anything special: just 
keep the old method and use
 
{code}
os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
{code}

instead of 

{code}
os.getcwd()
{code}

Can you please do that?
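A minimal sketch of why the {{\_\_file\_\_}}-based lookup is stable while {{os.getcwd()}} is not (the helper name and test-module path below are made up for illustration):

```python
import os

def locate_test_root(module_path):
    # Two dirname() calls walk from e.g. tests/blockade/test_x.py up to
    # the directory containing the tests package. The result depends only
    # on where the module file lives, not on the directory pytest was
    # launched from, so both invocation styles above keep working.
    return os.path.dirname(os.path.dirname(os.path.realpath(module_path)))
```

By contrast, a lookup built on `os.getcwd()` changes meaning as soon as the user runs `cd tests/blockade` before invoking pytest.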

2. Thank you very much for explaining the roles of MAVEN_TEST and OZONE_HOME. 
My question was about hadoop-ozone/network-tests/pom.xml:

{code}
<environmentVariables>
  <OZONE_HOME>${ozone.home}</OZONE_HOME>
  <MAVEN_TEST>${project.build.directory}</MAVEN_TEST>
</environmentVariables>
{code}

You don't need to set *both*: setting MAVEN_TEST is enough. I would suggest 
removing one of the environment variables.

3. BTW, I think it would be better to use one environment variable: 
FAULT_INJECTION_COMPOSE_DIR. With OZONE_HOME or MAVEN_TEST we don't have the 
context of how it is used (in fact, it is just to locate the compose files). A 
more meaningful env variable name helps to make clear what the goal of the 
specific environment variable is.

4. The content of the docker-compose files in 
hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose has changed. As we 
agreed multiple times, we should keep the *current* content of the 
docker-compose files. Please put the volumes back into the docker-compose 
files, and please use the apache/hadoop-runner image.

5. hadoop-ozone/dist depends on network-tests, as it copies files from 
network-tests. This is not a hard dependency as of now, since we copy the files 
directly from the src folder (a build is not required), but I think it is 
clearer to add a provided dependency to hadoop-ozone/dist to show its 
dependency. (As I remember, you also suggested using maven instead of a direct 
copy from dist-layout stitching.)
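Such a provided-scope declaration in hadoop-ozone/dist/pom.xml might look like the following sketch (the artifactId and version property are assumptions, not the actual patch content):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-network-tests</artifactId>
  <version>${ozone.version}</version>
  <scope>provided</scope>
</dependency>
{code}

A provided scope documents the ordering relationship without bundling the test artifact into the distribution.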

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847649#comment-16847649
 ] 

Eric Yang commented on HDDS-1458:
-

The failed unit tests are not related to patch 013.



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847258#comment-16847258
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
4s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 23m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
9s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
9s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}160m 45s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}362m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847137#comment-16847137
 ] 

Eric Yang commented on HDDS-1458:
-

[~arp] Yes.  Patch 013 removes the dependency for now; once HDDS-1495 creates 
the tarball with the assembly plugin and installs it into the maven repository 
cache, the dependency can be established.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847114#comment-16847114
 ] 

Arpit Agarwal commented on HDDS-1458:
-

[~eyang] helped me debug this offline. Looks like this error was introduced by 
the following dependency:
{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-dist</artifactId>
  <type>tar.gz</type>
</dependency>
{code}

Can we leave this dependency out for now?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847048#comment-16847048
 ] 

Arpit Agarwal commented on HDDS-1458:
-

I get the following error with the v12 patch:

{code}
~/ozone/hadoop-ozone/fault-injection-test/network-tests$ mvn verify -Pit
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
[INFO] Scanning for projects...
[INFO]
[INFO] < org.apache.hadoop:hadoop-ozone-network-tests >
[INFO] Building Apache Hadoop Ozone Network Tests 0.5.0-SNAPSHOT
[INFO] [ jar ]-
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  1.419 s
[INFO] Finished at: 2019-05-23T13:58:08-07:00
[INFO] 
[ERROR] Failed to execute goal on project hadoop-ozone-network-tests: Could not 
resolve dependencies for project 
org.apache.hadoop:hadoop-ozone-network-tests:jar:0.5.0-SNAPSHOT: Failure to 
find org.apache.hadoop:hadoop-ozone-dist:tar.gz:0.5.0-SNAPSHOT in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
{code}

I was not seeing this before. I had already done a clean build+install before 
running _mvn verify_.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846341#comment-16846341
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
1s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m  
9s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
10s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
11s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}327m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
|   

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846289#comment-16846289
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
6s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 24m 
32s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
56s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
57s{color} | {color:red} fault-injection-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
51s{color} | {color:red} network-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 20m 
12s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 20m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  4m 
44s{color} | {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
11s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
11s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  7m 
59s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 28s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846198#comment-16846198
 ] 

Eric Yang commented on HDDS-1458:
-

Patch 012 addresses 1, 2, 4, 5, 6, and 7 in [~elek]'s [previous 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16845291],
 with the assumption that the ozoneblockade compose files are removed.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846131#comment-16846131
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}Can you please delete the original one from the 
compose/ozoneblockade dir of the dirst tar.{quote}

Sorry, I am confused by this ask.  The compose/ozoneblockade dir is already 
removed from the dist project.  Do you want me to remove compose/ozoneblockade 
from the tarball too?  If yes, this implies that the python script will default 
to using $OZONE_HOME/compose/ozone as the source of yaml files for testing.  
If no, please clarify.

For clarity, this is how the script locates the docker-compose file:
{code}
import os

if "MAVEN_TEST" in os.environ:
  # Maven build: the compose file is localized into the build directory.
  compose_dir = os.environ.get("MAVEN_TEST")
  FILE = os.path.join(compose_dir, "docker-compose.yaml")
elif "OZONE_HOME" in os.environ:
  # Installed layout: resolve relative to OZONE_HOME.
  compose_dir = os.environ.get("OZONE_HOME")
  FILE = os.path.join(compose_dir, "compose", "ozone", "docker-compose.yaml")
else:
  # Expanded tarball: resolve relative to the current working directory.
  compose_dir = os.getcwd()
  FILE = os.path.join(compose_dir, "compose", "ozone", "docker-compose.yaml")
{code}

# MAVEN_TEST is self-explanatory: it locates the docker compose file in the 
maven build directory.
# OZONE_HOME is used when the binary executable is symlinked to a location 
such as /usr/bin, where the PATH variable can find the script via the symlink 
but the OZONE_HOME variable is required to locate the compose file.
# If the system environment is not configured and no OZONE_HOME variable is 
defined, the script looks for the compose file relative to the current 
directory.  This style is used when expanding the tarball and running directly.

Hope this explains the three environments that the script has been improved to 
support.
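The three lookup modes above can be condensed into a single helper.  This is a 
minimal sketch only; the function name and signature are hypothetical and not 
part of the patch:

```python
import os

def resolve_compose_file(env, cwd):
    """Locate docker-compose.yaml with the precedence described above:
    MAVEN_TEST (maven build dir) > OZONE_HOME (installed layout) > cwd
    (expanded tarball)."""
    if "MAVEN_TEST" in env:
        # Maven localizes the compose file directly into the build directory.
        return os.path.join(env["MAVEN_TEST"], "docker-compose.yaml")
    # Installed layout and expanded tarball share the compose/ozone layout.
    base = env.get("OZONE_HOME", cwd)
    return os.path.join(base, "compose", "ozone", "docker-compose.yaml")
```

Under maven, MAVEN_TEST would point at ${basedir}/target, so the helper 
resolves to the localized compose file there; otherwise it falls back to the 
installed or tarball layout.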




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846089#comment-16846089
 ] 

Eric Yang commented on HDDS-1458:
-

Moved the start-build-env.sh improvement to a HADOOP issue.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845508#comment-16845508
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}Can you please delete the original one from the compose/ozoneblockade 
dir of the dist tar.{quote}

Sure.

{quote}Can you please remove the -dev compose file? It can be confusing and 
this is required only during the maven builds (In the current form the 
dist-layout-stitching lines are duplicated. Correct me if I am wrong, but the 
compose files are copied twice).{quote}

The two compose files were structured to handle the maven and tarball 
environments independently.  With the recent changes to the docker image build 
process, we can remove the -dev compose file.

{quote}Please use path which is relative to the py file instead of 
getcwd:{quote}

In the maven environment, the script uses the docker compose file in 
${basedir}/target because it is localized there with additional properties 
such as the version number.  The script's own path does not tell us where the 
localized docker compose file is relative to the script.  This is why the 
script computes the current working directory in the maven environment.

{quote}tests prefix is missing from README.md L38{quote}

Will check.

{quote}I didn't have time yet to understand the ./start-build-env.sh changes. 
But it may be better to do this in a HADOOP jira to get more visibility. If I 
understood well, it's not directly required but a generic improvement to make 
it possible to run the ozone tests in the build docker containers. What do you 
think?{quote}

Ok, then we will need that patch checked in before this one for maven verify 
to work.

{quote}Can you please help me to understand why do we need to set OZONE_HOME in 
hadoop-ozone/fault-injection-test/network-tests/pom.xml. The MAVEN_HOME seems 
to be enough.{quote}

That is to make the script more robust: if we want to run the python script 
when the docker compose files are not placed in the same location as the 
script, OZONE_HOME gives us a frame of reference for locating the required 
files.  I make a habit of using a _HOME variable to look up relative paths; if 
we decide to move the compose files to share/ozone/compose, it will be easier 
to update the reference without depending on the script's location.

{quote} Not clear the expected relationship between the network-tests and dist. 
Can we add a provided dependency between them to make it explicit?{quote}

They are independent of each other, so there is no need for an explicit 
dependency.  With some more improvements to the docker compose files, we might 
be able to build in parallel to reduce the time it takes to run the fault 
injection tests: mvn -T 4 clean verify -Pit





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-21 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845291#comment-16845291
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much for the update, [~eyang].

As I see, the blockade compose files have been moved from compose/ozoneblockade 
to tests/compose. I am fine with this change, but:

1. Can you please delete the original one from the compose/ozoneblockade dir of 
the dist tar?

2. Can you please remove the -dev compose file? It can be confusing and is 
required only during the maven builds. (In the current form the 
dist-layout-stitching lines are duplicated. Correct me if I am wrong, but the 
compose files are copied twice.)

3. Please use a path which is relative to the py file instead of getcwd:

{code}
os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
{code}

instead of 

{code}
os.getcwd()
{code}

The latter depends on the current directory. The former makes it easy to run 
the tests from any directory.
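The difference can be shown with a small sketch; the helper name below is 
hypothetical and only illustrates the suggestion:

```python
import os

def script_relative_root(script_path):
    # Resolve two directory levels above the test script itself, so the
    # lookup result is the same no matter which directory launches the tests.
    # An os.getcwd()-based lookup would change with the launch directory.
    return os.path.dirname(os.path.dirname(os.path.realpath(script_path)))
```

Inside the test script this would be called with __file__, giving a stable 
root regardless of where pytest is invoked from.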

4. The tests prefix is missing from README.md L38.

5. I didn't have time yet to understand the ./start-build-env.sh changes. But 
it may be better to do this in a HADOOP jira to get more visibility. If I 
understood well, it's not directly required but a generic improvement to make 
it possible to run the ozone tests in the build docker containers. What do you 
think? 

6. Can you please help me understand why we need to set OZONE_HOME in 
hadoop-ozone/fault-injection-test/network-tests/pom.xml? The MAVEN_HOME seems 
to be enough.

7. The expected relationship between network-tests and dist is not clear. Can 
we add a provided dependency between them to make it explicit?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-21 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845089#comment-16845089
 ] 

Arpit Agarwal commented on HDDS-1458:
-

bq. HDDS-1563 opened for resolving TTY error.
Thanks [~eyang], I will take a crack at fixing it.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1680#comment-1680
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m  
4s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
4s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 37m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
2s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
8s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
8s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
40s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 40s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2700/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844363#comment-16844363
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2700/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844362#comment-16844362
 ] 

Eric Yang commented on HDDS-1458:
-

Patch 10 added a .gitignore file to exclude blockade cache and python bytecode 
files.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844357#comment-16844357
 ] 

Eric Yang commented on HDDS-1458:
-

[~arpitagarwal] HDDS-1563 opened for resolving TTY error.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844269#comment-16844269
 ] 

Arpit Agarwal commented on HDDS-1458:
-

The freon error is not seen when running tests with _{{python -m pytest -s 
blockade/...}}_

I'll file a separate issue to figure out the _the input device is not a TTY_ 
error. We should probably fix it before we proceed with this change, else we 
will not be able to run blockade tests any more.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844255#comment-16844255
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
22s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
3s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
10s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
10s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
25s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 40s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2699/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844195#comment-16844195
 ] 

Arpit Agarwal commented on HDDS-1458:
-

We probably need to figure out the freon issue first, though we can do it in a 
separate Jira.

I don't recall if this freon error is seen while running blockade tests using 
_python -m_. I will try it out.

A separate minor point - after the run I see numerous untracked files. We 
probably need to update _.gitignore_.
{code}
Untracked files:
  (use "git add ..." to include in what will be committed)

hadoop-ozone/fault-injection-test/network-tests/.blockade/

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/.cache/

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/__pycache__/

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/blockadeUtils/__init__.pyc

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/blockadeUtils/blockade.pyc

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/__init__.pyc

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/clusterUtils/cluster_utils.pyc

hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/conftest.pyc
{code}
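Based on the listing above, the corresponding _.gitignore_ additions might look like the sketch below (paths taken directly from the listing; whether to ignore whole directories or only compiled artifacts is a judgment call):
{code}
# Blockade state and Python bytecode generated by the fault injection tests
hadoop-ozone/fault-injection-test/network-tests/.blockade/
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/.cache/
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/__pycache__/
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/**/*.pyc
{code}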

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844117#comment-16844117
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2699/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842825#comment-16842825
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
29s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
4s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
8s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
9s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
16s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 33s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2698/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842817#comment-16842817
 ] 

Eric Yang commented on HDDS-1458:
-

[~arpitagarwal] Some of the commands run by freon expect input, but I 
see there is no pseudo-terminal set up for freon to call out to ozone commands.  
Some code in freon needs to be updated to set up a 
[pseudo-terminal|https://stackoverflow.com/questions/41542960/run-interactive-bash-with-popen-and-a-dedicated-tty-python].
  Can we have that improvement tracked by another issue?
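The pseudo-terminal mechanism can be sketched in Python as below. This is a minimal illustration only: the child command is a stand-in, not the actual freon/ozone invocation, and the real test harness integration is not shown.

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair. The child process receives the slave end
# as stdin, so isatty() checks and interactive prompts behave as if a real
# terminal were attached.
master_fd, slave_fd = pty.openpty()

# Stand-in child command: report whether stdin looks like a TTY.
proc = subprocess.run(
    ["python3", "-c", "import sys; print(sys.stdin.isatty())"],
    stdin=slave_fd,
    stdout=subprocess.PIPE,
)

os.close(slave_fd)
os.close(master_fd)

print(proc.stdout.decode().strip())  # prints: True
```

With a plain pipe as stdin instead of the pty slave, the same child would print False, which is the condition behind the "the input device is not a TTY" failures discussed here.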

Patch 8 made the following changes:
# Fixed the start-build-env.sh change to work properly on Mac OS X (found by [~jnp])
# Removed disk tests (requested by [~elek])




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842625#comment-16842625
 ] 

Arpit Agarwal commented on HDDS-1458:
-

I was able to get the tests to run with the v7 patch. However, most tests failed 
with the following error:
{code}
>   assert exit_code == 0, "freon run failed with output=[%s]" % output
E   AssertionError: freon run failed with output=[the input device is not a 
TTY]
E   assert 1 == 0
{code}
Not sure if freon has some assumption about stdin being a tty.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842598#comment-16842598
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2698/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842408#comment-16842408
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}I prefer to do it in one step. Better to add something when it can 
be tested to see how it works. If that's not possible now, because there are no 
tests yet, let's modify only the blockade part without the disk tests. I assume 
there will be a new jira which explains the plan for the disk-based 
fault-injection tests.{quote}

Ok, I'll remove the disk-tests submodules from this issue.  The disk-related 
tests are tracked as HDDS-1554.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-17 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842380#comment-16842380
 ] 

Elek, Marton commented on HDDS-1458:


bq. I will make the new tests in the tarball when tests are written

I prefer to do it in one step. Better to add something when it can be tested to 
see how it works. If that's not possible now, because there are no tests yet, 
let's modify only the blockade part without the disk tests. I assume there will 
be a new jira which explains the plan for the disk-based fault-injection tests.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841818#comment-16841818
 ] 

Eric Yang commented on HDDS-1458:
-

[~anu] {quote}Sorry, this makes no sense. If you want to simulate disk failures, 
it is easier with a container-based approach and it really does not matter where 
the containers run.{quote}

Some disk tests cannot be simulated without distributing IO to multiple disks. 
A single node will have all containers running from the same disk.  It is 
harder to simulate isolated fault injection against the scm disk when all 
containers share the same physical disk, IMHO, but the conversation is 
digressing from the layout of the current tool chain.

I will include the new tests in the tarball when the tests are written.  I don't 
have a way to do that in this patch because this patch only sets up the module 
layout.  In the follow-up patch, I will worry about the race condition between 
compiling the test code and packaging it into the tarball.  This means 
JUnit-based Java tests cannot be used.  I am less thrilled to look at test 
reports on a terminal console than through Jenkins, but the tests can be part of 
the tarball.  Maven and the Robot test framework can do exactly the same thing 
for keyword-based tests; I don't know why we chose to do both when Maven is 
already used daily.  I am not going to interrupt the plan and will make the 
tests support both Maven and the Robot framework.  Does this sound fair?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841790#comment-16841790
 ] 

Anu Engineer commented on HDDS-1458:


{quote}The current test does not spawn real cluster. The current approach has 
two limitations that tie the tests into a single node.
{quote}
It simulates a cluster by launching a set of containers. From Ozone's 
perspective, it is a cluster.
{quote}Blockade [does not support docker 
swarm|https://blockade.readthedocs.io/en/latest/install.html] to test real 
distributed cluster.
{quote}
Again, you misunderstood me; that is not what I am saying. I am saying that from 
Ozone's point of view, a set of containers looks and feels like a set of 
processes running on different machines.
{quote}This is the reason that I was not planning to expose the disk tests 
using the current design beyond development environment.{quote}
Sorry, this makes no sense. If you want to simulate disk failures, it is easier 
with a container-based approach and it really does not matter where the 
containers run. I would suggest that you make this run on a single machine 
first, allowing the current tool chain to work. Thanks.



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841788#comment-16841788
 ] 

Eric Yang commented on HDDS-1458:
-

[~anu] The current test does not spawn a real cluster.  The current approach has 
two limitations that tie the tests to a single node: 1. the ozone binary is not 
in the docker container image, and there is no mechanism to distribute the ozone 
binary to a real cluster; 2. Blockade [does not support docker 
swarm|https://blockade.readthedocs.io/en/latest/install.html] to test a real 
distributed cluster.  The current design can work on a single node with 
multiple containers.  We opened HDDS-1495 to address the docker image issue and 
ensure that we can distribute docker images to multiple nodes for the real test 
to happen.  This is the reason I was not planning to expose the disk tests 
beyond the development environment using the current design.  I will put the 
tests in the tarball when that content is written.

[~elek] {quote}Can you please update the patch with support this with the new 
tests as well?{quote}

The disk test content will come in a follow-up patch, and I will add the scripts 
to the tarball when the time comes.  Does this work for you?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841770#comment-16841770
 ] 

Anu Engineer commented on HDDS-1458:


While applauding your valiant efforts to reduce manual steps, I have a current 
workflow that works very well. I depend on the python and robot tests for 
end-to-end verification, which launch real clusters and try out all kinds of 
commands. This workflow is critical to me today. So unless the new way and the 
old way are identical in the tests they run, and have the ability to stay that 
way for the foreseeable future, let us have one consistent way. If I had to 
make a choice, I would choose what we have today.

It looks as if what we are asking is hard for you to do, and we are saying that 
after GA we might remove these tests, so asking you to do this extra work does 
not seem fair.

Do you think we should take a pause on this work for the time being and explore 
it later in the cycle?





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841742#comment-16841742
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m  
6s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
10s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
10s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
20s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 43s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2696/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841727#comment-16841727
 ] 

Elek, Marton commented on HDDS-1458:


We agreed to support both approaches. The old way is to include all the 
tests (including the new and future tests) in the distribution package and make 
it possible to run them from there (for the blockade tests, with Python alone, 
without Maven).

Can you please update the patch to support this for the new tests as well?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841615#comment-16841615
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2696/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841489#comment-16841489
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}Why are we removing this? We want to have all tests in the release 
package up until GA.
"${ROOT}/hadoop-ozone/dist/src/main/blockade". It would be nice to have the 
ability to run the blockade tests as we do today so users can use either of 
mechanism to run the tests.{quote}

The blockade tests are not removed.  They are moving to the fault-injection-test 
module to improve code organization and follow the Maven build life cycle, 
instead of dumping everything in the dist subdirectory.  The release tarball can 
still run the blockade tests, as in the Ozone 0.4.0 release, using:

{code}
cd $DIRECTORY_OF_OZONE
python -m pytest -s tests/blockade
{code}

This has been explained in [previous 
comments|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16831213=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16831213]
 and [these 
instructions|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16838941=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16838941]
 for running the integration tests directly from Maven to reduce manual steps.
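
To illustrate the kind of fault scenario the blockade suite drives, here is a 
hedged sketch. The helper and scenario names are hypothetical, not code from the 
Ozone test suite; the sketch only builds the blockade CLI invocations 
(`blockade up` / `partition` / `join`) that such a test would issue against the 
docker-compose cluster, without executing them.

```python
# Illustrative only: blockade_cmd and network_partition_scenario are
# made-up names. The blockade subcommands themselves (up, partition,
# join) are real blockade CLI verbs.

def blockade_cmd(subcommand, *args):
    """Build one blockade CLI invocation as an argv list."""
    return ["blockade", subcommand, *args]

def network_partition_scenario(groups):
    """Return the commands a simple partition test would run:
    start the cluster, split it into the given groups, then heal."""
    partitions = [",".join(g) for g in groups]
    return [
        blockade_cmd("up"),
        blockade_cmd("partition", *partitions),
        blockade_cmd("join"),
    ]

for cmd in network_partition_scenario([["om", "scm"], ["datanode_1"]]):
    print(" ".join(cmd))
# -> blockade up
#    blockade partition om,scm datanode_1
#    blockade join
```

A real test would run each argv with `subprocess`, then assert on cluster state 
between the partition and the heal.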




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841465#comment-16841465
 ] 

Anu Engineer commented on HDDS-1458:


Why are we removing this? We want to have all tests in the release package up 
until GA.
"${ROOT}/hadoop-ozone/dist/src/main/blockade". It would be nice to have the 
ability to run the blockade tests as we do today so users can use either of 
mechanism to run the tests.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16839564#comment-16839564
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}Can we support both the approaches?{quote}

Sure, but IMHO there is no value in running the integration tests from the release tarball.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-14 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16839321#comment-16839321
 ] 

Elek, Marton commented on HDDS-1458:


bq. The disk tests are only setting up structure to run inline in maven only. 

Can we support both the approaches?




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838951#comment-16838951
 ] 

Eric Yang commented on HDDS-1458:
-

Looks like Yetus does not understand the flags --findbugs-strict-precheck, 
--jenkins, and --skip-dir; this is not related to this patch.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838946#comment-16838946
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
42s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  9s{color} 
| {color:red} Unprocessed flag(s): --findbugs-strict-precheck --jenkins 
--skip-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2687/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968613/HDDS-1458.007.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2687/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.11.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838945#comment-16838945
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2687/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838943#comment-16838943
 ] 

Eric Yang commented on HDDS-1458:
-

Rebased to current trunk.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838942#comment-16838942
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDDS-1458 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968612/HDDS-1458.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2686/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.11.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838941#comment-16838941
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] [~anu] Patch 006 will place the blockade tests in 
$OZONE_HOME/tests/blockade.  Developers can run the tests from Maven using:

{code}mvn clean verify -Pit{code}

The released tarball can also run them, similar to the existing instructions:

{code}python -m pytest -s tests/blockade/{code}

The disk tests currently only set up the structure to run inline in Maven.  The 
actual disk tests will be developed in another issue.  We can decide whether the 
disk tests should be part of the release tarball when that time comes.
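
One practical consequence of supporting both entry points is that the suite has 
to locate the Ozone installation whether Maven launched it or a user ran pytest 
from the tarball. The following is a hypothetical sketch of such a resolver 
(not code from the patch); the OZONE_HOME variable name and the directory 
arithmetic are assumptions for illustration.

```python
# Hypothetical helper: resolve the Ozone root for both run modes.
import os

def resolve_ozone_home(env, test_dir):
    """Prefer an explicit OZONE_HOME (e.g. exported by the Maven "it"
    profile); otherwise assume we run from tests/blockade inside the
    release tarball, so the root is two levels up."""
    if "OZONE_HOME" in env:
        return env["OZONE_HOME"]
    return os.path.abspath(os.path.join(test_dir, os.pardir, os.pardir))

print(resolve_ozone_home({"OZONE_HOME": "/opt/ozone"}, "/ignored"))
# -> /opt/ozone
print(resolve_ozone_home({}, "/opt/ozone/tests/blockade"))
# -> /opt/ozone
```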




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838937#comment-16838937
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDDS-1458 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968611/HDDS-1458.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2685/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.11.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838452#comment-16838452
 ] 

Elek, Marton commented on HDDS-1458:


{quote}Other smoketests based on docker-compose are also not easy to get 
running with docker swarm for distributed tests, because the Docker 
documentation advises to [keep the binary in the docker image and remove the 
volume requirement|https://docs.docker.com/compose/production/] so that 
production docker-compose works with docker swarm. Hence, carrying any of the 
integration tests in the release tarball would not achieve the discussed end 
goals from my point of view. I am in favor of deferring the decision to carry 
tests in the release binary tarball, to prevent us from running in circular 
discussions. For the moment, having a long path to start a single-node test is 
inconvenient for developers, but we are not leading general users to a dead end.
{quote}
I would prefer to move this discussion out as well, as I don't think it's 
strongly related (we agreed to keep support for the current structure):
 # The key word in the link is "production". And I agree: keep the binary in 
the docker image for production and use hadoop-runner for development. Running 
tests during the vote is not a production use case.
 # BTW, this is exactly the pattern we follow: hadoop-runner is used for dev 
(which provides faster and more consistent builds), and we provide the 
apache/ozone container and dev containers (k8s-dev profile) for real 
distributed clusters.
 # The definition of the smoke tests is split:
 ## The robot framework defines small steps to execute commands and assert the 
results.
 ## Bash scripts (test.sh) define where the robot framework tests should be 
executed.
 # In Kubernetes, the same robot tests can be executed with a different test.sh 
(if the robot files are included in the distribution).
 # Agreed, the blockade tests can't be executed in Kubernetes (chaos tests 
should be implemented there), but I would prefer to keep them to make it easy 
to execute them in any single-node environment.
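
The robot-files-vs-launcher split can be sketched as follows. This is an 
illustrative stand-in for test.sh, not the actual script: the environment and 
suite names are made up, while `--variable` and `--outputdir` are real Robot 
Framework CLI options.

```python
# Illustrative sketch of the launcher side of the split: the robot
# files stay environment-independent, and a launcher crosses them
# with each docker-compose (or k8s) environment.

def robot_invocation(compose_env, suite):
    """Build the robot CLI call a launcher would issue for one env."""
    return [
        "robot",
        "--variable", f"ENV:{compose_env}",
        "--outputdir", f"result/{compose_env}",
        f"smoketest/{suite}.robot",
    ]

def plan(environments, suites):
    """Cross every environment with every suite, as test.sh loops do."""
    return [robot_invocation(e, s) for e in environments for s in suites]

for cmd in plan(["ozone", "ozone-hdfs"], ["basic"]):
    print(" ".join(cmd))
# -> robot --variable ENV:ozone --outputdir result/ozone smoketest/basic.robot
#    robot --variable ENV:ozone-hdfs --outputdir result/ozone-hdfs smoketest/basic.robot
```

Swapping the launcher (a different test.sh, or a kubectl-based one) reuses the 
same `.robot` suites unchanged, which is the point of the split.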




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-13 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838450#comment-16838450
 ] 

Elek, Marton commented on HDDS-1458:


I created a Jira for the question of the directory structure: HDDS-1521

Let's move the discussion there and use the suggested one here. 




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837902#comment-16837902
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}Let us drop the share and make it tests/blockade, and tests/smoketests 
etc. That way, all tests that we ship can be found easily. Otherwise I am +1 on 
this change.{quote}

It would be better to have an ozone command that wraps python -m pytest 
-s share/ozone/tests/blockade and can be started from any directory, rather 
than asking the user to type the exact location and commands.  [~elek]'s goal 
of making test functionality part of the release, so that system admins can 
validate their environment, is not aligned with the current tests' 
capabilities.  Blockade tests are single-node integration tests because 
Blockade does not support [docker 
swarm|https://blockade.readthedocs.io/en/latest/install.html].  The other 
smoke tests based on docker-compose are also not easy to run distributed with 
Docker Swarm, because the Docker documentation advises keeping binaries in the 
docker image and [removing the Volume 
requirement|https://docs.docker.com/compose/production/] for a production 
docker-compose file to work with Docker Swarm.  Hence, carrying any of the 
integration tests in the release tarball would not achieve the discussed end 
goals, from my point of view.  I am in favor of deferring the decision to 
carry tests in the release binary tarball, to prevent circular discussions.  
For the moment, the long path to start a single-node test is inconvenient for 
developers, but we are not leading general users to a dead end.
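As a sketch of what such a wrapper could look like (the subcommand behavior and
the install-root resolution here are illustrative assumptions, not part of any
patch on this issue):

```shell
#!/bin/sh
# Hypothetical "ozone test-blockade" wrapper: resolve the shipped test
# directory relative to the install root, so the user does not need to know
# the exact share/ path. The share/ozone/tests layout is the one proposed
# in this thread.
OZONE_HOME="${OZONE_HOME:-$(cd "$(dirname "$0")/.." && pwd)}"
BLOCKADE_TESTS="$OZONE_HOME/share/ozone/tests/blockade"
# A real wrapper would exec pytest here; echoed as a sketch.
echo "python -m pytest -s $BLOCKADE_TESTS"
```

Run from anywhere, it always resolves the same test directory under the
install root.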

{quote}I think It's time to revisit the current structure of ozone distribution 
tar files. The original structure is inherited from HADOOP-6255 which is 
defined for a project which includes multiple subproject (hadoop, yarn,...) and 
should support the creation of different RPM package from the same tar. I think 
for ozone we can improve the usability with reorganizing the dirs to a better 
structure.{quote}

The reason for HADOOP-6255 was more than just RPM packaging.  The motivation 
behind the reorganization was to create a directory structure that works as a 
standalone tarball while following the general guideline of the [Filesystem 
Hierarchy 
Standard|https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard].  This 
was proposed because several people saw the need to make the structure more 
Unix-like, so dependencies could be shared and the larger ecosystem could work 
together.  This is the reason that Hadoop HDFS and MapReduce have good 
integration between projects and a reduced shell script bootstrap cost.  
Earlier versions of YARN did not follow this conventional wisdom and were hard 
to integrate with the rest of Hadoop; the YARN community struggled with 
classpath issues for at least two years, and by then the time to hype the YARN 
framework had already passed.  Given the high probability that we want to make 
Ozone as universal as possible for applications to integrate with, we have 
even more incentive to make the structure as flexible as possible.  This is 
only advice from my own past scars.  There is no perfect solution, but 
conventional wisdom usually endures the test of time and saves energy.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-11 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837775#comment-16837775
 ] 

Elek, Marton commented on HDDS-1458:


bq. Let us drop the share and make it tests/blockade, and tests/smoketests etc. 
That way, all tests that we ship can be found easily. Otherwise I am +1 on this 
change.

I agree with Anu. I think it's time to revisit the current structure of the 
ozone distribution tar files. The original structure is inherited from 
HADOOP-6255, which was defined for a project that includes multiple 
subprojects (hadoop, yarn, ...) and had to support the creation of different 
RPM packages from the same tar. I think for ozone we can improve usability by 
reorganizing the dirs into a better structure.

But I am fine with doing it in a separate jira and keeping the tests in 
share/test until then, to make progress.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837746#comment-16837746
 ] 

Anu Engineer commented on HDDS-1458:


bq. python -m pytest -s  share/ozone/tests/blockade

Let us drop the share and make it tests/blockade, and tests/smoketests etc. 
That way, all tests that we ship can be found easily. Otherwise I am +1 on this 
change.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837709#comment-16837709
 ] 

Eric Yang commented on HDDS-1458:
-

[~anu], [~jnp], [~elek] and I were in a meeting and discovered that the 
documentation did not reflect the current code base correctly, with HDDS-1513, 
HDDS-1518, HDDS-1519, and HDDS-1520 opened.  We agreed that the current layout 
of the ozone tarball needs to be more compatible with the Hadoop directory 
layout to prevent developers from writing inflexible code.  The proposal is to 
generate the test artifacts for the fault injection test framework in 
ozone-[version]/share/ozone/tests in the release binary tarball.

Users will be able to run:
{code}
cd $DIRECTORY_OF_OZONE
python -m pytest -s share/ozone/tests/blockade/
{code}

From a source tarball, users can run the following from the top level of the 
ozone source code or inside the fault-injection-test directory:

{code}
mvn clean verify -Pit
{code}

This issue is not going to address the limitation of the hard-coded uid/gid in 
the hadoop-runner image, so some developers may not be able to run blockade 
tests until that issue is addressed in HDDS-1513.  Please comment if I didn't 
capture this correctly.  Thanks




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-10 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837141#comment-16837141
 ] 

Elek, Marton commented on HDDS-1458:


bq.  We must have better separation between development/test artifacts and 
production binary image. In some enterprise, it is forbidden to have gcc or 
maven build system in production environment for security reasons. The current 
arrangement provides better organization from enterprise point of view.

I have a detailed opinion about this topic, but please don't open a new 
thread. We agreed to use multiple methods for container creation during the 
build, and we also agreed that _all_ the tests should be executable in both 
ways. This is not yet part of the patch. Please keep the current usage 
pattern.






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835881#comment-16835881
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}2. However you removed the blockade test from the release 
package. Can you please keep this functionality (to execute tests from 
distribution tar) as of now? And we agreed to support the execution of the 
tests in both way, I would suggest to put all the new tests to the tar file as 
well. Maybe we can create a new subdirectory for all the tests in the 
tar.{quote}

Blockade tests spawn the docker cluster by themselves, using maven as the 
mechanism to start the bootstrap process.  As long as the smoke test 
functionality is in the released source tarball, the feature can test with a 
locally built image or a released docker hub image, as intended.  We must have 
a better separation between development/test artifacts and the production 
binary image.  In some enterprises it is forbidden, for security reasons, to 
have gcc or a maven build system in a production environment.  The current 
arrangement provides better organization from an enterprise point of view.

{quote}I think it will simplify the build process as you don't need the 
maintain the dev compose files as they are already added to the distribution 
under the compose folder (even better, we can execute the fault injection tests 
in any of the provided environments: secure/not-secure, etc.){quote}

Patch 004 maintains two sets of compose files because the pytest test cases do 
not know where to retrieve the ozone binary in a dev environment unless a 
local relative path references the ozone binaries.  The relative location of 
the ozone binary differs between hadoop-ozone/dist/src/main/compose and the 
fault-injection project.  If the docker image is self-contained, then we don't 
need to maintain two separate sets of docker-compose files; for the dev image, 
the relative path to ozone changes between submodule projects.  This is the 
reason the fault-injection-test project carries two sets of docker-compose 
files.  If you agree that fault-injection-test should only test with a 
self-contained docker image, then I will remove the dev version of the 
docker-compose files.
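To illustrate the difference being discussed, a minimal sketch of the two 
compose variants (service names, image tags, and paths are illustrative 
assumptions, not the actual files in the patch):

```yaml
services:
  # Self-contained variant: the image carries the Ozone binaries, so the
  # compose file is independent of the build-tree layout.
  datanode:
    image: apache/ozone:0.5.0-SNAPSHOT
    command: ["ozone", "datanode"]

  # Dev variant: hadoop-runner mounts the locally built distribution; the
  # relative path below changes depending on which submodule launches it,
  # which is why two sets of compose files are currently needed.
  datanode-dev:
    image: apache/hadoop-runner
    volumes:
      - ../../hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT:/opt/hadoop
    command: ["/opt/hadoop/bin/ozone", "datanode"]
```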

{quote}3. The docker-compose file of 
"fault-injection-test/disk-tests/read-only-test" is not tested IMHO. The conf 
directory should be moved out to a rw area as the configuration are generated 
runtime based on the environment variables (I am fine to fix ii in a follow-up 
jira){quote}

Are you using -Pit to trigger the profile?  I agree that the runtime 
configuration and the java code to exercise the ozone cluster are missing.  
They are not part of the current issue because HDDS-1495 is not settled.  I 
agree this area needs work in a follow-up issue, and getting HDDS-1495 settled 
is our top priority.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-08 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835380#comment-16835380
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the v004 patch [~eyang].

Some comments:

1. I like the separated projects of the different fault-injection tests.

2. However, you removed the blockade test from the release package. Can you 
please keep this functionality (to execute tests from the distribution tar) 
for now? Since we agreed to support executing the tests in both ways, I would 
suggest putting all the new tests in the tar file as well. Maybe we can create 
a new subdirectory for all the tests in the tar.

I think it will simplify the build process, as you don't need to maintain the 
dev -compose files as they are already added to the distribution under the 
compose folder (even better, we can execute the fault injection tests in any of 
the provided environments: secure/not-secure, etc.)-

3. The docker-compose file of "fault-injection-test/disk-tests/read-only-test" 
is not tested, IMHO. The conf directory should be moved out to a rw area, as 
the configuration is generated at runtime based on the environment variables 
(I am fine with fixing it in a follow-up jira)




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-08 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835357#comment-16835357
 ] 

Elek, Marton commented on HDDS-1458:


{quote}I don't see your patch in this issue. Where did you upload the patch?
{quote}
Sorry, it may be lost. I created a PR to separate it from the patches.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-08 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835355#comment-16835355
 ] 

Elek, Marton commented on HDDS-1458:


{quote}From today's Ozone meeting, we agree on making sure that fault 
injection test can work with docker image that carry ozone binaries, and docker 
image that mount ozone binaries. This ensure that the current work flow is not 
disrupted. I will update patch to reflect this agreement.
{quote}
Let me summarize it with my words:

We discussed the two main approaches which are suggested in this thread. What 
we agreed (IMHO) is the following:
 # We don't modify the existing approach to use docker containers right now (we 
use volume mounts, we release docker images from a separate branch, we include 
the smoketests, etc.)
 # We implement the other approach side-by-side with the original approach. 
There will be an option to choose between the approaches.
 # After a while we can evaluate the solutions based on the usage patterns, and 
we may or may not change from one to another.
 # All the tests should be executable with both methods, including the 
blockade/smoketest/fault injection tests.

Please correct me if I don't remember it well.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834347#comment-16834347
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
2s{color} | {color:blue} markdownlint was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
2s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
29s{color} | {color:green} The patch generated 0 new + 768 unchanged - 7 fixed 
= 768 total (was 775) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m  
6s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 51s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2674/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967990/HDDS-1458.004.patch |
| Optional Tests | dupname asflicense hadolint shellcheck 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834291#comment-16834291
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2674/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834263#comment-16834263
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] Patch 004 will allow both types of docker containers to work.  The 
prerequisite for the development docker image is that the dist project is built 
with the target/ozone-${project.version} directory available for mounting.  

Command line usage for development test (with hadoop-runner image):
{code}
mvn clean verify -Pit
{code}

Command line usage for binary distribution test (with 
apache/ozone-${project.version} image):
{code}
mvn clean verify -Pit,dist
{code}

Does this arrangement work for you?
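For readers following along, the it profile discussed here could be wired into a pom roughly as follows. This is a hypothetical sketch: the plugin coordinates, phase bindings, and compose arguments are illustrative, not taken from the actual patch.

```xml
<!-- Hypothetical sketch of an "it" integration-test profile.
     Plugin versions, phases, and arguments are illustrative only. -->
<profile>
  <id>it</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <!-- Bring the docker-compose cluster up before the tests run -->
            <id>start-cluster</id>
            <phase>pre-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>up</argument>
                <argument>-d</argument>
              </arguments>
            </configuration>
          </execution>
          <execution>
            <!-- Tear the cluster down after the tests, pass or fail -->
            <id>stop-cluster</id>
            <phase>post-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>down</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Binding the start and stop executions to the pre- and post-integration-test phases is what lets a plain `mvn clean verify -Pit` manage the cluster lifecycle around the blockade tests.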

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834164#comment-16834164
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}Not exactly, I am talking about a different thing. Let's imagine that you 
start two builds in parallel. How do you know which image is used to execute the 
tests? You can't be sure. There is no direct relationship between the 
checked-out source code, the docker image which is created, and the compose 
files. (We need build-specific tag names for that, which are saved somewhere.){quote}

This can be done by using:

{code}
mvn -Dversion=[customized_version]
{code}

This property will be materialized via the maven resources plugin into the 
Dockerfile with the customized version.  Hence, each build generates a unique 
version number that is identical across the jar, tar, and docker image.  The 
post-build step will need a cleanup script to delete old Docker images.  I think 
this is already done in most projects on Jenkins.
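A minimal sketch of how such a per-build version could be derived and passed through; the tag format, image name, and cleanup command are assumptions for illustration, not part of the patch (the mvn/docker invocations are echoed rather than executed so the sketch runs anywhere):

```shell
# Hypothetical sketch: give every build a unique version so parallel
# builds never collide on a shared image tag.
# Tag combines timestamp and PID; format is illustrative only.
BUILD_VERSION="0.5.0-build-$(date +%s)-$$"

# The unique version flows into the jar, tar, and docker image alike.
echo "mvn -f pom.ozone.xml clean verify -Pit -Dversion=${BUILD_VERSION}"

# Post-build cleanup hook: remove the per-build image again.
echo "docker image rm ${USER}/ozone:${BUILD_VERSION}"
```

Because the tag embeds both the timestamp and the process ID, two builds started on the same machine at the same second still get distinct image names.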

{quote}BTW, the original question (how the blockade/fault injection tests can 
be executed as part of the maven build) can be solved easily (IMHO). I uploaded 
the patch to demonstrate it.{quote}

I don't see your patch in this issue.  Where did you upload the patch?

From today's Ozone meeting, we agreed on making sure that the fault injection 
test can work both with docker images that carry ozone binaries and docker 
images that mount ozone binaries.  This ensures that the current workflow is not 
disrupted.  I will update the patch to reflect this agreement.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833630#comment-16833630
 ] 

Elek, Marton commented on HDDS-1458:


BTW, the original question (how the blockade/fault injection tests can be 
executed as part of the maven build) can be solved easily (IMHO). I uploaded 
the patch to demonstrate it.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-06 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833629#comment-16833629
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the answers [~eyang].

I think our views are closer. At least we can agree on some technical 
properties. The big difference (IMHO) is that we have different views about the 
importance of some issues/use cases. (The proposed solution introduces heavy 
limitations on the current usage patterns.)

The other difference is that I can see multiple, different types of container 
usage. Correct me if I am wrong, but as far as I understood, you would like to 
use containers the same way everywhere. I will try to explain this at the end 
of this comment.


bq. The docker process adds 32 seconds. It is 26% increase in build time, 

Yes, we agree that it's slower.

Without SSD it's even slower.

Yes, I think it's a problem; this is the reason why I put it under the "Cons" 
list.

No, we can't skip docker image creation, as pseudo-cluster creation should be 
available all of the time (as it is with the current solution). I think BOTH 
the unit tests AND the integration tests should be checked ALL the time.

SUMMARY:

 * What we agree on: the build is significantly slower
 * What we don't agree on: I think we need to keep it simple and fast to execute 
the smoketest, ALL the time. This is not supported by the proposed solution.

bq. 2. It's harder to test patches locally. Reproducibility is decreased. 
(Instead of the final build, a local container is used.)

bq. Not true. Docker can use a local container or an official image by supplying 
the -DskipDocker flag, and let the docker compose yaml file decide whether to 
use a local image or the official released binary for the fault-injection-test 
maven project. If you want to apply a patch to the official released docker 
image, then you are already replacing binaries in the docker image; it is no 
longer the official released docker image. Therefore, what is wrong with using a 
local image that gives you exactly the same version of everything that is 
described in the Dockerfile of the official image?

Not exactly, I am talking about a different thing. Let's imagine that you start 
two builds in parallel. How do you know which image is used to execute the 
tests? You can't be sure. There is no direct relationship between the 
checked-out source code, the docker image which is created, and the compose 
files. (We need build-specific tag names for that, which are saved somewhere.)

By mounting the volumes (as we do now) we can be sure, and you can execute 
multiple smoketests in parallel.
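The volume-mounting pattern under discussion looks roughly like the following compose fragment; this is a sketch, and the service name, image, and paths are illustrative rather than copied from the actual compose files:

```yaml
# Sketch of a bind-mounted pseudo-cluster service: the image stays generic
# while the locally built distribution is mounted in, so each checkout runs
# its own binaries even when the image itself is shared between builds.
services:
  datanode:
    image: apache/hadoop-runner
    volumes:
      - ../../ozone-0.5.0-SNAPSHOT:/opt/hadoop   # locally built dist folder
    command: ["/opt/hadoop/bin/ozone", "datanode"]
```

Since the bind mount is resolved relative to each checkout's compose file, two source trees can run smoketests side by side without one build's image overwriting the other's binaries.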

 SUMMARY:

 * I think we are talking about different things
 * I think parallel execution is broken by the proposed solution


bq. 3. It's harder to test a release package from the smoketest directory.

bq. The smoketest can be converted to another submodule of ozone. The same 
-DskipDocker flag can run the smoketest with an official build without using a 
local image. I will spend some time on this part of the project to make sure 
that I don't break anything.

Please don't do it. As I wrote, I would like to keep the possibility to execute 
the smoketests WITHOUT a build. I think this is a very useful feature to:

 1. Test the convenience binary package during the vote
 2. Smoke-test different installs (e.g. a kubernetes install)

Please check my last vote. I executed the smoketest for both the src package 
and the bin package to be sure that both are good.

SUMMARY:

 * You think it's enough to execute the smoketest during the build
 * I think it's very important to make it possible to run smoketests without a 
build from any install (convenience binary, kubernetes, on-prem install, etc.)

bq. Security fixes can be applied later to the docker containers.

bq. The correct design is to swap out the docker container completely with a new 
version. The patch and upgrade strategy should not be in-place modification of a 
docker container that grows over time. The overlay file system would need to be 
committed to retain state. Running a container with in-place binary replacement 
can leave the docker container in an inconsistent state when a power failure 
happens.

Again, I am not sure that we are talking about the same problem. I would like to 
"swap out" the old docker images completely. And because the first layers are 
updated, it won't "grow" over time.

But let's make the conversation clearer: let's talk about tags. If I understood 
correctly, in case of a security issue you would like to drop (?) all the old 
images (e.g. hadoop:2.9.0, hadoop:3.1.0, hadoop:3.2.0, hadoop:3.2.1) and create 
a new one (hadoop:3.2.2). 

First of all, I don't know how you would like to drop the images (as of now you 
need to create an INFRA ticket for that). But there could be a lot of users of 
the old images. What would you do with them? By dropping old images you would 
break a lot of users (it's exactly the same as _deleting_ all the 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833024#comment-16833024
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}1. The builds are slower; we would have a huge number of 
unnecessary io operations.{quote}

Without running unit tests, the Ozone maven build takes about 2 minutes on my 
laptop with VirtualBox.  The docker process adds 32 seconds.  That is a 26% 
increase in build time, but users can pass the -DskipDocker flag to get exactly 
the same 2-minute build.  I don't think this is a deal breaker when the docker 
build is useful for integration tests rather than unit tests.  Docker images 
don't need to be built as frequently if the code can be unit tested.

{quote}2. It's harder to test patches locally. Reproducibility is decreased. 
(Instead of the final build, a local container is used.){quote}

Not true.  Docker can use a local container or an official image by supplying 
the -DskipDocker flag, and let the docker compose yaml file decide whether to 
use a local image or the official released binary for the fault-injection-test 
maven project.  If you want to apply a patch to the official released docker 
image, then you are already replacing binaries in the docker image; it is no 
longer the official released docker image.  Therefore, what is wrong with using 
a local image that gives you exactly the same version of everything that is 
described in the Dockerfile of the official image?

{quote}3. It's harder to test a release package from the smoketest 
directory.{quote}

The smoketest can be converted to another submodule of ozone.  The same 
-DskipDocker flag can run the smoketest with an official build without using a 
local image.  I will spend some time on this part of the project to make sure 
that I don't break anything.

{quote}4. Security fixes can be applied later to the docker containers.{quote}

The correct design is to swap out the docker container completely with a new 
version.  The patch and upgrade strategy should not be in-place modification of 
a docker container that grows over time.  The overlay file system would need to 
be committed to retain state.  Running a container with in-place binary 
replacement can leave the docker container in an inconsistent state when a power 
failure happens.

{quote}5. It conflicts when more than one build is executed on the same machine 
(docker images are shared but volume mounts are separated). (Even just this one 
is a blocker problem for me){quote}

Does this mean you have multiple source trees running builds?  Why not use 
multiple start-build-env.sh instances on parallel source trees?  I think it can 
provide the same multi-build process, but I need to test it out.

{quote}6. Some tests can't be executed from the final tarball. (I would like to 
execute tests from the released binary, as discussed earlier in the original 
jira).{quote}

Same answer as before: use the -DskipDocker flag, or mvn install the final 
tarball and run the docker build.  Maybe I am missing something; please clarify.

I have a hard time with the Ozone stable _environment_.  Hadoop-runner is a 
read-only file system without symlink support, and all binaries and data are 
mounted from an external volume.  What benefit does this provide?  Everything 
interacts with the world outside of the container environment.  What containment 
does this style of docker image provide?  I only see process namespace isolation 
and network interface simulation.  But it lacks the ability to make the docker 
container idempotent and reproducible elsewhere.  This can create problems where 
code only works on one node but cannot be reproduced elsewhere.  I often see 
distributed software being developed on one laptop that then has trouble running 
in a cluster of nodes.  The root cause is that the mounted volume is shared 
between containers, and developers often forget this and write code that does 
local IO access.  When it goes to QA, nothing works on distributed nodes.  It 
would be good to prevent ourselves from making this type of mistake by not 
sharing the same mount points between containers as a baseline.  I know that you 
may not have this problem, given a deeper understanding of distributed file 
systems.  However, it is a common mistake among junior system developers.  
Application programmers may use shared volumes to exchange data between 
containers, but not a team that is supposed to build a distributed file system.  
I would like to prevent this type of silliness from happening by making sure 
that ozone processes don't exchange data via shared volumes.  

I haven't seen a project on Dockerhub that uses the same technique as option 2.  
Can you show me some examples of similar projects?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-04 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833006#comment-16833006
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much [~eyang] for repeating the same arguments in more detail. 
Unfortunately, I don't feel that my concerns are addressed. Let me quickly 
summarize the pro and con arguments which have been discussed until now, to make 
it clear why I think so. (And please correct me if I am wrong.)

First of all, this patch is not about "creating a maven profile to run fault 
injection tests". It's mainly about modifying the build process to use inline 
docker image creation instead of the current solution. 

Pros:

 1. The compose folder of the distribution can be copied to another machine 
_without_ copying the distribution folder.
 2. Another argument is that this approach is more aligned with the original 
spirit/intention of containerization.

Cons:

 1. The builds are slower; we would have a huge number of unnecessary io 
operations.
 2. It's harder to test patches locally. Reproducibility is decreased. (Instead 
of the final build, a local container is used.)
 3. It's harder to test a release package from the smoketest directory.
 4. Security fixes can be applied later to the docker containers.
 5. It conflicts when more than one build is executed on the same machine 
(docker images are shared but volume mounts are separated). (Even just this one 
is a blocker problem for me)
 6. Some tests can't be executed from the final tarball. (I would like to 
execute tests from the released binary, as discussed earlier in the original 
jira.) 

I wrote it down to help explain why I can't see any benefit in doing it the 
other way, and why I feel it's a dangerous step. And let me clarify again: it's 
not against executing the fault injection as part of the build. I very much like 
your idea. I am against replacing the current way of using the smoketest/compose 
files without achieving consensus.

And now let me answer your points in more detail:
 
bq. No one disputed using the dist profile, and I filed YARN-9523 to correct 
that mistake

I have a different view. I have concerns about using docker in this way (even in 
the dist profile). It could be my fault, but I don't feel that my questions were 
addressed on the discussion mailing list. (See my last mail without answers: 
https://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201903.mbox/%3C5bfeb864-3f26-1ccc-3300-2680e1b94f34%40apache.org%3E)

 bq. Dev can be tested with tarball. There is no need to involve docker until 
the finished goods are ready for transport

Thank you very much for writing it down, because it helped me to understand the 
fundamental differences between our views. I think it's not true at all. During 
ozone development we use docker-compose to start a pseudo cluster every hour. 
docker-compose based clusters are used to test patches, test local changes, etc. 
I think this is the most important thing: this is more like a dev tool, and a 
tool to try out ozone from the dist folder, than a production tool.

May I ask you to try out this kind of development workflow to understand my 
view? Just try to test all the reviewed patches in pseudo clusters.

bq. Dev can be tested with tarball.

Let me ask again, because I may have misunderstood your points. What is your 
suggested way to test a security patch in a pseudo-cluster?

bq. Docker image is used as a configuration file transport mechanism. I think 
it's convoluted process. There are more efficient way to transport config files 
IMHO. Those instructions requires to run docker rm command to destroy the 
instances as well. 

I understand your concerns, but why is it better to use scp to copy 
configuration from one location to another? The docker image is NOT created here 
to be used as a configuration file transport mechanism. It's an optional feature 
to provide a _default_ set of configuration. After the first run you can copy 
the docker-compose file exactly the same way as you wished earlier.

bq. Wouldn't it be better that we just give UX team yaml file, and let their 
docker-compose fetch self contained docker images from dockerhub without 
downloading the tarball?  

I am not sure I understand. What is the UX team? Maybe I used the wrong words 
(sorry for that). When I wrote UX, I was thinking about the developer user 
experience. 

bq. It seems that we are using the tools in most inappropriate way of it's 
design that created more problems.

Could you please explain the "created problems" in more detail? To be honest, 
for me everything just works. I recommend defining the problems first and trying 
to find answers for them together. For example, if the problem is running the 
blockade tests during the build, it can be addressed in a very easy way by 
executing the blockade tests after the distribution assembly in the 
hadoop-ozone/dist project.


[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832976#comment-16832976
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
2s{color} | {color:blue} markdownlint was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
30s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
37s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  8s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2671/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967822/HDDS-1458.003.patch |
| Optional Tests | dupname asflicense hadolint shellcheck shelldocs mvnsite 
markdownlint compile 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832919#comment-16832919
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2671/console in case of 
problems.


> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832918#comment-16832918
 ] 

Eric Yang commented on HDDS-1458:
-

Patch 003 moved the docker image build logic to HADOOP-16091 and rebased onto 
the current trunk.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832060#comment-16832060
 ] 

Eric Yang commented on HDDS-1458:
-

[~ebadger] On the mailing thread, [~ste...@apache.org] suggested using the 
"dist" profile.  Everyone agreed on an optional profile, hence I created the 
docker profile.  My mistake was not using the dist profile.  No one disputed 
using the dist profile, and I filed YARN-9523 to correct that mistake.  I think 
reiterating to make the docker profile the dist profile is a good correction 
toward the agreed end goal.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via the maven-exec-plugin and run blockade to 
> simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}






[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832052#comment-16832052
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}starting docker based pseudo cluster from a (released or dev) 
distribution. In this case the mount is not a problem. I think here we should 
use mounting to ensure we have exactly the same bits inside and outside. I 
can't see any problem here.{quote}

Dev builds can be tested with the tarball.  There is no need to involve docker 
until the finished goods are ready for transport.  Mounting binaries from 
outside the container makes it harder to transport the goods, which defeats the 
purpose of using container technology in the first place.

{quote}The second use case is to provide independent, portable docker-compose 
files. We have this one even now:{quote}

Here the docker image is used as a transport mechanism for configuration files. 
I think that is a convoluted process; there are more efficient ways to 
transport config files, IMHO.  Those instructions also require running the 
docker rm command to destroy the instances afterwards.

{quote}5. The release tar file also contains the compose directory. I think 
it's a very important part. With mounting the distribution package from the 
docker-compose files we can provide the easiest UX to start a pseudo cluster 
without any additional image creation.{quote}

Wouldn't it be better to just give the UX team a yaml file and let their 
docker-compose fetch self-contained docker images from Docker Hub, without 
downloading the tarball?  It seems we are using the tools in a way most 
inappropriate to their design, which creates more problems.  There is no 
guarantee UX will get a successful run, because tarballs and container images 
may change over time.  There is no CRC check or transaction-versioning 
mechanism to ensure a downloaded tarball works with version x of the docker 
image.  I am quite puzzled by the chosen route of mounting binaries; it is like 
ordering a shipping container for a move between houses, putting a key inside 
the container, and leaving all the furniture outside.  It works, but it is 
completely impractical.
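One way to address the version-drift concern above is to pin the compose file to an immutable image digest instead of a floating tag, so repeated runs always fetch the same bits.  A minimal sketch (the digest value below is a placeholder, not a real Ozone image digest):

{code}
version: "3"
services:
   ozone:
      # Pinned by content digest rather than a mutable tag,
      # so the image cannot change between runs
      image: apache/ozone@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
{code}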




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831742#comment-16831742
 ] 

Eric Badger commented on HDDS-1458:
---

bq. Jonathan Eagles Eric Badger, please speak up if you have concerns to rename 
docker profile to dist profile.

As I commented in YARN-7129, I am against adding mandatory Docker image builds 
to the default Hadoop build process. The community came to the same consensus 
via [this mailing list 
thread|https://lists.apache.org/thread.html/c63f404bc44f8f249cbc98ee3f6633384900d07e2308008fe4620150@%3Ccommon-dev.hadoop.apache.org%3E].

However, I am not an HDDS developer and do not have proper insight into HDDS 
development, so I can only give my thoughts on this from a YARN perspective. 
Maybe this is a great idea for HDDS, maybe it's not; since I don't know 
anything about HDDS, I can't really give you an opinion. But I think it 
definitely warrants getting more eyes and reviews on this from the HDDS 
community.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831740#comment-16831740
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much for giving me more details.

I have a slightly different view of this problem. If I understood well, there 
is no technical limitation as of now; it's more a semantic question of what 
good usage of docker is, or what is well designed. To be honest, it's hard for 
me to judge. I think there is more than one way to do it well.

For example, I can see two different use cases:

 1. Starting a docker based pseudo cluster from a (released or dev) 
distribution. In this case the mount is not a problem. I think here we should 
use mounting to ensure we have exactly the same bits inside and outside. I 
can't see any problem here.

 2. The second use case is to provide independent, portable docker-compose 
files. We have this one even now:

{code}
docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
docker run apache/ozone cat docker-config > docker-config
docker-compose up -d
{code}

3. I have slightly different experience with docker image creation. I know how 
the layering works, but the current structure of the project (especially the 
creation of two 100Mb shaded jar files) can't be handled very well by it. We 
need new layers anyway, even if we changed only one class file in the project.

4. I didn't check the source of the maven docker plugin, but based on the 
output it does some additional copying. I need to check.

5. The release tar file also contains the compose directory. I think it's a 
very important part. By mounting the distribution package from the 
docker-compose files, we can provide the easiest UX to start a pseudo cluster 
without any additional image creation.

6. FYI: docker image creation as part of the maven build was added with the 
latest k8s patches. I agree with you that it can help in some cases. But I 
don't think we need to replace the volume mounts in the compose files.
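To sketch what a fully self-contained compose file for the second use case might look like, note that nothing is mounted from the host; the service names and settings below are illustrative assumptions, not the shipped compose file:

{code}
version: "3"
services:
   datanode:
      image: apache/ozone
      command: ["ozone","datanode"]
   om:
      image: apache/ozone
      command: ["ozone","om"]
   scm:
      image: apache/ozone
      command: ["ozone","scm"]
{code}

Because every binary ships inside the image, this file can be handed out on its own and run anywhere docker-compose is available.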




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831724#comment-16831724
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] I will rebase the patch to current trunk today.

{quote}Can you please describe what is the problem exactly? I have some 
concerns to create a docker image with each build. It's time and space 
consuming. {quote}

The current ozone docker image is not easily transportable.  A well designed 
docker image should have no host-level binary dependency.  To share the ozone 
docker image with another host, the ozone tarball location on the first host 
must be copied to the second host to reproduce the same in-sync state as the 
docker image, and docker-compose must lock the host-level binaries and the 
docker image together to produce a functional system.  This is not the 
intended way to use a docker image.

If there is no change to the files used to build a docker image layer, docker 
simply keeps a reference count instead of regenerating the entire layer.  Each 
line in the Dockerfile produces an immutable image layer, and docker is pretty 
good at caching the output without rebuilding everything from scratch.  A well 
designed docker image build may take minutes the first time, but subsequent 
builds take only sub-seconds.  Unless the layers have changed, they do not 
take up more space than a reference count.  Ozone tar stitching is the same as 
building a docker layer, but at the host level, and for any kind of system 
test we are already doing tar/untar operations.  The cost of building an Ozone 
image is the same as expanding the tar, so the cost is easily justified.  The 
misconception is that the docker build process is expensive; it really depends 
on how the code is structured.  If the high-frequency changes are placed 
toward the end of the image creation, the time spent in docker build can be 
very small.  We can always skip the docker image build with -DskipDocker.  
This is similar to -DskipShade for people who don't work in those areas.
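As an illustration of the layer-ordering point above, here is a hypothetical Dockerfile sketch (the base image and paths are assumptions, not the actual Ozone Dockerfile): stable steps come first so their layers stay cached, and the frequently rebuilt tarball is added last.

{code}
# Hypothetical sketch, not the actual Ozone Dockerfile
FROM centos:7

# Low-frequency change: this layer is cached after the first build
RUN yum install -y java-1.8.0-openjdk-headless

# High-frequency change placed last, so only this layer
# is rebuilt when a new snapshot tarball is produced
ADD ozone-0.5.0-SNAPSHOT.tar.gz /opt/
CMD ["/opt/ozone-0.5.0-SNAPSHOT/bin/ozone"]
{code}

With this ordering, a rebuild after a code change re-runs only the ADD layer; the yum layer is served from cache.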

{quote}I believe it's more effective to use just the hadoop-runner image and 
mount the built artifact. I would like to understand the problem in more 
details to find an effective solution. Wouldn't be enough to use the compose 
files form the target folder?{quote}

Docker images are designed to host binary executables as layers of immutable 
file-system changes.  This provides a predictable outcome when binaries are 
swapped out between container instances.  When the binary executables live 
outside the container, the stability of the docker instance depends on the 
externally mounted binaries.  Mounting external executable binaries makes the 
setup less reproducible, because it depends heavily on the external mount 
point staying in sync with the container image state.  A standalone docker 
image is a more effective way to share containers than docker-compose 
stitching host-level binaries together with an empty container.




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831550#comment-16831550
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the patch and the improvement, [~eyang]. As of now I can't check it, 
as it needs to be rebased (github pull requests have no such restriction; they 
can be checked from the original PR branch even if they conflict with trunk).

bq.  There is a problem with Ozone docker image is that it mounts ozone tarball 
in expanded form from dist/target directory. This prevents integration-test to 
reiterate on the same ozone binaries.

Can you please describe what exactly the problem is? I have some concerns about 
creating a docker image with each build: it's time and space consuming. I 
believe it's more effective to use just the hadoop-runner image and mount the 
built artifact. I would like to understand the problem in more detail to find 
an effective solution. Wouldn't it be enough to use the compose files from the 
target folder?

ps: I didn't check the patch yet, as it's conflicting, but in case of multiple 
fundamental changes it may be better to commit it in multiple smaller parts 
(IMHO)




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831318#comment-16831318
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue} 0m 0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 21 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 24m 25s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange} 0m 13s{color} | {color:orange} Error running pylint. Please check pylint stderr files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 29m 8s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 6m 19s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red} 0m 3s{color} | {color:red} The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange} 0m 23s{color} | {color:orange} Error running pylint. Please check pylint stderr files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green} 0m 24s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 35s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 7m 2s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 6s{color} |

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831217#comment-16831217
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2670/console in case of 
problems.





[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831213#comment-16831213
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] Patch 002 moves the blockade tests from dist to 
fault-injection-test/network-tests.  It also creates a setup for disk based 
tests.  There is a problem with the Ozone docker image: it mounts the ozone 
tarball, in expanded form, from the dist/target directory.  This prevents 
integration-test from reiterating on the same ozone binaries.  I made a fix 
for the docker image to build as a separate submodule using maven-assembly 
instead of dist-tar-stitching.  Dist-tar-stitching was invented to fix symlink 
support in the tarball; however, it introduced a regression in maven 
dependencies: maven is unable to reference the ozone tarball for the docker 
image build.  I verified that the ozone tarball does not require symlinks, 
therefore switching to maven-assembly can produce the Ozone docker image in 
the maven build more efficiently.
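A sketch of what the maven-assembly wiring in the docker submodule could look like; the plugin coordinates and goal are standard maven-assembly-plugin usage, but the execution id and descriptor path are hypothetical:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>ozone-dist</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>src/main/assembly/ozone.xml</descriptor>
        </descriptors>
        <!-- the tar.gz is attached as a regular maven artifact, so
             downstream modules (e.g. the docker image build) can
             reference it, unlike the output of dist-tar-stitching -->
        <formats>
          <format>tar.gz</format>
        </formats>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}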




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-04-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830481#comment-16830481
 ] 

Eric Yang commented on HDDS-1458:
-

{quote}I don't think it's a strong limitation to make the docker run to 
privileged.{quote}

Cool, I will keep the --privileged option in the patch.

{quote}But it may be useful to move out blockade tests from dist to a separated 
project if you prefer it.{quote}

I looked at the integration-test project and found that it is used to build a 
MiniOzone cluster.  I agree that the blockade code belongs in a project 
separate from dist and integration-test, and I have done the refactoring for 
the next patch.

{quote}I would use a dedicated profile. The standard mvn verify (or mvn 
install) is expected to be used by new contributors. I would keep that one as 
simple as possible.{quote}

I will keep -Pit for triggering the fault injection tests.

{quote}One additional question about disk failure testing (If I understood well 
that will be implemented as a next step). How they are connected with blockade 
tests? Do you need any functionality from blockade? Can we use the robot test 
based smoketests for the same?{quote}

Disk tests are written using docker-compose with mountable data disks.  They 
are a separate project from the network tests.  I created a submodule called 
fault-injection-test, with disk-tests and network-tests as submodules.  Under 
disk-tests, there are read-write-test, read-only-test, and corruption-test.  
The blockade tests are stored in network-tests.  The Jenkinsfile will be 
updated to include the -Pit profile to trigger the tests.
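The module layout described above, sketched as a tree (directory names are taken from this comment; the exact parent path is an assumption):

{code}
fault-injection-test/
|-- network-tests/        <- blockade-based tests, triggered by -Pit
`-- disk-tests/
    |-- read-write-test/
    |-- read-only-test/
    `-- corruption-test/
{code}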




[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-04-30 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830311#comment-16830311
 ] 

Elek, Marton commented on HDDS-1458:


One additional question about disk failure testing (if I understood well, that 
will be implemented as a next step): how are they connected with the blockade 
tests? Do you need any functionality from blockade? Can we use the robot-test 
based smoketests for the same purpose?



