elek commented on a change in pull request #462: HDDS-764. Run S3 smoke tests
with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249379948
##########
File path: hadoop-ozone/dist/src/main/smoketest/test.sh
##########
@@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
#Should be writeable from the docker containers where user is different.
chmod ogu+w "$DIR/$RESULT_DIR"
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
Review comment:
Yes, we continue.
I also considered failing from the bash script itself, but always
continuing may be better:
* You will get all of the test results even if one cluster can't be scaled
up.
* The bash script could keep iterating even if the scale-up failed, without
exiting with -1, but I am not sure about the visibility of the problem in
that case.
* The robot tests will fail anyway, and the failure will be part of the
test results.
But I can be convinced to do it a different way.
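
To illustrate, a rough sketch of the "always continue" approach (the
compose-file glob, the readiness check via docker-compose ps and the scale
command are only illustrative here, not the actual test.sh content):

    ## @description wait until 3 datanodes are up (or 30 seconds)
    ## @param the docker-compose file
    wait_for_datanodes(){
      local compose_file="$1"
      local retries=0
      while [[ $retries -lt 30 ]]; do
        # Simplified readiness check: count the running datanode containers.
        local datanodes
        datanodes=$(docker-compose -f "$compose_file" ps datanode 2>/dev/null | grep -c "Up")
        if [[ "$datanodes" -ge 3 ]]; then
          return 0
        fi
        retries=$((retries + 1))
        sleep 1
      done
      # No exit here on purpose: only report the failure to the caller, so
      # the remaining clusters are still tested and the robot tests record
      # the problem in the result.
      return 1
    }

    # The caller keeps iterating even if one cluster can't be scaled up.
    for compose_file in "$DIR"/compose/*/docker-compose.yaml; do
      docker-compose -f "$compose_file" up -d --scale datanode=3
      wait_for_datanodes "$compose_file" \
        || echo "WARNING: datanodes of $compose_file are not up, running the tests anyway"
      # ...run the robot tests against this cluster...
      docker-compose -f "$compose_file" down
    done

Returning a code instead of exiting keeps the decision at the caller: it
can warn loudly but still run every suite, and the failed cluster shows up
in the robot results rather than aborting the whole run.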