[ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187649
 ]

ASF GitHub Bot logged work on HDDS-764:
---------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Jan/19 09:43
            Start Date: 21/Jan/19 09:43
    Worklog Time Spent: 10m 
      Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249379948
 
 

 ##########
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##########
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   Yes, we continue.
   
   I also considered failing from the bash script itself, but always 
continuing may be better:
   
    * You get all of the test results even if one cluster can't be scaled 
up.
    * The bash script could keep iterating even if the scale-up failed, 
without exiting with -1, but I am not sure about the visibility of the 
problem in that case.
    * The robot tests will fail anyway, and the failure will be part of the 
test result.
   
   But I can be convinced to do it in a different way.
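
   A minimal sketch of the "always continue" behavior described above 
(`count_datanodes` is a hypothetical stand-in for however test.sh actually 
queries the cluster; the 3-datanode / 30-second limits come from the 
function's description, everything else is an assumption):

```shell
#!/usr/bin/env bash
# Sketch only: wait up to 30 seconds for 3 datanodes, warn on timeout,
# but still return 0 so the remaining compose clusters are tested and
# the robot tests report the failure.
# count_datanodes is a hypothetical helper, not part of the real patch.

wait_for_datanodes() {
  local retries=0
  while [ "$retries" -lt 30 ]; do
    if [ "$(count_datanodes)" -ge 3 ]; then
      echo "3 datanodes are up"
      return 0
    fi
    sleep 1
    retries=$((retries + 1))
  done
  echo "WARNING: datanodes did not come up in time" >&2
  return 0   # deliberately continue instead of exiting with -1
}
```

   The trade-off is exactly the one in the list above: a non-zero exit 
would surface the problem earlier, but returning 0 keeps the full test 
matrix running.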
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 187649)
    Time Spent: 1h  (was: 50m)

> Run S3 smoke tests with replication STANDARD.
> ---------------------------------------------
>
>                 Key: HDDS-764
>                 URL: https://issues.apache.org/jira/browse/HDDS-764
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Bharat Viswanadham
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: newbie, pull-request-available
>         Attachments: HDDS-764.001.patch
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> This Jira was created from the comment by [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' flag. (But we can do that in a separate jira.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
