[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=189155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-189155
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 23/Jan/19 19:37
Start Date: 23/Jan/19 19:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 189155)
Time Spent: 2h  (was: 1h 50m)

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is created from a comment by [~elek]:
> 1. I think sooner or later we need to run the ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class REDUCED_REDUNDANCY'. (But we can do it in a separate jira.)
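The scale-up plus the datanode readiness wait discussed later in this thread can be sketched as below. This is a rough illustration only; wait_for_count and its arguments are hypothetical names, not the actual contents of test.sh.

```shell
#!/usr/bin/env bash
# Sketch of the pattern under review: scale the cluster, then poll until the
# expected number of datanodes is up or a timeout elapses. The function and
# argument names are illustrative, not the real test.sh code.

# Wait until check_cmd prints at least `expected`, or give up after `timeout`
# seconds. Relies on bash's built-in SECONDS elapsed-time counter.
wait_for_count() {
  local expected="$1"    # number of instances we need (e.g. 3 datanodes)
  local timeout="$2"     # seconds to wait before giving up
  local check_cmd="$3"   # command that prints the current count
  local current
  SECONDS=0              # assigning to SECONDS resets bash's elapsed timer
  while [ "$SECONDS" -lt "$timeout" ]; do
    current=$($check_cmd)          # intentionally unquoted: allows "cmd arg"
    if [ "${current:-0}" -ge "$expected" ]; then
      return 0
    fi
    sleep 2                        # poll every 2 seconds
  done
  return 1               # timed out; the caller may still run the tests
}
```

In the reviewed change the check would count healthy datanodes from docker-compose output, and per the discussion below a timeout does not abort the run: the robot tests fail on their own and the failure shows up in the test results.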



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=189151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-189151
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 23/Jan/19 19:30
Start Date: 23/Jan/19 19:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r250340628
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   Thank you @elek for the clarification.
   Overall it looks good to me.
 



Issue Time Tracking
---

Worklog Id: (was: 189151)
Time Spent: 1h 50m  (was: 1h 40m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=188805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-188805
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 23/Jan/19 10:26
Start Date: 23/Jan/19 10:26
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r250134623
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   This is a bash feature; SECONDS is incremented under the hood:
   
   ```
   SECONDS
       Each time this parameter is referenced, the number of seconds since
       shell invocation is returned. If a value is assigned to SECONDS, the
       value returned upon subsequent references is the number of seconds
       since the assignment plus the value assigned. If SECONDS is unset, it
       loses its special properties, even if it is subsequently reset.
   ```
   (from man bash)
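   The behavior quoted from the man page can be checked directly; a quick
   illustration, independent of the patch:
   
   ```shell
   # bash's built-in SECONDS timer: assigning to it restarts the count, and
   # it advances with wall-clock time on every reference.
   SECONDS=0
   sleep 1
   echo "elapsed: ${SECONDS}s"
   ```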
 



Issue Time Tracking
---

Worklog Id: (was: 188805)
Time Spent: 1h 40m  (was: 1.5h)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187906
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 20:37
Start Date: 21/Jan/19 20:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249573167
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   Yes, it is updated; it is in the diff. Thanks for the update.
 



Issue Time Tracking
---

Worklog Id: (was: 187906)
Time Spent: 1.5h  (was: 1h 20m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187905&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187905
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 20:36
Start Date: 21/Jan/19 20:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249572853
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   So, you want to update it?
 



Issue Time Tracking
---

Worklog Id: (was: 187905)
Time Spent: 1h 20m  (was: 1h 10m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187903&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187903
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 20:35
Start Date: 21/Jan/19 20:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249572760
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   Yes, but now I have a question: where is the value of SECONDS
   incremented? We set SECONDS=0, and after that I don't see it being
   modified inside the loop.
 



Issue Time Tracking
---

Worklog Id: (was: 187903)
Time Spent: 1h 10m  (was: 1h)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187904&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187904
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 20:35
Start Date: 21/Jan/19 20:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249571835
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   Even if we continue, almost all of the tests will fail, as we run with
   replication STANDARD, which needs at least 3 datanodes. But I am fine
   with it for now; even if we continue, we can improve it later.
 



Issue Time Tracking
---

Worklog Id: (was: 187904)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187647
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 09:39
Start Date: 21/Jan/19 09:39
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249378537
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   Sure, we can. (If you see the same error message, then the osx already
   has a new enough docker-compose.)
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 187647)
Time Spent: 50m  (was: 40m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187649
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 09:43
Start Date: 21/Jan/19 09:43
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249379948
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   Yes, we continue.
   
   I also considered failing from the bash script itself, but always
   continuing may be better:
   
   * You will get all of the test results even if one cluster can't be
     scaled up.
   * The bash script could keep iterating when the scale-up fails, without
     exiting with -1, but I am not sure about the visibility of the problem
     in that case.
   * The robot tests will fail anyway, and the failure will be part of the
     test results.
   
   But I can be convinced to do it a different way.
 



Issue Time Tracking
---

Worklog Id: (was: 187649)
Time Spent: 1h  (was: 50m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=187645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-187645
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 21/Jan/19 09:36
Start Date: 21/Jan/19 09:36
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r249377544
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   I think it's fine. The sleep interval is independent, as we check the
   elapsed time based on the $SECONDS variable. It will iterate every 2
   seconds until 30 seconds have elapsed (if I didn't miss something).
 



Issue Time Tracking
---

Worklog Id: (was: 187645)
Time Spent: 40m  (was: 0.5h)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=185138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-185138
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 15/Jan/19 06:13
Start Date: 15/Jan/19 06:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r246960327
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
 
 Review comment:
   And one more question: if wait_for_datanodes() fails to start 3
   datanodes, we still continue with the tests, right?
 



Issue Time Tracking
---

Worklog Id: (was: 185138)
Time Spent: 0.5h  (was: 20m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=185139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-185139
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 15/Jan/19 06:13
Start Date: 15/Jan/19 06:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r246957925
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
   Should it be "don't give it up until 60 seconds", as we have a sleep of
   2 seconds inside?
 



Issue Time Tracking
---

Worklog Id: (was: 185139)
Time Spent: 0.5h  (was: 20m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=185137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-185137
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 15/Jan/19 06:13
Start Date: 15/Jan/19 06:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r246959924
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -38,8 +72,8 @@ execute_tests(){
   echo "-"
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d
-  echo "Waiting 30s for cluster start up..."
-  sleep 30
+  docker-compose -f "$COMPOSE_FILE" scale datanode=3
 
 Review comment:
   Minor nit:
   Can we use docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3?
   
   With the current scale command we see this on the console:
   WARNING: The scale command is deprecated. Use the up command with the
   --scale flag instead.
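   For reference, the non-deprecated invocation suggested here would look
   like the following (a command sketch only; it needs a running Docker
   daemon and the COMPOSE_FILE used by test.sh):
   
   ```shell
   # Scale at `up` time instead of using the deprecated `scale` subcommand.
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   ```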
 



Issue Time Tracking
---

Worklog Id: (was: 185137)
Time Spent: 20m  (was: 10m)




[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=183653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-183653
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 10/Jan/19 10:53
Start Date: 10/Jan/19 10:53
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #462: HDDS-764. Run S3 
smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 183653)
Time Spent: 10m
Remaining Estimate: 0h
