MartijnVisser commented on a change in pull request #17941:
URL: https://github.com/apache/flink/pull/17941#discussion_r758559976



##########
File path: flink-end-to-end-tests/test-scripts/elasticsearch-common.sh
##########
@@ -37,20 +37,10 @@ function setup_elasticsearch {
     local elasticsearchDir=$TEST_DATA_DIR/elasticsearch
     mkdir -p $elasticsearchDir
     echo "Downloading Elasticsearch from $downloadUrl ..."
-    for i in {1..10};
-    do
-        wget "$downloadUrl" -O $TEST_DATA_DIR/elasticsearch.tar.gz
-        if [ $? -eq 0 ]; then
-            echo "Download successful."
-            echo "Extracting..."
-            tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir --strip-components=1
-            if [ $? -eq 0 ]; then
-                break
-            fi
-        fi
-        echo "Attempt $i failed."
-        sleep 5
-    done
+    retry_times_with_exponential_backoff 10 wget "$downloadUrl" -O $TEST_DATA_DIR/elasticsearch.tar.gz
+    echo "Download successful."
+    echo "Extracting..."
+    tar xzf $TEST_DATA_DIR/elasticsearch.tar.gz -C $elasticsearchDir --strip-components=1

Review comment:
       When looking at the test failures, I've only seen instances where the connection used to download the data was interrupted. Both `curl` and `git` report those as errors, which the Bash script picks up as "something went wrong" (an exit code other than 0) and then retries. I haven't seen instances where a download was reported as completed successfully and then extracting/using the artifacts failed. So should we do this hardening now?
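If that hardening were ever wanted, one option would be to retry download and extraction as a single unit, so that a download that completes but yields a corrupt archive also triggers a retry. A minimal sketch (the `retry_with_backoff` and `download_and_extract` helpers below are hypothetical stand-ins written for illustration, not Flink's actual `retry_times_with_exponential_backoff`):

```shell
#!/usr/bin/env bash
# Minimal stand-in for a retry-with-exponential-backoff wrapper.
# Runs "$@" up to $1 times, doubling the delay after each failure.
retry_with_backoff() {
    local retries=$1; shift
    local delay=1
    for i in $(seq 1 "$retries"); do
        if "$@"; then
            return 0
        fi
        echo "Attempt $i failed; retrying in ${delay}s..."
        sleep "$delay"
        delay=$((delay * 2))  # exponential backoff
    done
    return 1
}

# Hypothetical helper bundling download and extraction into one unit:
# a non-zero exit from either wget or tar makes the whole unit retry.
download_and_extract() {
    local url=$1 archive=$2 target=$3
    wget "$url" -O "$archive" \
        && tar xzf "$archive" -C "$target" --strip-components=1
}

# Usage (sketch):
# retry_with_backoff 10 download_and_extract "$downloadUrl" \
#     "$TEST_DATA_DIR/elasticsearch.tar.gz" "$elasticsearchDir"
```

Whether the extra complexity is worth it depends on whether corrupt-but-completed downloads are actually observed in CI, which is exactly the question raised above.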




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

