Github user kl0u commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5807#discussion_r187676881
  
    --- Diff: flink-end-to-end-tests/test-scripts/test_queryable_state_restart_tm.sh ---
    @@ -0,0 +1,120 @@
    +#!/usr/bin/env bash
    +################################################################################
    +#  Licensed to the Apache Software Foundation (ASF) under one
    +#  or more contributor license agreements.  See the NOTICE file
    +#  distributed with this work for additional information
    +#  regarding copyright ownership.  The ASF licenses this file
    +#  to you under the Apache License, Version 2.0 (the
    +#  "License"); you may not use this file except in compliance
    +#  with the License.  You may obtain a copy of the License at
    +#
    +#      http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#  Unless required by applicable law or agreed to in writing, software
    +#  distributed under the License is distributed on an "AS IS" BASIS,
    +#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +#  See the License for the specific language governing permissions and
    +#  limitations under the License.
    +################################################################################
    +
    +source "$(dirname "$0")"/common.sh
    +source "$(dirname "$0")"/queryable_state_base.sh
    +
    +QUERYABLE_STATE_SERVER_JAR=${TEST_INFRA_DIR}/../../flink-end-to-end-tests/flink-queryable-state-test/target/QsStateProducer.jar
    +QUERYABLE_STATE_CLIENT_JAR=${TEST_INFRA_DIR}/../../flink-end-to-end-tests/flink-queryable-state-test/target/QsStateClient.jar
    +
    +#####################
    +# Test that queryable state works as expected with HA mode when restarting a taskmanager
    +#
    +# The general outline is like this:
    +# 1. start cluster in HA mode with 1 TM
    +# 2. start a job that exposes queryable state from a mapstate with increasing num. of keys
    +# 3. query the state with a queryable state client and expect no error to occur
    +# 4. stop the TM
    +# 5. check how many keys were in our mapstate at the time of the latest snapshot
    +# 6. start a new TM
    +# 7. query the state with a queryable state client and retrieve the number of elements
    +#    in the mapstate
    +# 8. expect the number of elements in the mapstate after restart of TM to be > number of elements
    +#    at last snapshot
    +#
    +# Globals:
    +#   QUERYABLE_STATE_SERVER_JAR
    +#   QUERYABLE_STATE_CLIENT_JAR
    +# Arguments:
    +#   None
    +# Returns:
    +#   None
    +#####################
    +function run_test() {
    +    local EXIT_CODE=0
    +    local PARALLELISM=1 # parallelism of queryable state app
    +    local PORT="9069" # port of queryable state server
    +
    +    clean_stdout_files # to ensure there are no files accidentally left behind by previous tests
    +    link_queryable_state_lib
    +    start_ha_cluster
    +
    +    local JOB_ID=$(${FLINK_DIR}/bin/flink run \
    +        -p ${PARALLELISM} \
    +        -d ${QUERYABLE_STATE_SERVER_JAR} \
    +        --state-backend "rocksdb" \
    +        --tmp-dir file://${TEST_DATA_DIR} \
    +        | awk '{print $NF}' | tail -n 1)
    +
    +    wait_job_running ${JOB_ID}
    +
    +    sleep 20 # sleep a little to have some state accumulated
    +
    +    SERVER=$(get_queryable_state_server_ip)
    +    PORT=$(get_queryable_state_proxy_port)
    +
    +    echo SERVER: ${SERVER}
    +    echo PORT: ${PORT}
    +
    +    java -jar ${QUERYABLE_STATE_CLIENT_JAR} \
    +        --host ${SERVER} \
    +        --port ${PORT} \
    +        --iterations 1 \
    +        --job-id ${JOB_ID}
    +
    +    if [ $? != 0 ]; then
    +        echo "An error occurred when executing queryable state client"
    +        exit 1
    +    fi
    +
    +    kill_random_taskmanager
    +
    +    latest_snapshot_count=$(cat $FLINK_DIR/log/*out* | grep "on snapshot" | tail -n 1 | awk '{print $4}')
    +    echo "Latest snapshot count was ${latest_snapshot_count}"
    +
    +    sleep 10 # this is a little longer than the heartbeat timeout so that the TM is gone
    +
    +    start_and_wait_for_tm
    +
    +    wait_job_running ${JOB_ID}
    +
    --- End diff ---
    
    Instead of just waiting for the job to be running, it is safer to ask
    through `REST` for the number of successful checkpoints of the job right
    after killing the TM, and then expect to see more successful checkpoints
    after the new TM is up. This is safer because it guarantees that the
    backend is initialized properly, and it can be done similarly to how it
    is done in `test_ha.sh`.
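
    A minimal sketch of that check, assuming the default REST port `8081`
    and the `/jobs/:jobid/checkpoints` endpoint of the monitoring REST API;
    the helper `get_completed_checkpoints` is hypothetical, and the `grep`
    parse assumes the `counts` object is the first place `"completed"`
    appears in the response:

    ```sh
    # Hypothetical helper: read the completed-checkpoint count for a job
    # from Flink's monitoring REST API (default port 8081 assumed).
    function get_completed_checkpoints() {
        local job_id=$1
        curl -s "http://localhost:8081/jobs/${job_id}/checkpoints" \
            | grep -oE '"completed":[0-9]+' | head -n 1 | awk -F: '{print $2}'
    }

    # Right after killing the TM, record the baseline ...
    BASELINE=$(get_completed_checkpoints ${JOB_ID})

    start_and_wait_for_tm

    # ... then poll until strictly more checkpoints have completed, which
    # guarantees that the restored backend is initialized and is
    # checkpointing again.
    while [[ $(get_completed_checkpoints ${JOB_ID}) -le ${BASELINE} ]]; do
        sleep 1
    done
    ```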

