dombizita commented on a change in pull request #3187:
URL: https://github.com/apache/ozone/pull/3187#discussion_r828938302
##########
File path: hadoop-ozone/dist/src/main/compose/testlib.sh
##########
@@ -424,33 +424,37 @@ prepare_for_runner_image() {
## @description Executing the Ozone Debug CLI related robot tests
execute_debug_tests() {
+ local prefix=${RANDOM}
- OZONE_DEBUG_VOLUME="cli-debug-volume"
- OZONE_DEBUG_BUCKET="cli-debug-bucket"
- OZONE_DEBUG_KEY="testfile"
+ local volume="cli-debug-volume${prefix}"
+ local bucket="cli-debug-bucket"
+ local key="testfile"
- execute_robot_test datanode debug/ozone-debug-tests.robot
+ execute_robot_test ${SCM} -v "PREFIX:${prefix}" debug/ozone-debug-tests.robot
- corrupt_block_on_datanode
- execute_robot_test datanode debug/ozone-debug-corrupt-block.robot
+ # get block locations for key
+ local chunkinfo="${key}-blocks-${prefix}"
+ docker-compose exec -T ${SCM} bash -c "ozone debug chunkinfo ${volume}/${bucket}/${key}" > "$chunkinfo"
+ local host="$(jq -r '.KeyLocations[0][0]["Datanode-HostName"]' ${chunkinfo})"
+ local container="${host%%.*}"
- docker stop ozone_datanode_2
+ # corrupt the first block of key on one of the datanodes
+ local datafile="$(jq -r '.KeyLocations[0][0].Locations.files[0]' ${chunkinfo})"
+ docker exec "${container}" sed -i -e '1s/^/a/' "${datafile}"
- wait_for_datanode datanode_2 STALE 60
- execute_robot_test datanode debug/ozone-debug-stale-datanode.robot
- wait_for_datanode datanode_2 DEAD 60
- execute_robot_test datanode debug/ozone-debug-dead-datanode.robot
+ execute_robot_test ${SCM} -v "PREFIX:${prefix}" -v CORRUPT_REPLICA:0 debug/ozone-debug-corrupt-block.robot
Review comment:
As I checked, both chunkinfo and read-replicas always list the datanodes in
the same order for the same key. The order is not alphabetical, but it is
consistent between the two tools. Both of them get the datanodes for a block
via its pipeline: `List<DatanodeDetails> datanodeList = pipeline.getNodes();`
returns `new ArrayList<>(nodeStatus.keySet())` built from the Pipeline's
nodeStatus map, which is a LinkedHashMap, so insertion order is preserved.
From what I found, these conditions guarantee that the datanode order is the
same every time we ask the pipeline.
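The ordering argument can be sketched with a minimal standalone example. The hostnames and the `Long` status values below are hypothetical placeholders; only the `new ArrayList<>(nodeStatus.keySet())` pattern mirrors the quoted Pipeline code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PipelineOrderSketch {
    public static void main(String[] args) {
        // Stand-in for Pipeline's nodeStatus: a LinkedHashMap keyed by
        // datanode, so keySet() iterates in insertion order.
        Map<String, Long> nodeStatus = new LinkedHashMap<>();
        nodeStatus.put("dn3.example.com", 0L);
        nodeStatus.put("dn1.example.com", 0L);
        nodeStatus.put("dn2.example.com", 0L);

        // Mirrors Pipeline#getNodes(): new ArrayList<>(nodeStatus.keySet()).
        List<String> nodes = new ArrayList<>(nodeStatus.keySet());

        // Not alphabetical, but identical on every call against the same map,
        // which is why both tools see the replicas in the same order.
        System.out.println(nodes);
    }
}
```

Running this prints `[dn3.example.com, dn1.example.com, dn2.example.com]`: insertion order, not sorted order, and stable across repeated calls.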
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]