errose28 commented on code in PR #6458:
URL: https://github.com/apache/ozone/pull/6458#discussion_r1548677674


##########
hadoop-ozone/dist/src/main/compose/testlib.sh:
##########
@@ -574,3 +574,35 @@ wait_for_root_certificate(){
  echo "Timed out waiting on $count root certificates. Current timestamp " $(date +"%T")
   return 1
 }
+
+execute_s3a_tests() {
+  local bucket="$1"
+
+  if [[ -z ${bucket} ]]; then
+    echo "Required argument: the S3 bucket to be tested" >&2
+    return 1
+  fi
+
+  if [[ -z ${HADOOP_AWS_DIR} ]] || [[ ! -e ${HADOOP_AWS_DIR} ]] || [[ ! -d ${HADOOP_AWS_DIR}/src/test/resources ]]; then
+    echo "Set HADOOP_AWS_DIR to the directory with hadoop-aws sources" >&2
+    return 1
+  fi
+
+  if [[ -z ${OZONE_S3G_ADDRESS} ]]; then
+    echo "Set OZONE_S3G_ADDRESS to the address of S3 Gateway" >&2
+    return 1
+  fi
+
+  pushd ${HADOOP_AWS_DIR}
+  mvn -B -V --no-transfer-progress \
+    -Dtest.fs.s3a.endpoint="${OZONE_S3G_ADDRESS}" \
+    -Dcustom.fs.s3a.name="s3a://${bucket}/" \
+    -Dtest='ITestS3AContract*, !ITestS3AContractDistCp, !ITestS3AContractEtag, !ITestS3AContractGetFileStatusV1List, !ITestS3AContractMkdir, !ITestS3AContractRename' \

Review Comment:
   Let's add a comment explaining that these tests are excluded because they don't yet pass, and link to the existing Jira issues (or maybe just the parent Jira).
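A hedged sketch of what that exclusion note could look like. The Jira ID is a placeholder, not a real issue number; the exclusion list is copied from the diff above:

```shell
# Sketch only: HDDS-XXXXX below is a placeholder for the real parent Jira.
# These contract tests are excluded because they do not yet pass against
# Ozone's S3 Gateway; see the parent Jira (HDDS-XXXXX) for tracking issues.
S3A_TEST_EXCLUDES='!ITestS3AContractDistCp, !ITestS3AContractEtag, !ITestS3AContractGetFileStatusV1List, !ITestS3AContractMkdir, !ITestS3AContractRename'
S3A_TEST_PATTERN="ITestS3AContract*, ${S3A_TEST_EXCLUDES}"
echo "${S3A_TEST_PATTERN}"
```

Keeping the exclusions in a named variable next to the comment makes it harder for the list and its justification to drift apart.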



##########
hadoop-ozone/dist/src/main/compose/common/s3a-test.sh:
##########
@@ -0,0 +1,41 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+if [[ ${SECURITY_ENABLED} == "true" ]]; then
+  execute_robot_test s3g kinit.robot
+  # TODO get secret

Review Comment:
   Let's clarify this comment to note that secure testing of s3a is not yet supported.
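A hedged sketch of how the TODO might eventually be resolved once secure s3a testing is supported. `ozone s3 getsecret` is the Ozone CLI command for obtaining AWS-style credentials after kinit, but the output parsing below assumes the `awsAccessKey=`/`awsSecret=` output format and is untested here:

```shell
# Hypothetical helper, not part of the PR: fetch S3 credentials from a
# Kerberized cluster after kinit and export them for the s3a tests.
fetch_s3_secret() {
  local output
  # Assumes `ozone s3 getsecret` prints lines like awsAccessKey=... / awsSecret=...
  output="$(ozone s3 getsecret)" || return 1
  AWS_ACCESS_KEY_ID="$(echo "$output" | awk -F= '/awsAccessKey/ {print $2}')"
  AWS_SECRET_ACCESS_KEY="$(echo "$output" | awk -F= '/awsSecret/ {print $2}')"
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
}
```

The function is only defined here, not called, since it needs a live secure cluster to run.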



##########
hadoop-ozone/dist/src/main/compose/testlib.sh:
##########
@@ -574,3 +574,35 @@ wait_for_root_certificate(){
  echo "Timed out waiting on $count root certificates. Current timestamp " $(date +"%T")
   return 1
 }
+
+execute_s3a_tests() {

Review Comment:
   Why not add this function to `s3a-test.sh`? It seems better suited there, since that file drives the tests while this one holds more general helper functions.



##########
hadoop-ozone/dev-support/checks/acceptance.sh:
##########
@@ -19,15 +19,19 @@ set -u -o pipefail
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 cd "$DIR/../../.." || exit 1
 
-source "${DIR}/_lib.sh"
+OZONE_ROOT=$(pwd -P)

Review Comment:
   Why is this resolved from the current working directory instead of the absolute location based on `DIR`? Would users invoking this script manually from a different directory get incorrect results?
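A sketch of the alternative raised above: resolve `OZONE_ROOT` from the script's own location (`DIR`) rather than the caller's working directory, so manual invocation from elsewhere still yields the repository root. The `../../..` hop count mirrors the `cd "$DIR/../../.."` already in `acceptance.sh`:

```shell
# Resolve the script's directory first, then walk up to the repo root.
# ${BASH_SOURCE[0]:-$0} guards against contexts where BASH_SOURCE is unset.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]:-$0}" )" >/dev/null 2>&1 && pwd )"
OZONE_ROOT="$( cd "$DIR/../../.." >/dev/null 2>&1 && pwd -P )"
echo "$OZONE_ROOT"
```

With this form the result no longer depends on where the user happened to `cd` before running the check.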



##########
hadoop-ozone/dist/src/main/compose/testlib.sh:
##########
@@ -574,3 +574,35 @@ wait_for_root_certificate(){
  echo "Timed out waiting on $count root certificates. Current timestamp " $(date +"%T")
   return 1
 }
+
+execute_s3a_tests() {
+  local bucket="$1"
+
+  if [[ -z ${bucket} ]]; then
+    echo "Required argument: the S3 bucket to be tested" >&2
+    return 1
+  fi
+
+  if [[ -z ${HADOOP_AWS_DIR} ]] || [[ ! -e ${HADOOP_AWS_DIR} ]] || [[ ! -d ${HADOOP_AWS_DIR}/src/test/resources ]]; then
+    echo "Set HADOOP_AWS_DIR to the directory with hadoop-aws sources" >&2
+    return 1
+  fi
+
+  if [[ -z ${OZONE_S3G_ADDRESS} ]]; then
+    echo "Set OZONE_S3G_ADDRESS to the address of S3 Gateway" >&2
+    return 1
+  fi
+
+  pushd ${HADOOP_AWS_DIR}
+  mvn -B -V --no-transfer-progress \
+    -Dtest.fs.s3a.endpoint="${OZONE_S3G_ADDRESS}" \
+    -Dcustom.fs.s3a.name="s3a://${bucket}/" \

Review Comment:
   Where are these properties read in the Hadoop code? I'm only seeing the test endpoint referenced [here](https://github.com/apache/hadoop/blob/7fb9c306e22eee334667d4b761e3fd330f45cb01/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java#L104). Also, how do these values interact with auth-keys.xml?
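For context on the auth-keys.xml question: the hadoop-aws integration tests conventionally read the test filesystem URI and credentials from `src/test/resources/auth-keys.xml`. A hedged sketch of such a file, generated from shell; the property names are standard S3A keys documented in hadoop-aws, while the bucket and endpoint values are placeholders matching the `-D` flags in the diff above:

```shell
# Sketch only: values are placeholders, not taken from the PR.
tmp="$(mktemp -d)"
cat > "$tmp/auth-keys.xml" <<'EOF'
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://bucket1/</value>
  </property>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>http://localhost:9878</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>
EOF
echo "$tmp/auth-keys.xml"
```

Whether the `-D` system properties in the Maven invocation override or merely seed these file-based settings is exactly the interaction the review question asks about.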



##########
hadoop-ozone/dist/src/main/compose/test-all.sh:
##########
@@ -46,7 +46,8 @@ if [[ "${OZONE_WITH_COVERAGE}" == "true" ]]; then
   cp /tmp/jacoco-combined.exec "$SCRIPT_DIR"/result
 fi
 
-generate_report "acceptance" "${ALL_RESULT_DIR}" "${XUNIT_RESULT_DIR}"
-
+if [[ "${OZONE_ACCEPTANCE_SUITE:-}" != "s3a" ]]; then
+  generate_report "acceptance" "${ALL_RESULT_DIR}" "${XUNIT_RESULT_DIR}"

Review Comment:
   Maybe add a comment, or rename this function, to clarify that it only applies to tests run with Robot Framework.
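A sketch of the rename option; the new name is a suggestion, not taken from the PR:

```shell
# Hypothetical wrapper whose name makes the Robot Framework scope explicit.
# Delegates to the existing generate_report helper in testlib.sh.
generate_robot_report() {
  generate_report "$@"
}
```

A wrapper like this lets call sites migrate to the clearer name without touching the existing helper.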



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

