[ 
https://issues.apache.org/jira/browse/HDDS-1525?focusedWorklogId=274110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-274110
 ]

ASF GitHub Bot logged work on HDDS-1525:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Jul/19 14:52
            Start Date: 09/Jul/19 14:52
    Worklog Time Spent: 10m 
      Work Description: hadoop-yetus commented on pull request #1065: 
HDDS-1525. Mapreduce failure when using Hadoop 2.7.5
URL: https://github.com/apache/hadoop/pull/1065#discussion_r301631219
 
 

 ##########
 File path: hadoop-ozone/dist/src/main/compose/testlib.sh
 ##########
 @@ -72,21 +72,35 @@ start_docker_env(){
 ## @param        robot test file or directory relative to the smoketest dir
 execute_robot_test(){
   CONTAINER="$1"
-  TEST="$2"
+  shift 1 #Remove the first argument, which was the container name
+  ARGUMENTS=($@)
+  TEST="${ARGUMENTS[${#ARGUMENTS[@]}-1]}" #Use the last element as the test name
+  unset 'ARGUMENTS[${#ARGUMENTS[@]}-1]' #Remove the last element; the remaining elements are the custom parameters
   TEST_NAME=$(basename "$TEST")
   TEST_NAME="$(basename "$COMPOSE_DIR")-${TEST_NAME%.*}"
   set +e
   OUTPUT_NAME="$COMPOSE_ENV_NAME-$TEST_NAME-$CONTAINER"
   OUTPUT_PATH="$RESULT_DIR_INSIDE/robot-$OUTPUT_NAME.xml"
   docker-compose -f "$COMPOSE_FILE" exec -T "$CONTAINER" mkdir -p 
"$RESULT_DIR_INSIDE"
-  docker-compose -f "$COMPOSE_FILE" exec -e  SECURITY_ENABLED="${SECURITY_ENABLED}" -T "$CONTAINER" python -m robot --log NONE -N "$TEST_NAME" --report NONE "${OZONE_ROBOT_OPTS[@]}" --output "$OUTPUT_PATH" "$SMOKETEST_DIR_INSIDE/$TEST"
+  docker-compose -f "$COMPOSE_FILE" exec -e  SECURITY_ENABLED="${SECURITY_ENABLED}" -T "$CONTAINER" python -m robot ${ARGUMENTS[@]} --log NONE -N "$TEST_NAME" --report NONE "${OZONE_ROBOT_OPTS[@]}" --output "$OUTPUT_PATH" "$SMOKETEST_DIR_INSIDE/$TEST"
 
  FULL_CONTAINER_NAME=$(docker-compose -f "$COMPOSE_FILE" ps | grep "_${CONTAINER}_" | head -n 1 | awk '{print $1}')
   docker cp "$FULL_CONTAINER_NAME:$OUTPUT_PATH" "$RESULT_DIR/"
   set -e
 
 }
 
+
+## @description  Execute specific command in docker container
+## @param        container name
+## @param        specific command to execute
+execute_command_in_container(){
+  set -e
+  docker-compose -f "$COMPOSE_FILE" exec $@
 
 Review comment:
   shellcheck:42: error: Double quote array expansions to avoid re-splitting elements. [SC2068]
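   
   As a minimal sketch of the quoting shellcheck asks for (reusing the names from the diff above; one possible fix, not necessarily the committed one), double-quoting the array and positional-parameter expansions keeps elements that contain spaces from being re-split:
   
       # Quote the array expansion so each custom parameter stays a single argument:
       docker-compose -f "$COMPOSE_FILE" exec -e SECURITY_ENABLED="${SECURITY_ENABLED}" -T "$CONTAINER" \
           python -m robot "${ARGUMENTS[@]}" --log NONE -N "$TEST_NAME" --report NONE \
           "${OZONE_ROBOT_OPTS[@]}" --output "$OUTPUT_PATH" "$SMOKETEST_DIR_INSIDE/$TEST"
   
       # The same applies to $@ in execute_command_in_container:
       docker-compose -f "$COMPOSE_FILE" exec "$@"
   
   With the quoted form, a hypothetical call such as execute_robot_test scm -v "NAME:value with spaces" basic.robot would deliver the -v option value to robot as one argument instead of three.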
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 274110)
    Time Spent: 1h 10m  (was: 1h)

> Mapreduce failure when using Hadoop 2.7.5
> -----------------------------------------
>
>                 Key: HDDS-1525
>                 URL: https://issues.apache.org/jira/browse/HDDS-1525
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Filesystem
>    Affects Versions: 0.4.0
>            Reporter: Sammi Chen
>            Assignee: Elek, Marton
>            Priority: Blocker
>              Labels: pull-request-available
>         Attachments: HDDS-1525.poc.patch, teragen.log
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Integrating Ozone (0.4 branch) with Hadoop 2.7.5, "hdfs dfs -ls /" passes, 
> while teragen fails. 
> When -verbose:class is added to the Java options, it shows that the class 
> KeyProvider is loaded twice by different classloaders, while it is loaded 
> only once when executing "hdfs dfs -ls /". 
> All jars under share/ozone/lib are added to the Hadoop classpath except the 
> current Ozone filesystem lib jar.
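
A sketch of how such a class-loading trace can be captured (the examples jar path and teragen arguments below are illustrative assumptions, not taken from the report):

    # Illustrative only: enable JVM class-loading logs for the client JVM
    # and filter for the doubly-loaded class.
    export HADOOP_OPTS="-verbose:class"
    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        teragen 1000 /tmp/teragen.out 2>&1 | grep 'KeyProvider'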



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
