paul-rogers commented on code in PR #13940:
URL: https://github.com/apache/druid/pull/13940#discussion_r1139411707


##########
.github/workflows/reusable-revised-its.yml:
##########
@@ -72,6 +72,15 @@ jobs:
           path: ./**/target
           key: maven-${{ runner.os }}-${{ inputs.build_jdk }}-targets-${{ 
github.sha }}
 
+      - name: Clean up docker

Review Comment:
  For the "new ITs", the `it.sh` script itself should handle cleanup. A bug currently prevents this; it will be fixed in another PR, and once that lands, the script should do the work. Since cleanup has to work for developers running locally, it should work the same way for CI builds.
   
   Given that, if there are any containers left to stop in this script block, that indicates a bug in our `it.sh` script. Thus, it would be good to:
   
   * Count the number of running containers.
   * If more than 0:
      * Echo the list of running containers.
      * Do the force-shutdown steps.
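   The steps above might look something like the following sketch (function names are mine, not from the PR; the counting logic is split into a function so it can be exercised without Docker):

   ```shell
   #!/bin/bash

   # Count container IDs read on stdin (i.e. the output of `docker ps -q`).
   count_running_containers() {
     # Each `docker ps -q` line is a hex container ID; count non-blank lines.
     grep -c '^[0-9a-f]' || true
   }

   # Hypothetical cleanup step: if it.sh did its job, the count is 0 and
   # this does nothing. A nonzero count is logged as evidence of an it.sh bug
   # before the force shutdown.
   cleanup_docker() {
     local count
     count=$(docker ps -q | count_running_containers)
     if [ "$count" -gt 0 ]; then
       echo "WARNING: $count container(s) still running; it.sh should have stopped them:"
       docker ps                          # echo the list of offenders
       docker stop $(docker ps -q)        # force shutdown
       docker rm -f $(docker ps -aq)
     fi
   }
   ```

   With this shape, a passing build's log stays quiet, and a failing cleanup leaves a visible container list to debug against.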
   
   Another question is the set of files not owned by `$USER_NAME`. What might 
cause that to occur? There are several possible cases:
   
   * The cached files grabbed earlier are owned by another user. (This was the 
case on an older proprietary Jenkins setup.)
   * The Docker containers created files owned by another user.
   
   Both cases are bugs. Mixed ownership will cause all manner of problems if it results in an inability to delete files, overwrite files, or create directories. Thus, rather than simply cleaning up, we should log such files so we can fix the underlying cause. This comment applies only to the "new IT" case: those run by `it.sh`. The old tests may do all manner of horrible things.
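   A minimal sketch of that logging step, assuming `$USER_NAME` is set by the workflow (the function names here are illustrative, not part of the PR):

   ```shell
   #!/bin/bash

   # List files under a directory that are NOT owned by the expected user.
   # $1: directory to scan; $2: expected owner name.
   list_foreign_files() {
     find "$1" ! -user "$2" -print
   }

   # Log, rather than silently chown, any foreign-owned files so the bug
   # (stale cache or container-written files) can be tracked down.
   check_ownership() {
     local dir="${1:-.}" owner="${2:-$USER_NAME}"
     local foreign
     foreign=$(list_foreign_files "$dir" "$owner")
     if [ -n "$foreign" ]; then
       echo "WARNING: files not owned by $owner (stale cache? container output?):"
       printf '%s\n' "$foreign"
     fi
   }
   ```

   The point of logging instead of fixing in place is that the build output then tells us which of the two suspected causes actually produced the files.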



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

