GitHub user mccheah commented on the issue:

    https://github.com/apache/spark/pull/20697
  
    In much of the old integration test setup, we were cloning Spark and building it into a TGZ. I wonder if we can skip both of those steps. We shouldn't ever have to clone Spark, since we already have the repository here. Building Spark is a bit trickier, but we might be able to get around that too.
    
    In the `bin/` directory, all of the scripts can run against a Spark repository clone without the repository having been bundled into a tgz. If we look at `bin/spark-class`, we notice the classpath can be picked up from the `jars` directory in the build output of the `assembly` project. I wonder if we can do something similar here: instead of relying on the built tgz itself, build all the dependencies of the integration test into the assembly and pick up the raw Dockerfiles from the repository layout.
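
    To make the idea concrete, here is a rough Scala sketch of how the test harness could resolve what it needs straight from the checkout. The object name, the default Scala version, and the Dockerfile path are illustrative assumptions, not code from this PR; the jars lookup just mirrors the fallback that `bin/spark-class` already performs.

    ```scala
    // Hypothetical sketch: locate Spark artifacts from the repository checkout
    // rather than from an unpacked distribution tgz.
    import java.nio.file.{Files, Path, Paths}

    object LocalSparkLayout {
      // Assumes SPARK_HOME points at the repository root while the tests run.
      private val sparkHome: Path = Paths.get(sys.env("SPARK_HOME"))

      // Mirrors the fallback in bin/spark-class: prefer a packaged jars/ dir,
      // otherwise use the assembly module's build output.
      def jarsDir(scalaVersion: String = "2.11"): Path = {
        val packaged = sparkHome.resolve("jars")
        if (Files.isDirectory(packaged)) packaged
        else sparkHome.resolve(s"assembly/target/scala-$scalaVersion/jars")
      }

      // Dockerfiles read straight from the source tree (path is illustrative).
      def dockerfilesDir: Path =
        sparkHome.resolve("resource-managers/kubernetes/docker/src/main/dockerfiles")
    }
    ```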
    
    Can we look into a non-tgz-dependent version of what we have here?

