PHILO-HE commented on code in PR #11351:
URL: https://github.com/apache/incubator-gluten/pull/11351#discussion_r2684951629


##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -359,179 +356,12 @@ jobs:
               --extra-conf=spark.gluten.ras.enabled=true 
           "
 
-  tpc-test-ubuntu-oom:
-    needs: build-native-lib-centos-7
-    strategy:
-      fail-fast: false
-      matrix:
-        spark: [ "spark-3.2" ]
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Maximize build disk space
-        shell: bash
-        run: |
-          df -h
-          set -euo pipefail
-          echo "Removing unwanted software... "
-          sudo rm -rf /usr/share/dotnet
-          sudo rm -rf /usr/local/lib/android
-          sudo rm -rf /opt/ghc
-          sudo rm -rf /opt/hostedtoolcache/CodeQL
-          sudo docker image prune --all --force > /dev/null
-          df -h
-      - uses: actions/checkout@v2
-      - name: Download All Native Artifacts
-        uses: actions/download-artifact@v4
-        with:
-          name: velox-native-lib-centos-7-${{github.sha}}
-          path: ./cpp/build/releases/
-      - name: Download All Arrow Jar Artifacts
-        uses: actions/download-artifact@v4
-        with:
-          name: arrow-jars-centos-7-${{github.sha}}
-          path: /home/runner/.m2/repository/org/apache/arrow/
-      - name: Setup java
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y openjdk-8-jdk wget
-      - name: Set environment variables
-        run: |
-          echo "JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> $GITHUB_ENV
-      - name: Build for Spark ${{ matrix.spark }}
-        run: |
-          cd $GITHUB_WORKSPACE/ 
-          $MVN_CMD clean install -P${{ matrix.spark }} -Pbackends-velox -DskipTests
-          cd $GITHUB_WORKSPACE/tools/gluten-it
-          $GITHUB_WORKSPACE/build/mvn clean install -P${{ matrix.spark }}
-          GLUTEN_IT_JVM_ARGS=-Xmx6G sbin/gluten-it.sh data-gen-only --local --benchmark-type=ds -s=30.0 --threads=12
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q67/Q95 low memory, memory isolation off
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q67,q95 -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            -d=ISOLATION:OFF,spark.gluten.memory.isolation=false \
-            -d=OFFHEAP_SIZE:6g,spark.memory.offHeap.size=6g \
-            -d=OFFHEAP_SIZE:4g,spark.memory.offHeap.size=4g \
-            -d=OVER_ACQUIRE:0.3,spark.gluten.memory.overAcquiredMemoryRatio=0.3 \
-            -d=OVER_ACQUIRE:0.5,spark.gluten.memory.overAcquiredMemoryRatio=0.5 \
-            --excluded-dims=OFFHEAP_SIZE:4g
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q67 low memory, memory isolation on
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q67 -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            -d=ISOLATION:ON,spark.gluten.memory.isolation=true,spark.memory.storageFraction=0.1 \
-            -d=OFFHEAP_SIZE:6g,spark.memory.offHeap.size=6g \
-            -d=OFFHEAP_SIZE:4g,spark.memory.offHeap.size=4g \
-            -d=OVER_ACQUIRE:0.3,spark.gluten.memory.overAcquiredMemoryRatio=0.3 \
-            -d=OVER_ACQUIRE:0.5,spark.gluten.memory.overAcquiredMemoryRatio=0.5
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q95 low memory, memory isolation on
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q95 -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            -d=ISOLATION:ON,spark.gluten.memory.isolation=true,spark.memory.storageFraction=0.1 \
-            -d=OFFHEAP_SIZE:6g,spark.memory.offHeap.size=6g \
-            -d=OFFHEAP_SIZE:4g,spark.memory.offHeap.size=4g \
-            -d=OVER_ACQUIRE:0.3,spark.gluten.memory.overAcquiredMemoryRatio=0.3 \
-            -d=OVER_ACQUIRE:0.5,spark.gluten.memory.overAcquiredMemoryRatio=0.5
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q23A/Q23B low memory
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q23a,q23b -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            -d=ISOLATION:OFF,spark.gluten.memory.isolation=false \
-            -d=OFFHEAP_SIZE:2g,spark.memory.offHeap.size=2g \
-            -d=FLUSH_MODE:DISABLED,spark.gluten.sql.columnar.backend.velox.flushablePartialAggregation=false,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=100,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0 \
-            -d=FLUSH_MODE:ABANDONED,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0 \
-            -d=FLUSH_MODE:FLUSHED,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=0.05,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=0.1,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=100,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q23A/Q23B low memory, memory isolation on
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q23a,q23b -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            -d=ISOLATION:ON,spark.gluten.memory.isolation=true,spark.memory.storageFraction=0.1 \
-            -d=OFFHEAP_SIZE:2g,spark.memory.offHeap.size=2g \
-            -d=FLUSH_MODE:DISABLED,spark.gluten.sql.columnar.backend.velox.flushablePartialAggregation=false,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=100,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0 \
-            -d=FLUSH_MODE:ABANDONED,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=1.0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=0,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0 \
-            -d=FLUSH_MODE:FLUSHED,spark.gluten.sql.columnar.backend.velox.maxPartialAggregationMemoryRatio=0.05,spark.gluten.sql.columnar.backend.velox.maxExtendedPartialAggregationMemoryRatio=0.1,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinPct=100,spark.gluten.sql.columnar.backend.velox.abandonPartialAggregationMinRows=0
-      - name: TPC-DS SF30.0 Parquet local spark3.2 Q97 low memory
-        run: |
-          cd tools/gluten-it \
-          && GLUTEN_IT_JVM_ARGS=-Xmx3G sbin/gluten-it.sh parameterized \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --queries=q97 -s=30.0 --threads=12 --shuffle-partitions=72 --iterations=1 \
-            --data-gen=skip -m=OffHeapExecutionMemory \
-            --extra-conf=spark.gluten.sql.columnar.backend.velox.IOThreads=0 \
-            -d=ISOLATION:OFF,spark.gluten.memory.isolation=false \
-            -d=ISOLATION:ON,spark.gluten.memory.isolation=true,spark.memory.storageFraction=0.1 \
-            -d=OFFHEAP_SIZE:2g,spark.memory.offHeap.size=2g \
-            -d=OFFHEAP_SIZE:1g,spark.memory.offHeap.size=1g \
-            -d=IO_THREADS:12,spark.gluten.sql.columnar.backend.velox.IOThreads=12 \
-            -d=IO_THREADS:0,spark.gluten.sql.columnar.backend.velox.IOThreads=0
-
-  tpc-test-ubuntu-randomkill:
-    needs: build-native-lib-centos-7
-    strategy:
-      fail-fast: false
-      matrix:
-        spark: [ "spark-3.2" ]

Review Comment:
   We may still need this test job. Please keep it, and update the Spark version on this line.



##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -603,17 +433,19 @@ jobs:
         run: |
          sed -i -e "s|mirrorlist=|#mirrorlist=|g" /etc/yum.repos.d/CentOS-* || true
          sed -i -e "s|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g" /etc/yum.repos.d/CentOS-* || true
-      - name: Setup java
+      - name: Setup java and maven
         run: |
           yum update -y && yum install -y java-1.8.0-openjdk-devel wget
+          source .github/workflows/util/setup-helper.sh
+          install_maven

Review Comment:
   It seems strange to install Maven here. I assume $MVN_CMD, i.e. build/mvn, will download Maven if it is not found in the build environment.



##########
docs/get-started/build-guide.md:
##########
@@ -58,18 +58,17 @@ The below parameters can be set via `-P` for mvn.
 | delta               | Build Gluten with Delta Lake support. | disabled      |
 | iceberg             | Build Gluten with Iceberg support.    | disabled      |
 | hudi                | Build Gluten with Hudi support.       | disabled      |
-| spark-3.2           | Build Gluten for Spark 3.2.           | enabled       |
-| spark-3.3           | Build Gluten for Spark 3.3.           | disabled      |
-| spark-3.4           | Build Gluten for Spark 3.4.           | disabled      |
-| spark-3.5           | Build Gluten for Spark 3.5.           | disabled      |
+| spark-3.2           | Build Gluten for Spark 3.2.           | disabled      |

Review Comment:
   This line can be removed.



##########
backends-velox/src/test/scala/org/apache/gluten/execution/VeloxHashJoinSuite.scala:
##########
@@ -92,9 +92,12 @@ class VeloxHashJoinSuite extends VeloxWholeStageTransformerSuite {
 
       // The computing is combined into one single whole stage transformer.
       val wholeStages = plan.collect { case wst: WholeStageTransformer => wst }
-      if (isSparkVersionLE("3.2")) {
-        assert(wholeStages.length == 1)
-      } else if (isSparkVersionGE("3.5")) {
+
+      if (
+        SparkShimLoader.getSparkVersion.startsWith("3.5.") ||
+        SparkShimLoader.getSparkVersion.startsWith("4.0.") ||
+        SparkShimLoader.getSparkVersion.startsWith("4.1.")
+      ) {

Review Comment:
   Nit: suggest inverting the if check, so that when higher Spark versions are supported, no new conditions need to be added:
   ```scala
   if (
     SparkShimLoader.getSparkVersion.startsWith("3.3.") ||
     SparkShimLoader.getSparkVersion.startsWith("3.4.")
   ) {
     assert(wholeStages.length == 3)
   } else {
     assert(wholeStages.length == 5)
   }
   ```
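   An equivalent way to express the inversion is with the suite's existing version helpers (a sketch; it assumes `isSparkVersionLE("3.4")` matches exactly Spark 3.3/3.4 once 3.2 support is dropped):
   ```scala
   // Same inversion via the isSparkVersionLE helper already used in this suite;
   // the else branch then covers 3.5+ and any future version without edits.
   if (isSparkVersionLE("3.4")) {
     assert(wholeStages.length == 3)
   } else {
     assert(wholeStages.length == 5)
   }
   ```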



##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -359,179 +356,12 @@ jobs:
               --extra-conf=spark.gluten.ras.enabled=true 
           "
 
-  tpc-test-ubuntu-oom:
-    needs: build-native-lib-centos-7
-    strategy:
-      fail-fast: false
-      matrix:
-        spark: [ "spark-3.2" ]

Review Comment:
   ditto



##########
backends-velox/src/test/scala/org/apache/gluten/execution/VeloxHashJoinSuite.scala:
##########
@@ -74,7 +74,7 @@ class VeloxHashJoinSuite extends VeloxWholeStageTransformerSuite {
     }
   }
 
-  testWithMinSparkVersion("generate hash join plan - v2", "3.2") {
+  testWithMinSparkVersion("generate hash join plan - v2", "3.3") {

Review Comment:
   In the existing code, using `testWithMinSparkVersion` with `"3.2"` was redundant, since the minimum supported Spark version prior to this PR was already 3.2. By switching to a standard `test()` call, we also avoid the need for a future update when Spark 3.3 support is eventually dropped. Similarly, any use of `testWithMinSparkVersion(..., "3.3")` can be simplified to `test()` in this PR.
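   
   For instance, the simplification might look like this (a sketch; the real test body is elided):
   ```scala
   // Before: a version guard that now just restates the minimum supported Spark.
   testWithMinSparkVersion("generate hash join plan - v2", "3.3") {
     // ... test body ...
   }
   
   // After: a plain test() call, which needs no edit when 3.3 support is dropped.
   test("generate hash join plan - v2") {
     // ... test body ...
   }
   ```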



##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -198,27 +193,29 @@ jobs:
          $GITHUB_WORKSPACE/$MVN_CMD clean install -P${{ matrix.spark }} -P${{ matrix.java }}
          GLUTEN_IT_JVM_ARGS=-Xmx5G sbin/gluten-it.sh queries-compare \
            --local --preset=velox --benchmark-type=h --error-on-memleak --off-heap-size=10g -s=1.0 --threads=16 --iterations=1 \
-          && GLUTEN_IT_JVM_ARGS=-Xmx6G sbin/gluten-it.sh queries-compare \
-            --local --preset=velox --benchmark-type=ds --error-on-memleak --off-heap-size=10g -s=1.0 --threads=16 --iterations=1
+          && if [ "${{ matrix.spark }}" = "spark-4.0" ]; then \
+               GLUTEN_IT_JVM_ARGS=-Xmx6G sbin/gluten-it.sh queries-compare \
+                 --local --preset=velox --benchmark-type=ds --error-on-memleak --off-heap-size=10g -s=1.0 --threads=16 --iterations=1 \
+                 --excluded-queries=q72; \

Review Comment:
   Curious why these changes are needed. Does the test failure for Spark 4.0 happen only occasionally? Maybe we can move this change to a separate PR, and ignore such failures if they are indeed not related to this PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

