HyukjinKwon commented on a change in pull request #31303:
URL: https://github.com/apache/spark/pull/31303#discussion_r563230598



##########
File path: .github/workflows/build_and_test.yml
##########
@@ -430,3 +430,38 @@ jobs:
     - name: Build with SBT
       run: |
        ./build/sbt -Pyarn -Pmesos -Pkubernetes -Phive -Phive-thriftserver -Phadoop-cloud -Pkinesis-asl -Phadoop-2.7 compile test:compile
+
+  tpcds1g:
+    name: Benchmark TPC-DS with 1GB scale factor
+    runs-on: ubuntu-20.04
+    continue-on-error: true
+    steps:
+      - name: Checkout Spark repository
+        uses: actions/checkout@v2
+      - name: Checkout tpcds-kit repository
+        uses: actions/checkout@v2
+        with:
+          repository: databricks/tpcds-kit
+          path: ./tpcds-kit
+      - name: Checkout spark-sql-perf repository
+        uses: actions/checkout@v2
+        with:
+          repository: wangyum/spark-sql-perf

Review comment:
       Can you describe what these repos do in the PR description? Also, what does this fork do? It looks like the only diff is https://github.com/wangyum/spark-sql-perf/commit/abf08ebc0d1006e3511fe6b3608935c627b90741. Can you just use the original repo and keep the data generation Scala file somewhere in Spark?
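
       For illustration, a data generation script kept in Spark could be a minimal sketch along these lines, using the original databricks/spark-sql-perf library. The `TPCDSTables`/`genData` names follow the spark-sql-perf API; the object name, paths, and parameter values here are assumptions for the sketch, not the actual file from this PR:

```scala
import org.apache.spark.sql.SparkSession
import com.databricks.spark.sql.perf.tpcds.TPCDSTables

// Hypothetical generator that could live in the Spark repo instead of a fork.
object GenTPCDSData {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Generate TPC-DS data")
      .getOrCreate()

    // dsdgenDir points at the tpcds-kit checkout from the workflow above;
    // scaleFactor "1" matches the job's 1GB scale factor.
    val tables = new TPCDSTables(
      spark.sqlContext,
      dsdgenDir = "./tpcds-kit/tools",
      scaleFactor = "1")

    tables.genData(
      location = "/tmp/tpcds-sf1",       // illustrative output path
      format = "parquet",
      overwrite = true,
      partitionTables = false,
      clusterByPartitionColumns = false,
      filterOutNullPartitionValues = false,
      tableFilter = "",
      numPartitions = 4)

    spark.stop()
  }
}
```

       Keeping a file like this in-tree would let the workflow check out only the unmodified upstream repos.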




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


