dongjoon-hyun commented on a change in pull request #35121:
URL: https://github.com/apache/spark/pull/35121#discussion_r780047564
##########
File path: .github/workflows/build_and_test.yml
##########
@@ -96,15 +96,39 @@ jobs:
echo '::set-output name=hadoop::hadoop3'
fi
+ build-precondition:
+ name: Check a code change
+ runs-on: ubuntu-20.04
+ outputs:
+ required: ${{ steps.set-outputs.outputs.required }}
+ steps:
+ - name: Checkout Spark repository
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
+ repository: apache/spark
+ ref: master
+ - name: Sync the current branch with the latest in Apache Spark
+ if: github.repository != 'apache/spark'
+ run: |
+ echo "APACHE_SPARK_REF=$(git rev-parse HEAD)" >> $GITHUB_ENV
+ git fetch https://github.com/$GITHUB_REPOSITORY.git ${GITHUB_REF#refs/heads/}
+ git -c user.name='Apache Spark Test Account' -c user.email='[email protected]' merge --no-commit --progress --squash FETCH_HEAD
+ git -c user.name='Apache Spark Test Account' -c user.email='[email protected]' commit -m "Merged commit"
+ - name: Check all modules except 'docs'
+ id: set-outputs
+ run: |
+ echo "::set-output name=required::$(./dev/is-changed.py -m avro,build,catalyst,core,docker-integration-tests,examples,graphx,hadoop-cloud,hive,hive-thriftserver,kubernetes,kvstore,launcher,mesos,mllib,mllib-local,network-common,network-shuffle,pyspark-core,pyspark-ml,pyspark-mllib,pyspark-pandas,pyspark-pandas-slow,pyspark-resource,pyspark-sql,pyspark-streaming,repl,sketch,spark-ganglia-lgpl,sparkr,sql,sql-kafka-0-10,streaming,streaming-kafka-0-10,streaming-kinesis-asl,tags,unsafe,yarn)"
Review comment:
Yes, it's really a missing part.
BTW, `is-changed.py` is not designed for multi-module support. Each
precondition has its own `-m` option here, and the values are not the same.
So, this is the best and most flexible way for now.
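For readers unfamiliar with the script: a `dev/is-changed.py`-style check conceptually maps module names to source path prefixes and reports whether any changed file falls under one of the modules passed via `-m`. A minimal sketch of that idea, assuming a hypothetical module map (`MODULE_PATHS`) and helper names; this is not Spark's actual implementation:

```python
#!/usr/bin/env python3
# Hedged sketch of an is-changed.py-style check. The module map and
# function names below are illustrative assumptions, not Spark's code.
import subprocess

# Hypothetical mapping from module name to source path prefixes.
MODULE_PATHS = {
    "core": ["core/"],
    "sql": ["sql/core/", "sql/catalyst/"],
    "docs": ["docs/"],
}

def changed_files(base_ref="master"):
    """List files changed relative to base_ref using git diff."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

def is_changed(modules, files):
    """Return 'true' if any changed file falls under one of the modules."""
    prefixes = [p for m in modules for p in MODULE_PATHS.get(m, [])]
    hit = any(f.startswith(tuple(prefixes)) for f in files) if prefixes else False
    return "true" if hit else "false"

# A change under sql/core/ matches the 'sql' module's prefixes.
print(is_changed(["core", "sql"], ["sql/core/src/Main.scala"]))  # prints "true"
```

Printing `true`/`false` matches how the workflow step above captures the result into the `required` output via command substitution.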
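For context, the `required` output of the precondition job would presumably gate downstream jobs through `needs`. A minimal sketch, with a hypothetical downstream job name (not taken from this PR):

```yaml
# Hypothetical consumer of the build-precondition job's output.
build:
  needs: build-precondition
  if: needs.build-precondition.outputs.required == 'true'
  runs-on: ubuntu-20.04
  steps:
    - run: echo "Running the full build because a tracked module changed"
```

This pattern lets each heavyweight job skip itself entirely when its own `-m` module list reports no changes.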
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]