SbloodyS commented on a change in pull request #7974:
URL: https://github.com/apache/dolphinscheduler/pull/7974#discussion_r784027244
##########
File path: .github/workflows/e2e.yml
##########
@@ -27,9 +27,44 @@ concurrency:
   group: e2e-${{ github.event.pull_request.number || github.ref }}
   cancel-in-progress: true
+
 jobs:
+  build:
+    name: E2E-Build
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
+      - name: Cache local Maven repository
+        uses: actions/cache@v2
+        with:
+          path: ~/.m2/repository
+          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
+          restore-keys: ${{ runner.os }}-maven-
+      - name: Build Image
+        run: |
+          ./mvnw -B clean install \
+                 -Dmaven.test.skip \
+                 -Dmaven.javadoc.skip \
+                 -Dmaven.checkstyle.skip \
+                 -Pdocker,release -Ddocker.tag=ci \
+                 -pl dolphinscheduler-standalone-server -am
+      - name: Export Docker Images
+        run: |
+          docker save apache/dolphinscheduler-standalone-server:ci -o /tmp/standalone-image.tar \
+            && du -sh /tmp/standalone-image.tar
+      - uses: actions/upload-artifact@v2
+        name: Upload Docker Images
+        with:
+          name: standalone-image
+          path: /tmp/standalone-image.tar
+          retention-days: 1
   e2e:
     name: ${{ matrix.case.name }}
Review comment:
https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
```
You can specify the runner type for each job in a workflow. Each job in a
workflow executes in a fresh instance of the virtual machine. All steps in the
job execute in the same instance of the virtual machine, allowing the actions
in that job to share information using the filesystem.
```
This document only explains the difference between steps and jobs; it does
not explain how a matrix fits into that picture. I am also quite confused
about that.
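As far as I understand (not from that page, so treat this as my reading of it): each entry in `strategy.matrix` is expanded into its own job, and every job, matrix-generated or not, gets a fresh VM; only the steps inside one job share a filesystem. That is why the built image has to be handed between jobs as an artifact. A minimal sketch, with illustrative job and case names rather than the ones in this workflow:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Produce a file inside this job's VM.
      - run: echo "built" > /tmp/out.txt
      # Publish it as an artifact so other jobs (other VMs) can fetch it.
      - uses: actions/upload-artifact@v2
        with:
          name: out
          path: /tmp/out.txt
  e2e:
    needs: build          # wait for build, but still run on a separate VM
    runs-on: ubuntu-latest
    strategy:
      matrix:
        case: [ CaseA, CaseB ]   # expands into one independent job per case
    steps:
      # Each matrix job starts from a clean filesystem, so it must
      # download the artifact; that is the only state shared with build.
      - uses: actions/download-artifact@v2
        with:
          name: out
          path: /tmp
      - run: cat /tmp/out.txt
```

So the matrix is just shorthand for declaring several near-identical jobs at once; it does not change the job/step isolation model described in the quoted doc.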
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]