PHILO-HE commented on code in PR #11373:
URL: https://github.com/apache/incubator-gluten/pull/11373#discussion_r2689074720
##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -1601,3 +1601,207 @@ jobs:
**/target/*.log
**/gluten-ut/**/hs_err_*.log
**/gluten-ut/**/core.*
+ hdfs-test-ubuntu:
+ needs: build-native-lib-centos-7
+ strategy:
+ fail-fast: false
+ matrix:
+ os: [ "ubuntu:20.04", "ubuntu:22.04" ]
+ spark: [ "spark-3.2", "spark-3.3", "spark-3.4", "spark-3.5",
"spark-4.0", "spark-4.1" ]
+ java: [ "java-8", "java-11", "java-17", "java-21" ]
Review Comment:
It looks excessive to run the HDFS test across all combinations of this matrix.
Another thought: instead of adding a new job, we could enable the HDFS test within
the existing `tpc-test-ubuntu` job and target only a single matrix combination.
For example, when os is ubuntu:22.04, spark is 3.5, and java is 17, we can trigger
the HDFS preparation script and then run the full H/DS test against HDFS. That
should give sufficient coverage while reducing CI resource cost.
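A minimal sketch of how such a gated step inside `tpc-test-ubuntu` might look (the
step name and the preparation-script path below are illustrative assumptions, not
existing files):

    - name: Setup and start HDFS
      # Hypothetical guard: only one matrix combination runs the HDFS path.
      if: matrix.os == 'ubuntu:22.04' && matrix.spark == 'spark-3.5' && matrix.java == 'java-17'
      shell: bash
      run: |
        # Hypothetical helper script; adjust to wherever the HDFS preparation logic lives.
        bash .github/workflows/util/setup_hdfs.sh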
##########
.github/workflows/velox_backend_x86.yml:
##########
@@ -1601,3 +1601,207 @@ jobs:
**/target/*.log
**/gluten-ut/**/hs_err_*.log
**/gluten-ut/**/core.*
+ hdfs-test-ubuntu:
+ needs: build-native-lib-centos-7
+ strategy:
+ fail-fast: false
+ matrix:
+ os: [ "ubuntu:20.04", "ubuntu:22.04" ]
+ spark: [ "spark-3.2", "spark-3.3", "spark-3.4", "spark-3.5",
"spark-4.0", "spark-4.1" ]
+ java: [ "java-8", "java-11", "java-17", "java-21" ]
+ exclude:
+ - spark: spark-3.2
+ java: java-21
+ - spark: spark-3.3
+ java: java-21
+ - spark: spark-3.4
+ java: java-21
+ - spark: spark-3.5
+ java: java-21
+ - spark: spark-3.2
+ java: java-17
+ - spark: spark-3.4
+ java: java-17
+ - spark: spark-3.5
+ java: java-17
+ - spark: spark-3.2
+ java: java-11
+ - spark: spark-3.3
+ java: java-11
+ - spark: spark-3.4
+ java: java-11
+ - os: ubuntu:20.04
+ java: java-17
+ - os: ubuntu:20.04
+ java: java-11
+ - os: ubuntu:20.04
+ java: java-21
+ - spark: spark-4.0
+ java: java-8
+ - spark: spark-4.0
+ java: java-11
+ - spark: spark-4.1
+ java: java-8
+ - spark: spark-4.1
+ java: java-11
+ runs-on: ubuntu-22.04
+ container: ${{ matrix.os }}
+ steps:
+ - uses: actions/checkout@v2
+ - name: Download All Native Artifacts
+ uses: actions/download-artifact@v4
+ with:
+ name: velox-native-lib-centos-7-${{github.sha}}
+ path: ./cpp/build/releases/
+ - name: Setup tzdata
+ run: |
+ #sed -i 's|http://archive|http://us.archive|g' /etc/apt/sources.list
+ if [ "${{ matrix.os }}" = "ubuntu:22.04" ]; then
+ apt-get update
+ TZ="Etc/GMT" DEBIAN_FRONTEND=noninteractive apt-get install -y
tzdata
+ fi
+ - name: Download All Arrow Jar Artifacts
+ uses: actions/download-artifact@v4
+ with:
+ name: arrow-jars-centos-7-${{github.sha}}
+ path: /root/.m2/repository/org/apache/arrow/
+ - name: Setup java
+ run: |
+ if [ "${{ matrix.java }}" = "java-17" ]; then
+ apt-get update && apt-get install -y openjdk-17-jdk wget
+ apt remove openjdk-11* -y
+ elif [ "${{ matrix.java }}" = "java-21" ]; then
+ apt-get update && apt-get install -y openjdk-21-jdk wget
+ elif [ "${{ matrix.java }}" = "java-11" ]; then
+ apt-get update && apt-get install -y openjdk-11-jdk wget
+ else
+ apt-get update && apt-get install -y openjdk-8-jdk wget
+ apt remove openjdk-11* -y
+ fi
+ - name: Setup gluten
+ run: |
+ cd $GITHUB_WORKSPACE/
+ export JAVA_HOME=/usr/lib/jvm/${{ matrix.java }}-openjdk-amd64
+ echo "JAVA_HOME: $JAVA_HOME"
+ case "${{ matrix.spark }}" in
+ spark-4.0|spark-4.1)
$MVN_CMD clean install -P${{ matrix.spark }} -P${{ matrix.java }} -Pscala-2.13 -Pbackends-velox -DskipTests
+ ;;
+ *)
$MVN_CMD clean install -P${{ matrix.spark }} -P${{ matrix.java }} -Pbackends-velox -DskipTests
+ ;;
+ esac
+ cd $GITHUB_WORKSPACE/tools/gluten-it
$GITHUB_WORKSPACE/$MVN_CMD clean install -P${{ matrix.spark }} -P${{ matrix.java }}
+ - name: Install Hadoop
+ shell: bash
+ run: |
+ set -euxo pipefail
+
+ apt-get update
apt-get install -y wget tar gzip procps netcat-openbsd openjdk-${{ matrix.java }}-jdk
+
+ HADOOP_VERSION=3.3.6
+ mkdir -p /opt
+ cd /opt
+ wget -q https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
+ tar -xzf hadoop-${HADOOP_VERSION}.tar.gz
+ ln -sfn hadoop-${HADOOP_VERSION} hadoop
+
+ JAVA_HOME="/usr/lib/jvm/java-${{ matrix.java }}-openjdk-amd64"
+
+ cat >/etc/profile.d/hadoop.sh <<EOF
+ export HADOOP_HOME=/opt/hadoop
+ export PATH=\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin:\$PATH
+ export JAVA_HOME=${JAVA_HOME}
+ export HADOOP_COMMON_LIB_NATIVE_DIR=\$HADOOP_HOME/lib/native
+ export LD_LIBRARY_PATH=\$HADOOP_HOME/lib/native:\${LD_LIBRARY_PATH:-}
+ export HADOOP_OPTS="-Djava.library.path=\$HADOOP_HOME/lib/native
\${HADOOP_OPTS:-}"
+ EOF
+
+ source /etc/profile.d/hadoop.sh
+ hadoop version
+ test -f "$HADOOP_HOME/lib/native/libhdfs.so"
+ ls -la "$HADOOP_HOME/lib/native" | head -200
+
+ - name: Configure & start HDFS
+ shell: bash
+ run: |
+ set -euxo pipefail
+ source /etc/profile.d/hadoop.sh
+
+ export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
+
+ cat > "$HADOOP_CONF_DIR/core-site.xml" <<'EOF'
+ <configuration>
+ <property>
+ <name>fs.defaultFS</name>
+ <value>hdfs://127.0.0.1:9000</value>
+ </property>
+ <property>
+ <name>hadoop.tmp.dir</name>
+ <value>/tmp/hadoop</value>
+ </property>
+ </configuration>
+ EOF
+
+ cat > "$HADOOP_CONF_DIR/hdfs-site.xml" <<'EOF'
+ <configuration>
+ <property>
+ <name>dfs.replication</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>dfs.namenode.name.dir</name>
+ <value>file:/tmp/hdfs/nn</value>
+ </property>
+ <property>
+ <name>dfs.datanode.data.dir</name>
+ <value>file:/tmp/hdfs/dn</value>
+ </property>
+ <property>
+ <name>dfs.permissions</name>
+ <value>false</value>
+ </property>
+ <property>
+ <name>dfs.webhdfs.enabled</name>
+ <value>true</value>
+ </property>
+ </configuration>
+ EOF
+
+ cat > "$HADOOP_CONF_DIR/hadoop-env.sh" <<EOF
+ export JAVA_HOME=${JAVA_HOME}
+ EOF
+
+ mkdir -p /tmp/hdfs/nn /tmp/hdfs/dn /tmp/hadoop
+
+ "$HADOOP_HOME/bin/hdfs" namenode -format -force -nonInteractive
+ "$HADOOP_HOME/bin/hdfs" --daemon start namenode
+ "$HADOOP_HOME/bin/hdfs" --daemon start datanode
Review Comment:
I suggest moving these HDFS setup commands into `setup_helper.sh` located under
workflows/util. This would keep the workflow file clean and make the setup logic
easier to maintain.
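For example, the install/configure/start logic above could be wrapped in a function
in that script (the `setup_hdfs` function name and the exact script path below are
only illustrations), so the workflow step would reduce to something like:

    - name: Configure & start HDFS
      shell: bash
      run: |
        # setup_hdfs is a hypothetical helper to be added to setup_helper.sh
        source .github/workflows/util/setup_helper.sh
        setup_hdfs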