Github user skonto commented on the issue:
https://github.com/apache/spark/pull/21462
Thank you @foxish. I was also thinking of adding the following:
    if [ -n "$HADOOP_CONFIG_URL" ]; then
      echo "Setting up Hadoop config files..."
      mkdir -p /etc/hadoop/conf
      # Quote the URL so it survives word splitting if it contains special characters
      wget "$HADOOP_CONFIG_URL/core-site.xml" -P /etc/hadoop/conf
      wget "$HADOOP_CONFIG_URL/hdfs-site.xml" -P /etc/hadoop/conf
      export HADOOP_CONF_DIR=/etc/hadoop/conf
    fi
I have already tested it with org.apache.spark.examples.DFSReadWriteTest.
At least this gives us some basic support for HDFS integration. Are you OK
with adding that?
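For reference, a submission exercising this would look roughly like the sketch below. The image name, config-server URL, master address, and HDFS path are placeholders, not values from this PR; the env var is propagated to the driver and executor pods through Spark's spark.kubernetes.driverEnv.* and spark.executorEnv.* settings:

```shell
# Hypothetical values -- adjust the master URL, image, and config server
# for your cluster before running.
export HADOOP_CONFIG_URL=http://config-server.default.svc/hadoop

spark-submit \
  --master k8s://https://kubernetes.default.svc \
  --deploy-mode cluster \
  --name dfs-read-write-test \
  --class org.apache.spark.examples.DFSReadWriteTest \
  --conf spark.kubernetes.container.image=spark:latest \
  --conf spark.kubernetes.driverEnv.HADOOP_CONFIG_URL="$HADOOP_CONFIG_URL" \
  --conf spark.executorEnv.HADOOP_CONFIG_URL="$HADOOP_CONFIG_URL" \
  local:///opt/spark/examples/jars/spark-examples.jar \
  /opt/spark/README.md hdfs:///tmp/dfs-test
```

DFSReadWriteTest reads the given local file, writes it to the DFS directory, and reads it back, so it is a quick end-to-end check that the downloaded core-site.xml and hdfs-site.xml are actually picked up.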