Github user steveloughran commented on the issue: https://github.com/apache/spark/pull/20923

I can and do build Hadoop with this local version enabled, so it's easy enough to set things up locally. Indeed, the ability to change the Hadoop version ([HADOOP-13852](https://issues.apache.org/jira/browse/HADOOP-13852)) came about precisely because I was the first person to try that hadoop-3 + spark test, with a precursor profile for this.

Get this in and things are set up for the Hive work, as everything else is ready for it. We've decoupled the work, and for those people who do have a compatible Hadoop/Hive setup, this provides a standard profile for them to use, instead of having to write their own, determine ZooKeeper and Curator versions, etc.
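For readers unfamiliar with the workflow being described, a rough sketch of building Spark against a locally built Hadoop follows. The profile name (`hadoop-3.1`) and the snapshot version string are assumptions for illustration, not taken from this PR; check the actual POM for the profile the patch introduces.

```shell
# Sketch only: build Hadoop locally and install the artifacts
# into the local Maven repository (~/.m2/repository).
cd hadoop
mvn install -DskipTests -DskipShade

# Then build Spark against that local Hadoop version.
# "-Phadoop-3.1" and the version string below are illustrative;
# use whatever profile/version this PR's pom.xml actually defines.
cd ../spark
./build/mvn -Phadoop-3.1 -Dhadoop.version=3.1.0-SNAPSHOT \
  -DskipTests clean package
```

The point of a standard profile is that the curated transitive versions (ZooKeeper, Curator, etc.) come from the POM rather than each developer working them out by hand.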