What about the Hive dependency? We use the ThriftServer, SerDes, and even the
parser/execution logic from Hive. What direction will we take for this part?
--
+1
Tested by building with Hadoop 2.7.0 and running the following tests:
WordCount in yarn-client/yarn-cluster mode works fine;
Basic SQL queries pass;
"spark.sql.autoBroadcastJoinThreshold" works fine;
Thrift Server is fine;
Streaming with Kafka is good;
External shuffle in YARN mode is fine.
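For reference, the broadcast-join threshold mentioned above can be exercised with a snippet like the following. This is a minimal sketch, not the exact test that was run; the app name, table sizes, and threshold value are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: check that spark.sql.autoBroadcastJoinThreshold takes effect.
val spark = SparkSession.builder()
  .appName("BroadcastThresholdCheck") // illustrative name
  .master("local[*]")
  .getOrCreate()

// Lower the threshold to 1 MB; any table smaller than this on one side
// of a join should be broadcast to the executors.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", (1024 * 1024).toString)

val small = spark.range(100).toDF("id")     // well under 1 MB
val large = spark.range(1000000).toDF("id")

// With the small side under the threshold, the physical plan should use a
// broadcast hash join rather than a sort-merge join.
val joined = large.join(small, "id")
joined.explain() // inspect the plan for BroadcastHashJoin
```

Setting the threshold to -1 disables broadcast joins entirely, which is a quick way to confirm the configuration is actually being read.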