Is it possible to use our own metastore instead of the Hive Metastore with
Spark SQL?
Can you please point me to some docs or code I can look at to get it done?
We are moving away from everything Hadoop.
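Not knowing the exact setup, one built-in option as a sketch, assuming Spark 2.x or later: Spark SQL can run against its own in-memory catalog instead of a Hive metastore, at the cost that table definitions do not persist across sessions. The config key must be set before the session is created.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: select Spark's built-in in-memory catalog instead of Hive.
// Valid values for this key are "hive" and "in-memory"; with
// "in-memory" no Hive metastore (and no Hadoop metastore service)
// is contacted, but catalog entries live only for this session.
val spark = SparkSession.builder()
  .appName("no-hive-metastore")
  .config("spark.sql.catalogImplementation", "in-memory")
  .getOrCreate()
```

For a truly custom, persistent catalog, newer Spark versions (3.x) also expose a catalog plugin API (`spark.sql.catalog.<name>` pointing at a class implementing the catalog interfaces), which is the documented extension point to look at.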
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
Does MLlib allow users to specify their own loss functions? We specifically
need this for random forests.
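Not a direct answer for random forests (MLlib's `RandomForest` only exposes the impurity measure: gini, entropy, or variance, not a pluggable loss), but the RDD-based gradient-boosted trees do accept a loss through `BoostingStrategy`. A sketch using the built-in `AbsoluteError` loss; `trainingData` is assumed to be an `RDD[LabeledPoint]`:

```scala
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.GradientBoostedTrees
import org.apache.spark.mllib.tree.configuration.BoostingStrategy
import org.apache.spark.mllib.tree.loss.AbsoluteError
import org.apache.spark.rdd.RDD

// Sketch: GBTs optimize an explicit loss, so the loss is configurable
// here in a way it is not for RandomForest.
def trainWithAbsoluteLoss(trainingData: RDD[LabeledPoint]) = {
  val boostingStrategy = BoostingStrategy.defaultParams("Regression")
  boostingStrategy.loss = AbsoluteError // swap in another Loss here
  GradientBoostedTrees.train(trainingData, boostingStrategy)
}
```

A fully custom loss would mean implementing the `org.apache.spark.mllib.tree.loss.Loss` trait and assigning it the same way; check the API docs for your Spark version for the exact trait signature.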
--
What kind of latency are people achieving in production with Spark
Streaming? Is it in the 1-second-plus range, or have people been able to
achieve latencies of, say, 250 ms?
Any best practices for achieving sub-second latency, if that is even possible?
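One relevant knob, as a sketch: in Spark Streaming's micro-batch model the batch interval is a hard floor on end-to-end latency, so a 250 ms target needs a batch interval at or below that, and every batch must finish processing within the interval or the pipeline falls behind.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Milliseconds, StreamingContext}

val conf = new SparkConf().setAppName("low-latency").setMaster("local[2]")

// Sketch: a 250 ms batch interval. Actual end-to-end latency is
// batch interval + processing time + scheduling delay, so sub-second
// is achievable only if each batch processes in well under 250 ms.
val ssc = new StreamingContext(conf, Milliseconds(250))
```

In practice that means keeping per-batch work small (cheap serialization, pre-warmed executors, no blocking external calls), and monitoring the scheduling delay in the streaming UI to confirm batches are keeping up.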
--
I am running the following code on Spark 1.3.0; it is from
https://spark.apache.org/docs/1.3.0/ml-guide.html
On running val model1 = lr.fit(training.toDF) I get
java.lang.UnsupportedOperationException: empty collection
What could be the reason?
import org.apache.spark.{SparkConf,
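A guess at the usual cause, as a sketch: "empty collection" from `fit()` typically means a reduce ran over zero rows, i.e. the training set was empty (a bad load path or an over-aggressive filter upstream). `training` and `lr` below are the names from the quoted ml-guide example:

```scala
// Sketch (Spark 1.3-era API): verify the input is non-empty before
// fitting, to distinguish a data problem from an MLlib bug.
val n = training.toDF.count()
require(n > 0, "training set is empty")
val model1 = lr.fit(training.toDF)
```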