Hello, when I use Kylin 1.5.0 to build a cube, I see that the MapReduce jobs take a lot of time. As I understand it, the result of each MapReduce job in Hadoop is saved to HDFS, so the jobs have to read from and write to HDFS repeatedly. Spark, by contrast, keeps intermediate results in memory, which could save time.
I want to know whether I can use Spark instead of Hadoop MapReduce when I build cubes with Kylin.
If this is possible, please tell me how to do it. Thanks.
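(For context: a Spark cube engine was only added in later Kylin releases, not in 1.5.0. Assuming an upgrade to a Kylin version that ships it, engine selection would be a one-line change in `kylin.properties` — the property name and engine IDs below are taken from my reading of the Kylin 2.x configuration and may differ by version, so treat this as a sketch:)

```
# kylin.properties -- sketch, assumes a Kylin release with the Spark engine
# (not available in 1.5.0); engine IDs: 2 = MapReduce, 4 = Spark
kylin.engine.default=4
```

The engine can reportedly also be chosen per cube in the cube designer's advanced settings, so MapReduce and Spark cubes can coexist on the same instance.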

--
View this message in context: 
http://apache-kylin.74782.x6.nabble.com/Can-kylin-build-cube-based-on-spark-instead-of-hadoop-tp5330.html
Sent from the Apache Kylin mailing list archive at Nabble.com.