I am working with a Lambda Architecture: Hadoop on the batch layer, Storm on
the speed layer, and Cassandra storing the precomputed views from both
layers.

I understand that Spark can substitute for Hadoop, but for the moment I would
prefer not to change the batch layer.

I would like to execute SQL queries against Cassandra using Spark SQL. Is it
possible to run just Spark SQL on top of Cassandra, without the rest of
Spark? My goal is to access Cassandra data with BI tools, and Spark SQL looks
like the right tool for that.
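For context, what I have in mind would look roughly like this, using the
DataStax spark-cassandra-connector. This is only a sketch: the keyspace,
table, and host here ("lambda", "batch_views", "127.0.0.1") are placeholders,
and as I understand it Spark SQL is a module of Spark, so some Spark runtime
(even just a local master) would still be required.

```scala
import org.apache.spark.sql.SparkSession

object CassandraSqlSketch {
  def main(args: Array[String]): Unit = {
    // A local[*] master avoids needing a full Spark cluster.
    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
      .getOrCreate()

    // Register the Cassandra table as a temporary view,
    // then query it with plain SQL.
    spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "lambda", "table" -> "batch_views")) // placeholders
      .load()
      .createOrReplaceTempView("batch_views")

    spark.sql("SELECT * FROM batch_views LIMIT 10").show()
  }
}
```

(This uses the connector's DataFrame API; I gather older connector releases
exposed a CassandraSQLContext instead.)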

--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-on-Cassandra-tp17812.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
