This is very experimental and mostly unsupported, but you can start the
JDBC server from within your own programs
<https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala#L45>
by passing it the HiveContext.
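Roughly, a sketch of what that could look like (untested; `startWithContext` is the entry point exposed in later Spark releases — on the version discussed here you may need to call into the linked HiveThriftServer2 code directly, and the table and class names below are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

object InProcessThriftServer {
  // Case class used to give the RDD a schema via reflection.
  case class Record(key: String, value: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("jdbc-in-process"))
    val hiveContext = new HiveContext(sc)
    import hiveContext.createSchemaRDD  // implicit RDD -> SchemaRDD conversion

    // Register a SchemaRDD as a temp table and cache it in memory only.
    val data = sc.parallelize(Seq(Record("a", 1), Record("b", 2)))
    data.registerTempTable("records")
    hiveContext.cacheTable("records")

    // Start the Thrift JDBC server on this same HiveContext, so JDBC
    // clients (e.g. beeline) can query the in-memory "records" table.
    HiveThriftServer2.startWithContext(hiveContext)
  }
}
```

The point is that the server must share the HiveContext that registered the temp table — temp tables are scoped to their context, which is why a separately launched Thrift server can't see them.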

On Fri, Oct 24, 2014 at 2:07 PM, ankits <ankitso...@gmail.com> wrote:

> Thanks for your response Michael.
>
> I'm still not clear on all the details - in particular, how do I take a
> temp
> table created from a SchemaRDD and allow it to be queried using the Thrift
> JDBC server? From the Hive guides, it looks like it only supports loading
> data from files, but I want to query tables stored in memory only via JDBC.
> Is that possible?
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Is-SparkSQL-JDBC-server-a-good-approach-for-caching-tp17196p17235.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
