Hi,
Are there any logs in the Spark driver and executors that would help
provide some context? While diagnosing, increasing the log level to DEBUG
might be useful as well.
Also, the snippet you posted is a 'lazy' operation. In theory it should
return quickly, and only evaluate when some sort of action, such as
count() or collect(), is performed on the result.
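As a rough sketch of what I mean, assuming a DataFrame df has already been
loaded via the plugin (the column name ID is just a placeholder):

// The filter is a transformation; it returns immediately without reading data.
val filtered = df.filter(df("ID") > 100)
// Only an action like count() forces Spark to actually run the scan.
val rowCount = filtered.count()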
Hi Hussain,
I'm not familiar with the Spark temporary table syntax. Perhaps you can
work around it by using other options, such as the DataFrame.save()
functionality which is documented [1] and unit tested [2].
I suspect what you're encountering is a valid use case. If you could also
file a JIRA describing the issue, that would be helpful.
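For reference, here's a rough, untested sketch of writing a DataFrame back to
a Phoenix table through the phoenix-spark integration; the table name
OUTPUT_TABLE and the zkUrl value are placeholders, and df is assumed to be
the DataFrame you want to persist:

import org.apache.spark.sql.SaveMode

// The plugin upserts into an existing Phoenix table; the docs show
// SaveMode.Overwrite being used for this.
df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .options(Map("table" -> "OUTPUT_TABLE", "zkUrl" -> "localhost:2181"))
  .save()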
It is not an either-or; you can use both - hence the plugin. Phoenix is great
at OLTP-type workloads, and Spark is better at OLAP and machine learning.
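As a rough illustration, assuming a SQLContext sqlContext, a Phoenix table
named TABLE1, and ZooKeeper at localhost:2181, the plugin lets you pull
Phoenix data into Spark for analytics along these lines:

// Load a Phoenix table as a Spark DataFrame via the phoenix-spark data source.
val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .options(Map("table" -> "TABLE1", "zkUrl" -> "localhost:2181"))
  .load()
// From here you can run Spark SQL, MLlib, etc. over the Phoenix data.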
-chris
> On Nov 16, 2016, at 6:56 PM, Cheyenne Forbes
> wrote:
>
> so why would I choose Phoenix over
So why would I choose Phoenix over Spark?
Spark is much, much more than just a way to perform SQL.
-chris
> On Nov 16, 2016, at 7:13 AM, Cheyenne Forbes
> wrote:
>
> Why would/should I care about spark/spark plugin when I already have phoenix?
Why would/should I care about Spark or the Spark plugin when I already have
Phoenix?
I am trying to insert into a temporary table created on a Spark (v1.6) DataFrame
loaded using the Phoenix-Spark (v4.4) plugin. Below is the code:
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext

val sc = new SparkContext("local", "phoenix-test")
val configuration = new Configuration()
configuration.set("zookeeper.znode.parent", "/hbase-unsecure")