Hi Josh,
Thanks for the info. Views do work, and that helps with some of our use
cases anyway.
I was hoping to use Spark as a way to run long-running queries without
putting too much load on the cluster, but since both the Spark and
MapReduce integrations use the same underlying JDBC connections (as far
Hi Craig,
I think this is an open issue in PHOENIX-2648
(https://issues.apache.org/jira/browse/PHOENIX-2648).
There seems to be a workaround by using a 'VIEW' instead, as mentioned in
that ticket.
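If it helps, the workaround looks roughly like this (an untested sketch with hypothetical table and column names; the idea is to declare the dynamic column as a regular column of a Phoenix VIEW, so JDBC-based clients such as the Spark integration can see it):

```sql
-- Base table; DYN_COL is not part of the schema and would normally
-- only be supplied at query time via Phoenix's dynamic-column syntax.
CREATE TABLE MY_TABLE (ID VARCHAR PRIMARY KEY, VAL VARCHAR);

-- Workaround: a view that declares the dynamic column as a regular
-- column, making it visible to clients that can't pass dynamic-column
-- syntax through.
CREATE VIEW MY_VIEW (DYN_COL VARCHAR) AS SELECT * FROM MY_TABLE;

-- Query through the view instead of using dynamic-column syntax:
SELECT ID, DYN_COL FROM MY_VIEW;
```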
Good luck,
Josh
On Thu, Feb 23, 2017 at 11:56 PM, Craig Roberts
Hi all,
I've got a (very) basic Spark application in Python that selects some
columns from my Phoenix table. I can't quite figure out how (or even
whether I can) select dynamic columns through it, however.
Here's what I have:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
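For reference, a minimal sketch of reading a Phoenix table from PySpark via the phoenix-spark integration. The table name and ZooKeeper URL are placeholders; this assumes a running HBase/Phoenix cluster with the phoenix-spark jar on the Spark classpath, and note that this read path only exposes statically defined (or VIEW-declared) columns, which is exactly the limitation discussed above:

```python
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("phoenix-read")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# Load the whole table as a DataFrame via the phoenix-spark data source.
# "TABLE1" and the zkUrl are placeholders for your own cluster.
df = (sqlContext.read
      .format("org.apache.phoenix.spark")
      .option("table", "TABLE1")
      .option("zkUrl", "localhost:2181")
      .load())

# Only columns known to Phoenix's schema (including VIEW columns) are
# visible here; there is no option to pass dynamic-column syntax.
df.select("ID").show()
```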