0: jdbc:phoenix:master> select count(1) from STORE_SALES;
+----------+
| COUNT(1) |
+----------+
java.lang.RuntimeException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.
I've found a (rather perplexing) partial solution.
If I leave the spark.driver.extraClassPath out completely, and instead
do "spark-shell --jars
/usr/hdp/current/phoenix-client/phoenix-client.jar", it seems to work
perfectly! Note that the jar there is the phoenix-client.jar as
shipped with HDP (i
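For a scripted PySpark job, a minimal sketch of the same --jars approach (my own assumption, not HDP guidance: it relies on PYSPARK_SUBMIT_ARGS, which Spark 1.4+ reads before starting the JVM and which must end in 'pyspark-shell'):

import os

# Hypothetical sketch: the equivalent of "spark-shell --jars ..." for a
# plain Python script; must be set before pyspark starts the JVM gateway.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars /usr/hdp/current/phoenix-client/phoenix-client.jar pyspark-shell"
)

from pyspark import SparkContext

sc = SparkContext(appName="phoenix-jar-check")  # app name is a placeholder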
For HDP 2.4.2 this is what we ended up with to get it to work:
/usr/hdp/2.4.2.0-258/phoenix/lib/phoenix-core-4.4.0.2.4.2.0-258.jar
/usr/hdp/2.4.2.0-258/phoenix/lib/phoenix-spark-4.4.0.2.4.2.0-258.jar
/usr/hdp/2.4.2.0-258/phoenix/lib/hbase-client.jar
/usr/hdp/2.4.2.0-258/phoenix/lib/hbase-common.jar
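With those JARs on the classpath, a quick smoke test of the plugin could look like this PySpark sketch (the table name and zkUrl are placeholders mirroring the write example later in this thread):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="phoenix-classpath-check")  # placeholder name
sqlContext = SQLContext(sc)

# Read a Phoenix table through the phoenix-spark plugin; if the
# classpath is wrong, this is where it fails.
df = (sqlContext.read
      .format("org.apache.phoenix.spark")
      .option("table", "TABLE1")
      .option("zkUrl", "localhost:2181")
      .load())
df.show()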
Robert,
you should use the phoenix-4*-spark.jar that is located in root phoenix
directory.
Thanks,
Sergey
On Tue, Jul 5, 2016 at 8:06 AM, Josh Elser wrote:
> Looking into this on the HDP side. Please feel free to reach out via HDP
> channels instead of Apache channels.
>
> Thanks for letting us know as well.
Hi Vamsi,
The DataFrame has an underlying number of partitions associated with it,
which will be processed by however many workers you have in your Spark
cluster.
You can check the number of partitions with:
df.rdd.partitions.size
And you can alter the partitions using:
df.repartition(numPartitions)
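Applied to the save in your original question, that means the write parallelism follows the frame's partitioning: each partition becomes one save task. A minimal PySpark sketch (16 is an arbitrary placeholder; getNumPartitions() is the Python counterpart of the Scala partitions.size above):

# One partition means a single-threaded write.
print(df.rdd.getNumPartitions())

# Repartition first for more write parallelism, then save as before.
(df.repartition(16)
   .write
   .format("org.apache.phoenix.spark")
   .mode("overwrite")
   .option("table", "TABLE1")
   .option("zkUrl", "localhost:2181")
   .save())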
Thanks Rajeshbabu.
On Tue, Jul 5, 2016 at 5:59 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:
> Hi Vamsi,
>
> There is a bug with local indexes in 4.4.0 which is fixed in 4.7.0
> https://issues.apache.org/jira/browse/PHOENIX-2334
>
> Thanks,
> Rajeshbabu.
>
> On Tue, Jul 5, 2016 at 6:21 PM, Vamsi Krishna wrote:
Team,
In the Phoenix-Spark plugin, is the DataFrame save operation single-threaded?
df.write \
.format("org.apache.phoenix.spark") \
.mode("overwrite") \
.option("table", "TABLE1") \
.option("zkUrl", "localhost:2181") \
.save()
Thanks,
Vamsi Attluri
--
Vamsi Attluri
Looking into this on the HDP side. Please feel free to reach out via HDP
channels instead of Apache channels.
Thanks for letting us know as well.
Josh Mahonin wrote:
Hi Robert,
I recommend following up with HDP on this issue.
The underlying problem is that the 'phoenix-spark-4.4.0.2.4.0.0-169.jar' they've provided isn't actually a fat client JAR; it's missing many of the required dependencies.
Hi Robert,
I recommend following up with HDP on this issue.
The underlying problem is that the 'phoenix-spark-4.4.0.2.4.0.0-169.jar'
they've provided isn't actually a fat client JAR; it's missing many of the
required dependencies. They might be able to provide the correct JAR for
you, but you'd h
Hi Vamsi,
There is a bug with local indexes in 4.4.0 which is fixed in 4.7.0
https://issues.apache.org/jira/browse/PHOENIX-2334
Thanks,
Rajeshbabu.
On Tue, Jul 5, 2016 at 6:21 PM, Vamsi Krishna
wrote:
> Team,
>
> I'm working on HDP 2.3.2 (Phoenix 4.4.0, HBase 1.1.2).
> When I use the '-it' option of CsvBulkLoadTool, neither the Actual Table nor the Local Index Table is loaded.
Team,
I'm working on HDP 2.3.2 (Phoenix 4.4.0, HBase 1.1.2).
When I use the '-it' option of CsvBulkLoadTool, neither the Actual Table nor
the Local Index Table is loaded.
*Command:*
*HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/lib/hbase-protocol.jar:/etc/hbase/conf
yarn jar /usr/hdp/current/phoenix-client/p