Thanks. Now we've hit the next problem: we do need to do appending, but the current
support documentation says only Overwrite mode is available, right?
In that case we'll have to fall back to RDD writing, correct?
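In case it helps, this is roughly the RDD path we mean. A minimal spark-shell style sketch, assuming a hypothetical table OUTPUT_TABLE (ID BIGINT PRIMARY KEY, COL1 VARCHAR, COL2 VARCHAR); the ZooKeeper quorum is a placeholder:

import org.apache.phoenix.spark._  // adds saveToPhoenix to RDDs of tuples

// `spark` is the SparkSession provided by spark-shell
// placeholder rows for the hypothetical OUTPUT_TABLE above
val rows = spark.sparkContext.parallelize(Seq((1L, "a", "b"), (2L, "c", "d")))

// writes go through Phoenix UPSERTs, so repeated runs should add or update rows
// rather than truncate the table
rows.saveToPhoenix("OUTPUT_TABLE", Seq("ID", "COL1", "COL2"),
  zkUrl = Some("zookeeper-host-url:2181"))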
On Mon, Sep 17, 2018 at 8:45 PM Josh Elser wrote:
As I said earlier, the expectation is that you use the phoenix-client.jar and
phoenix-spark2.jar for the phoenix-spark integration with Spark 2.
You do not need to reference all of these jars by hand. We create the jars with
all of the necessary dependencies bundled specifically to avoid that.
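For example, a launch roughly like this is usually enough (the paths, class and jar names are placeholders for your environment):

spark-submit \
  --jars /path/to/phoenix-client.jar,/path/to/phoenix-spark2.jar \
  --class your.package.YourJob your-job.jar

The same two jars can instead be listed on spark.driver.extraClassPath and spark.executor.extraClassPath in spark-defaults.conf if you prefer to configure it once per cluster.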
Thanks for the patience; sorry, maybe I sent incomplete information. We are
loading the following jars and still getting:
(executor 1): java.lang.NoClassDefFoundError: Could not initialize class
org.apache.phoenix.query.QueryServicesOptions
Please retain the mailing list in your replies.
Hi, I am attempting to make a connection with Spark, but with no success so far.
For writing into Phoenix, I am trying this:
tdd.toDF("ID", "COL1", "COL2", "COL3")
  .write
  .format("org.apache.phoenix.spark")
  .option("zkUrl", "zookeeper-host-url:2181")
  .option("table", htablename)
  .mode("overwrite")
  .save()
But
Pretty sure we ran tests with Spark 2.3 with Phoenix 5.0. Not sure if
Spark has already moved beyond that.
On 9/12/18 11:00 PM, Saif Addin wrote:
Thanks, we'll try the Spark connector then. We thought it didn't support the newest
Spark versions.
On Wed, Sep 12, 2018 at 11:03 PM Jaanai Zhang wrote:
It seems the columns' data is missing mapping information for the schema. If you
want to write to an HBase table this way, you can create the HBase table and use
Phoenix to map it.
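For example, roughly like this over JDBC. A minimal sketch where the table name, column family and column names are placeholders and must match the existing HBase names exactly (hence the quoting):

import java.sql.DriverManager

// the ZooKeeper quorum in the JDBC URL is a placeholder
val conn = DriverManager.getConnection("jdbc:phoenix:zookeeper-host-url:2181")
val stmt = conn.createStatement()

// map an existing HBase table "my_table" (column family "cf") into Phoenix as a view;
// the primary key column maps to the HBase rowkey
stmt.execute(
  """CREATE VIEW "my_table" (
    |  "ROWKEY" VARCHAR PRIMARY KEY,
    |  "cf"."COL1" VARCHAR,
    |  "cf"."COL2" VARCHAR
    |)""".stripMargin)

conn.close()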
Jaanai Zhang
Best regards!
Thomas D'Silva wrote on Thu, Sep 13, 2018 at 6:03 AM:
Is there a reason you didn't use the spark-connector to serialize your data?
On Wed, Sep 12, 2018 at 2:28 PM, Saif Addin wrote:
Thank you Josh! That was helpful. Indeed, there was a salt bucket on the table,
and the key column now shows correctly.
However, the problem still persists in that the rest of the columns show as
completely empty in Phoenix (they appear correctly in HBase). We'll be looking
into this, but if you have any pointers they are welcome.
Reminder: using Phoenix internals forces you to understand exactly how
the version of Phoenix that you're using serializes data. Is there a
reason you're not using SQL to interact with Phoenix?
Sounds to me like Phoenix is expecting more data at the head of your
rowkey. Maybe a salt bucket on the table?
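If it helps, the SQL route from Scala is roughly this. A minimal sketch where the table, columns and ZooKeeper quorum are placeholders; Phoenix then builds the rowkey, salt byte included, for you:

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:phoenix:zookeeper-host-url:2181")
val ps = conn.prepareStatement(
  "UPSERT INTO MY_TABLE (ID, COL1, COL2, COL3) VALUES (?, ?, ?, ?)")

ps.setString(1, "row-1")
ps.setString(2, "a")
ps.setString(3, "b")
ps.setString(4, "c")
ps.executeUpdate()

// Phoenix connections do not auto-commit by default, so commit explicitly
conn.commit()
conn.close()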