Hi all,
We're trying to write tables with all string columns from Spark.
We are not using the Spark Connector; instead we are directly writing byte
arrays from RDDs.
The process works fine: HBase receives the data correctly and the content is
consistent.
However, when we read the table from Phoenix, the key column does not come
back correctly.
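For context, here is a minimal sketch of this kind of direct write from an
RDD, using TableOutputFormat (one common way to do it; the table name,
column family, and qualifier below are placeholders, not the actual job):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession

object DirectHBaseWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("direct-hbase-write").getOrCreate()
    val sc = spark.sparkContext

    // MapReduce output configuration; "MYTABLE" is a placeholder table name.
    val conf = HBaseConfiguration.create()
    conf.set(TableOutputFormat.OUTPUT_TABLE, "MYTABLE")
    val job = Job.getInstance(conf)
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    // Example rows: (rowkey, value) pairs, all strings.
    val rows = sc.parallelize(Seq(("row1", "hello"), ("row2", "world")))

    // Build Puts from raw UTF-8 bytes; "cf"/"col1" are placeholder names.
    val puts = rows.map { case (key, value) =>
      val put = new Put(Bytes.toBytes(key))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes(value))
      (new ImmutableBytesWritable(Bytes.toBytes(key)), put)
    }

    puts.saveAsNewAPIHadoopDataset(job.getConfiguration)
    spark.stop()
  }
}

The catch discussed below is that HBase accepts whatever bytes are written
this way, while Phoenix only reads rows back correctly if the rowkey and
values match its own serialization (salt byte, type encoding, and so on).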
Reminder: Using Phoenix internals forces you to understand exactly how
the version of Phoenix that you're using serializes data. Is there a
reason you're not using SQL to interact with Phoenix?
It sounds to me like Phoenix is expecting more data at the head of your
rowkey. Maybe a salt bucket?
Can you attach the schema of your table, and the explain plan for select *
from mytable?
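Side note: the explain plan being asked for can be pulled over JDBC as well
as from sqlline. A minimal sketch (the ZooKeeper quorum and table name are
placeholders); the comment shows roughly what a salted table's DDL looks
like, since SALT_BUCKETS makes Phoenix prepend a salt byte to every rowkey:

import java.sql.DriverManager

object PhoenixExplain {
  def main(args: Array[String]): Unit = {
    // Placeholder ZooKeeper quorum.
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
    try {
      // A salted table is declared roughly like:
      //   CREATE TABLE MYTABLE (PK VARCHAR PRIMARY KEY, COL1 VARCHAR)
      //     SALT_BUCKETS = 4;
      // With salting, rowkeys written directly to HBase without the leading
      // salt byte will not line up with what Phoenix expects to read.
      val stmt = conn.createStatement()
      val rs = stmt.executeQuery("EXPLAIN SELECT * FROM MYTABLE")
      while (rs.next()) {
        println(rs.getString(1)) // one plan step per row
      }
    } finally {
      conn.close()
    }
  }
}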
On Tue, Sep 11, 2018 at 10:24 PM, Tanvi Bhandari
wrote:
> " mapped hbase tables to phoenix and created them explicitly from phoenix
> sqlline client. I first created schema corresponding to namespace and
Is there a reason you didn't use the spark-connector to serialize your data?
On Wed, Sep 12, 2018 at 2:28 PM, Saif Addin wrote:
> Thank you Josh! That was helpful. Indeed, there was a salt bucket on the
> table, and the key-column now shows correctly.
>
> However, the problem still persists in that the rest of the columns show as
> completely empty on Phoenix (they appear correctly in HBase).
Thank you Josh! That was helpful. Indeed, there was a salt bucket on the
table, and the key-column now shows correctly.
However, the problem still persists in that the rest of the columns show as
completely empty on Phoenix (they appear correctly in HBase). We'll be looking
into this, but if you have any ideas they would be appreciated.
It seems the column data is missing mapping information from the schema. If
you want to write to the HBase table this way, you can create the HBase table
and then map it in Phoenix.
Jaanai Zhang
Best regards!
Thomas D'Silva wrote on Thu, Sep 13, 2018 at 6:03 AM:
> Is there a reason you didn't use the spark-connector to serialize your
> data?
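A minimal sketch of the mapping Jaanai describes: keep writing the HBase
table as before, and declare a Phoenix view over it so the existing column
qualifiers are exposed as Phoenix columns. The identifiers below are
placeholders; the quoted (lower-case) names must match the HBase table,
family, and qualifier names exactly, otherwise Phoenix upper-cases them and
the columns come back empty:

import java.sql.DriverManager

object MapExistingHBaseTable {
  def main(args: Array[String]): Unit = {
    // Placeholder ZooKeeper quorum.
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
    try {
      val stmt = conn.createStatement()
      // Map an existing HBase table "mytable" with column family "cf".
      stmt.execute(
        """CREATE VIEW "mytable" (
          |  pk VARCHAR PRIMARY KEY,
          |  "cf"."col1" VARCHAR,
          |  "cf"."col2" VARCHAR
          |)""".stripMargin)
    } finally {
      conn.close()
    }
  }
}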
Thanks, we'll try the Spark Connector then. We thought it didn't support the
newest Spark versions.
On Wed, Sep 12, 2018 at 11:03 PM Jaanai Zhang
wrote:
> It seems the column data is missing mapping information from the schema. If
> you want to write to the HBase table this way, you can create the HBase
> table and then map it in Phoenix.
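For reference, the spark-connector write path being suggested looks roughly
like this (a sketch for the Phoenix 4.x-era connector; the table name and
ZooKeeper URL are placeholders, and the connector handles the salt byte and
Phoenix type serialization itself):

import org.apache.spark.sql.{SaveMode, SparkSession}

object PhoenixConnectorWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("phoenix-spark-write").getOrCreate()
    import spark.implicits._

    // Column names must match the Phoenix table's columns (placeholders here).
    val df = Seq(("row1", "hello"), ("row2", "world")).toDF("PK", "COL1")

    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)          // the connector requires Overwrite for writes
      .option("table", "MYTABLE")
      .option("zkUrl", "localhost:2181") // placeholder ZooKeeper quorum
      .save()

    spark.stop()
  }
}

Which Spark versions are supported does depend on the Phoenix release, so it
is worth checking the phoenix-spark compatibility notes for the versions in
use.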