Pretty sure we ran tests with Spark 2.3 against Phoenix 5.0. Not sure if
Spark has already moved beyond that.
On 9/12/18 11:00 PM, Saif Addin wrote:
Thanks, we'll try the Spark connector then. We thought it didn't support the
newest Spark versions.
On Wed, Sep 12, 2018 at 11:03 PM Jaanai Zhang
This seems similar to a failure scenario I’ve seen a couple of times. I believe
that after multiple restarts you got lucky and the tables were brought up by
HBase in the correct order.
What happens is some kind of semi-catastrophic failure where one or more region
servers go down with edits that weren’t
Sorry, I don't understand your purpose. Based on your proposal, it seems
that it can't be achieved: you need a hash partition. However, some things
need clarifying: HBase is a range-partition engine, and salt buckets are
used to avoid hotspotting; in other words, HBase as a storage engine
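To make the point above concrete, here is a minimal Phoenix DDL sketch (table and column names are made up for illustration): SALT_BUCKETS prepends a one-byte hash to each rowkey, spreading writes across regions, while HBase itself still range-partitions on the resulting salted key.

```sql
-- Hypothetical table. SALT_BUCKETS prepends a single hash byte to each rowkey,
-- spreading sequential writes across up to 16 regions. HBase still
-- range-partitions on (salt byte + rowkey); the salt only disguises the
-- write pattern, it does not turn HBase into a hash-partitioned store.
CREATE TABLE event_log (
    event_time TIMESTAMP NOT NULL,
    payload    VARCHAR,
    CONSTRAINT pk PRIMARY KEY (event_time)
) SALT_BUCKETS = 16;
```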
For the usage example that you provided, when you write data, how do the
values of id_1, id_2, and other_key vary?
I assume id_1 and id_2 remain the same while other_key is monotonically
increasing, and that's why the table is salted.
If you create the salt bucket only on id_2, then wouldn't you run
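If the schema under discussion looks roughly like the sketch below (column names taken from the thread; types and table name are guesses), note that Phoenix computes the salt byte over the entire rowkey, which is what keeps a monotonically increasing other_key from hotspotting a single region:

```sql
-- Assumed shape of the table from the thread (types are guesses).
-- Phoenix derives the salt byte from the full rowkey (id_1, id_2, other_key),
-- so even with id_1/id_2 fixed and other_key monotonically increasing,
-- consecutive rows hash to different buckets and writes stay spread out.
CREATE TABLE metrics (
    id_1      VARCHAR NOT NULL,
    id_2      VARCHAR NOT NULL,
    other_key BIGINT  NOT NULL,
    val       DOUBLE,
    CONSTRAINT pk PRIMARY KEY (id_1, id_2, other_key)
) SALT_BUCKETS = 8;
```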
I think I found the issue:
I had created the tables in Phoenix 4.6, which did not have the column name
encoding feature (https://issues.apache.org/jira/browse/PHOENIX-1598), and
now I have to move to Phoenix 4.14 directly.
As far as I know, Phoenix handles the upgrade for column name encoding
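For reference, a sketch of the table property involved (table name hypothetical): Phoenix 4.10+ controls column name encoding via COLUMN_ENCODED_BYTES, and tables created before that release stay on the non-encoded scheme after an upgrade.

```sql
-- Tables created before Phoenix 4.10 keep the old, non-encoded column names
-- after upgrade. For new tables you can opt out of encoding explicitly by
-- setting the property to 0:
CREATE TABLE legacy_style (
    id   VARCHAR NOT NULL PRIMARY KEY,
    col1 VARCHAR
) COLUMN_ENCODED_BYTES = 0;
```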
Hi folks,
Any thoughts or feedback on this?
Thanks,
Gerald
On Mon, Sep 10, 2018 at 1:56 PM, Gerald Sangudi
wrote:
> Hello folks,
>
> We have a requirement for salting based on partial, rather than full,
> rowkeys. My colleague Mike Polcari has identified the requirement and
> proposed an
Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by running
a major compaction on HBase.
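The advice above can be sketched in Phoenix SQL (table name hypothetical): guidepost statistics live in SYSTEM.STATS and are collected during major compaction, and Phoenix also exposes a SQL command to rebuild them directly.

```sql
-- Check whether guidepost stats exist for the table:
SELECT COUNT(*) FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'MY_TABLE';

-- Rebuild statistics without waiting for a major compaction:
UPDATE STATISTICS MY_TABLE ALL;
```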
On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari
wrote:
> Hi,
>
>
>
> I am trying to upgrade the Phoenix binaries in my setup from phoenix-4.6
> (which had an optional concept of schema) to phoenix-4.14