Hi all,

I'm seeing a strange problem with Phoenix 4.7 and HBase 1.0.
Say I write a Spark DataFrame with a few million rows to an HBase table
via Phoenix. Then I delete many of those rows, say half, either from a
Spark job that uses PhoenixConnection or from a DB client (DBeaver). After
that, I can still read the remaining rows with the DB client, but my
application can no longer do so: the query hangs for hours without a response.
I have tried several variations; the problem occurs only after a Spark job
deletes many or all of the rows.
Does anyone have an idea what might cause this?

Thanks a lot,
Paolo
