Your real-world situation is not a single-threaded application, is it? You will have multiple threads all updating Phoenix concurrently.
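To make that concrete, here is a minimal sketch of the concurrent write path I mean, assuming a hypothetical EVENTS(ID, PAYLOAD) table and ZooKeeper quorum (adjust the JDBC URL for your cluster). Phoenix Connections are cheap to create but not thread-safe, so each thread gets its own:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ConcurrentWrites {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int t = 0; t < 8; t++) {
                final int id = t;
                pool.submit(() -> {
                    // One Connection per thread; sharing one serializes you.
                    try (Connection conn = DriverManager.getConnection(
                             "jdbc:phoenix:zk-host:2181");
                         PreparedStatement ps = conn.prepareStatement(
                             "UPSERT INTO EVENTS (ID, PAYLOAD) VALUES (?, ?)")) {
                        for (int i = 0; i < 1000; i++) {
                            ps.setString(1, id + "-" + i);
                            ps.setString(2, "payload");
                            ps.executeUpdate();
                            conn.commit(); // one round trip per row
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
        }
    }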

Given the semantics your application needs, per the requirements you stated, I'm not sure what else you can do differently. You can get low latency out of HBase, but at the cost of throughput (a tradeoff that is not unique to HBase).

Denormalizing your tables will reduce the amount of work each update has to do. Every secondary index is another physical write that has to be executed to satisfy your UPSERT.
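For example, with two global indexes, each of which is a separate physical HBase table, one logical UPSERT turns into three physical writes. A rough sketch to illustrate (the schema is made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexOverhead {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS ("
                    + "ID VARCHAR PRIMARY KEY, USER_ID VARCHAR, PAYLOAD VARCHAR)");
                // Each global index is its own HBase table, so every UPSERT
                // into EVENTS also writes a row into each index table.
                stmt.execute("CREATE INDEX IF NOT EXISTS IDX_USER ON EVENTS (USER_ID)");
                stmt.execute("CREATE INDEX IF NOT EXISTS IDX_PAYLOAD ON EVENTS (PAYLOAD)");
                // One logical row, three physical writes (data table + 2 indexes).
                stmt.execute("UPSERT INTO EVENTS VALUES ('e1', 'u1', 'p')");
                conn.commit();
            }
        }
    }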

Updates in HBase go to memory (the memstore) and to the WAL. New updates are blocked when the memstore fills up and has to flush to disk. Thus, it is optimal to keep flush times short so that you don't have many threads blocked. However, you are still fighting yourself when all of your threads are trying to grab the same lock to write their data.
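The knobs for that flush/block behavior live in hbase-site.xml on the region servers. For reference, the stock defaults look like this (illustrative, not a recommendation):

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>134217728</value> <!-- flush a region's memstore at 128MB -->
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>4</value> <!-- block updates once the memstore hits 4x flush size -->
    </property>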

You can also try reaching out to your vendor (EMR) to see what other tunings they recommend. I don't know what this architecture looks like.
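On the batch-vs-per-row numbers you quote below: executeBatch amortizes the RPC and commit overhead across the whole batch, while commit() per row pays it every time, which goes a long way toward explaining the ~60x gap. A sketch of the two patterns, against the same hypothetical EVENTS table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CommitPatterns {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181");
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO EVENTS (ID, PAYLOAD) VALUES (?, ?)")) {
                // Pattern 1: buffer rows client-side, commit once per batch.
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "batch-" + i);
                    ps.setString(2, "payload");
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit(); // overhead paid once per 1000 rows

                // Pattern 2: commit per row for per-transaction durability.
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "row-" + i);
                    ps.setString(2, "payload");
                    ps.executeUpdate();
                    conn.commit(); // overhead paid on every row
                }
            }
        }
    }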

On 7/11/18 11:33 AM, alchemist wrote:
Thanks so much Josh!  I am unable to understand why performance is extremely
slow.

1.  If I perform update using PreparedStatement addBatch and executeBatch
then I get nearly 6000 transactions per minute.

2.  But in our case we need to save each transaction, so we cannot batch
the updates.  Using PreparedStatement executeQuery and commit() per row, I
am getting nearly 100 transactions per minute.

These numbers seem extremely slow, so I am wondering whether I am doing
something very incorrect.



--
Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/
