A tip for performance is reusing the same PreparedStatement: just
clearParameters(), set the new values, and executeUpdate() over and over again.
Don't close the statement or connection after each upsert. Also, I haven't
seen any noticeable benefit in using JDBC batches, as Phoenix controls
batching by
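The reuse pattern described above can be sketched as follows. This is a minimal, hedged example: the table and columns (METRICS, ID, VAL) and the commit interval of 1,000 rows are assumptions, not something from the thread, and the Phoenix JDBC URL is expected as a command-line argument.

```java
import java.sql.*;

public class UpsertLoop {
    // Hypothetical target table; adjust to your schema.
    static final String UPSERT_SQL = "UPSERT INTO METRICS (ID, VAL) VALUES (?, ?)";

    public static void main(String[] args) throws SQLException {
        if (args.length == 0) {               // dry run when no cluster URL is given
            System.out.println(UPSERT_SQL);
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0])) {
            conn.setAutoCommit(false);        // let the Phoenix client buffer mutations
            try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
                for (int i = 0; i < 10_000; i++) {
                    ps.clearParameters();     // reuse the same statement each time
                    ps.setInt(1, i);
                    ps.setDouble(2, Math.random());
                    ps.executeUpdate();       // buffered client-side until commit()
                    if (i % 1_000 == 999) {
                        conn.commit();        // flush a batch of mutations
                    }
                }
                conn.commit();                // flush the tail
            }                                 // statement is reused throughout, closed only here
        }
    }
}
```

The statement and connection stay open for the whole run; only commit() boundaries determine how mutations are batched to the server.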
I believe it's related to your client code - In our use case we do easily
15k writes/sec in a cluster lower specced than yours.
Check that your JDBC connection has autocommit off so Phoenix can batch
writes, and that the table has a reasonable UPDATE_CACHE_FREQUENCY (more than
6 ).
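The two settings above might be applied as sketched below. The table name and the 60000 ms cache value are assumptions chosen for illustration (pick a value that suits how often your schema changes); the JDBC URL is taken from the command line.

```java
import java.sql.*;

public class TuneWrites {
    // Hypothetical: cache table metadata client-side for 60 s so the client
    // does not check the server for schema changes on every statement.
    static final String DDL =
        "ALTER TABLE METRICS SET UPDATE_CACHE_FREQUENCY = 60000";

    public static void main(String[] args) throws SQLException {
        if (args.length == 0) {        // dry run when no cluster URL is given
            System.out.println(DDL);
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0])) {
            conn.setAutoCommit(false); // Phoenix batches mutations per commit()
            try (Statement st = conn.createStatement()) {
                st.execute(DDL);
            }
            conn.commit();
        }
    }
}
```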
On Thu, 12
But Phoenix is extremely slow: I am getting 3,000 to 6,000 transactions per
minute.
--
Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/
Thanks a lot for your help.
Our test inserts new rows individually. For our use case, we are
benchmarking whether we can write 10,000 new rows per minute, using
a cluster of writers if needed.
When executing the inserts with the Phoenix API (UPSERT) we have been able to
get up to 6,000
HBase must grab a lock on the row being updated. Normally, for
a batch of updates sent to a region server, the RS will grab as many row
locks as it can at once. If you only send one row to update at a time,
you obviously get no amortization.
It's just the normal semantics of
Phoenix does not recommend connection pooling because Phoenix
connections are not expensive to create the way most DB connections are.
The first connection you make from a JVM is expensive. Every subsequent
one is cheap.
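Given that, a multithreaded writer might simply open a connection per worker instead of pooling, as in this sketch. The table, worker count, and row counts are illustrative assumptions; the Phoenix JDBC URL comes from the command line.

```java
import java.sql.*;
import java.util.concurrent.*;

public class PerTaskConnections {
    // Hypothetical target table; adjust to your schema.
    static final String UPSERT_SQL = "UPSERT INTO METRICS (ID, VAL) VALUES (?, ?)";
    static final int WORKERS = 8;

    // Each worker opens its own connection; only the first connection in the
    // JVM pays the heavy initialization cost, the rest are cheap.
    static void writeBatch(String url, int worker) {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
                for (int i = 0; i < 1_000; i++) {
                    ps.setInt(1, worker * 1_000 + i);
                    ps.setDouble(2, 1.0);
                    ps.executeUpdate();
                }
            }
            conn.commit();             // one batched flush per worker
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        if (args.length == 0) {        // dry run when no cluster URL is given
            System.out.println(UPSERT_SQL);
            return;
        }
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        for (int w = 0; w < WORKERS; w++) {
            final int id = w;
            pool.submit(() -> writeBatch(args[0], id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```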
On 7/11/18 2:55 PM, alchemist wrote:
Since Phoenix does not recommend
Only use the Phoenix API (JDBC API) to access HBase if you want to use
secondary indexes.
Yun Zhang
Best regards!
2018-07-12 20:08 GMT+08:00 alchemist:
I tried using the Phoenix JDBC API to access data on a remote EMR server from
another EC2 machine. I tried multithreading the program, but it is not
scaling: I am getting 1 transaction per second, which seems extremely slow.
So I thought if I can use the coprocessor written for Secondary Index by Phoenix