How many nodes do you have in the cluster? How many regions does that Phoenix table have?
Can you do batch upserts?
If Phoenix is querying for metadata on every upsert in a PreparedStatement,
then that definitely sounds like a bug or a performance problem.
IMO, 30 ms is not really that horrible a performance figure, given that each
upsert also has to maintain the covered index.
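To illustrate the batching suggestion above: with the Phoenix JDBC driver you can turn off auto-commit, accumulate rows with PreparedStatement.addBatch()/executeBatch(), and commit every N rows. This is only a minimal sketch — the JDBC URL, table name, and columns below are placeholders, not taken from this thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchUpsertExample {
    private static final int BATCH_SIZE = 1000; // tune to row size and client heap

    // rows is a list of {id, value} pairs; table and columns are hypothetical
    public static void upsertAll(List<String[]> rows) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO PHOENIX_TABLE (ID, VAL) VALUES (?, ?)")) {
            conn.setAutoCommit(false); // let Phoenix buffer mutations client-side
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending % BATCH_SIZE == 0) {
                    ps.executeBatch();
                    conn.commit(); // one flush per batch instead of per row
                }
            }
            ps.executeBatch();
            conn.commit(); // flush the final partial batch
        }
    }
}
```

With auto-commit off, Phoenix buffers the upserts on the client and only sends them to the region servers on commit(), so committing per batch rather than per row removes most of the round trips.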
Hello All,
We're using Phoenix 4.7 with CDH 5.7.1. A query from the Java client is timing
out with this error:
Caused by: java.net.SocketTimeoutException: callTimeout=60000,
callDuration=60306: row '' on table 'PHOENIX_TABLE' at region=
I don't have the option of upgrading CDH 5.7. My upsert query takes 30 ms
with one fully covered index on the table.
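The callTimeout/callDuration values in the error above point at the default 60 s HBase RPC timeout. One common workaround for this class of timeout is to raise the client-side limits. A sketch of the relevant entries in the client's hbase-site.xml — the 5-minute values are purely illustrative, not a recommendation:

```xml
<!-- client-side hbase-site.xml, on the Phoenix client's classpath -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>300000</value> <!-- ms; default is 60000 -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>300000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>300000</value> <!-- Phoenix-level query timeout -->
</property>
```

Raising timeouts hides the symptom rather than fixing slow server-side work, so it is worth checking region distribution and compaction activity as well.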
I am using Spring's JdbcTemplate, which uses a PreparedStatement internally.
Since the Phoenix client times out for large tables, I tried creating the
Phoenix table using Spark, but that also fails with an error.
I also asked about this on Stack Overflow:
http://stackoverflow.com/questions/38498096/create-table-in-phoenix-from-spark
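On the Stack Overflow question: in Phoenix 4.x the phoenix-spark integration writes into an existing table but does not create one, so a common pattern is to create the table once through the Phoenix JDBC driver and then let the Spark job write into it. A minimal sketch, assuming a placeholder table, columns, and ZooKeeper quorum:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreatePhoenixTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // IF NOT EXISTS makes the DDL idempotent, so this can run
            // safely before every Spark write
            stmt.execute("CREATE TABLE IF NOT EXISTS PHOENIX_TABLE ("
                    + " ID VARCHAR PRIMARY KEY,"
                    + " VAL VARCHAR)");
            conn.commit();
        }
    }
}
```

After the table exists, the Spark job only has to perform the write, which sidesteps the create-table error entirely.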