Hi James,
Thanks for your suggestion. I'm new to Phoenix, so I didn't understand how "you can
try setting the CURRENT_SCN property to Long.MAX_VALUE when you connect"
will help. Will this prevent Phoenix from loading the table's metadata on
each call? And how would I proceed when I actually need to update the schema?
Can a high value for UPDATE_CACHE_FREQUENCY have disastrous consequences?
What happens if a node pushes a change while its cache is stale?
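For reference, my understanding is that on 4.7+ this is set per table; a sketch (table name below is a placeholder, value is in milliseconds):

```sql
-- MY_TABLE is a hypothetical table name. Cache its metadata on the client
-- for 15 minutes instead of consulting SYSTEM.CATALOG on every statement.
ALTER TABLE MY_TABLE SET UPDATE_CACHE_FREQUENCY = 900000;
```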
On 3 August 2016 16:23:15 James Taylor wrote:
Short of upgrading to 4.7 to leverage the UPDATE_CACHE_FREQUENCY feature,
you can try setting the CURRENT_SCN property to Long.MAX_VALUE when you
connect. An alternative would be to set it to one more than the
creation time of your tables. You can control the timestamp your tables are
created at.
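Roughly, the connection setup would look like the sketch below (the ZK quorum in the URL is a placeholder, and the actual connect is commented out since it needs a live cluster):

```java
import java.util.Properties;

public class ScnProps {
    // Build connection properties that pin the connection at Long.MAX_VALUE.
    // With CurrentSCN beyond the tables' creation time, the cached metadata
    // is always considered current, so the per-statement check against
    // SYSTEM.CATALOG is skipped.
    public static Properties scnProps() {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(Long.MAX_VALUE));
        return props;
    }

    public static void main(String[] args) {
        // Connection conn = DriverManager.getConnection(
        //     "jdbc:phoenix:zk-host:2181", scnProps());  // needs a cluster
        System.out.println(scnProps().getProperty("CurrentSCN"));
    }
}
```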
I have 10 region servers and 16 regions in the table. I cannot do batch
upserts. Yes, Phoenix is querying metadata for every upsert. 30 ms is not a
bad response time, but I wanted to understand what options I have for
removing these extra queries.
On Wed, Aug 3, 2016 at 3:29 AM, anil gupta wrote:
How many nodes do you have in the cluster? How many regions are in that
Phoenix table? Can you do batch upserts?
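For context, a batched upsert through plain JDBC can be sketched like this (table and column names are hypothetical; with auto-commit off, Phoenix buffers the mutations client-side and flushes them on commit):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchUpsert {
    static final String SQL = "UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)";
    static final int BATCH_SIZE = 1000;

    // Upsert all rows, committing every BATCH_SIZE rows so many mutations
    // travel to the region servers in one round trip instead of one each.
    public static void run(Connection conn, long[] ids, String[] vals)
            throws SQLException {
        conn.setAutoCommit(false);               // buffer mutations client-side
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            for (int i = 0; i < ids.length; i++) {
                ps.setLong(1, ids[i]);
                ps.setString(2, vals[i]);
                ps.executeUpdate();              // queued, not yet sent
                if ((i + 1) % BATCH_SIZE == 0) {
                    conn.commit();               // flush one batch
                }
            }
            conn.commit();                       // flush the remainder
        }
    }

    public static void main(String[] args) {
        System.out.println(SQL);                 // no cluster needed to compile/run
    }
}
```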
If Phoenix is querying metadata for every upsert in a PreparedStatement,
then it definitely sounds like a bug/performance problem.
IMO, 30 ms is not really that horrible a performance given
I don't have the option to upgrade my CDH 5.7. My upsert query is taking
30 ms with one fully covered index on the table.
I am using Spring's JDBC template, which uses prepared statements internally.