Hi James,
Thanks for your suggestion. I'm new to Phoenix, so I didn't understand how "you can
try setting the CURRENT_SCN property to Long.MAX_VALUE when you connect."
will help. Will this prevent Phoenix from loading the table metadata on
each call? And how would I proceed when I actually need to update the
Can someone please take a look?
From: Ramanathan, Kannan: IT (NYK)
Sent: Tuesday, August 02, 2016 15:59
To: user@phoenix.apache.org
Subject: Java Query timeout
Hello All,
We're using Phoenix 4.7 with CDH 5.7.1. A query from the Java client is timing
out with this error:
Caused by:
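Since the stack trace above is cut off, it may be worth checking the client-side query timeout while debugging. A minimal sketch of raising it through connection properties; the `phoenix.query.timeoutMs` key is the standard Phoenix client setting (milliseconds), and the JDBC URL host would be your ZooKeeper quorum:

```java
import java.util.Properties;
// import java.sql.Connection;
// import java.sql.DriverManager;

public class TimeoutProps {
    // Build connection properties that raise Phoenix's client-side
    // query timeout. The value is in milliseconds.
    static Properties timeoutProps(long millis) {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", Long.toString(millis));
        return props;
    }

    public static void main(String[] args) {
        Properties props = timeoutProps(60_000L); // 60 seconds
        // Connection conn =
        //     DriverManager.getConnection("jdbc:phoenix:zk-host", props);
        System.out.println(props.getProperty("phoenix.query.timeoutMs"));
    }
}
```

The same key can also be set cluster-wide in hbase-site.xml on the client.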
Can a high value for UPDATE_CACHE_FREQUENCY have disastrous consequences?
What happens if a node pushes a change while its metadata cache is stale?
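For context on the question above: as I understand the 4.7 feature, UPDATE_CACHE_FREQUENCY only bounds how long a client may act on stale table metadata (e.g., it may not see a newly added column or index until the next refresh), and it is set per table via DDL. A minimal sketch that just builds the DDL string; "MY_TABLE" and the 15-minute value are placeholders:

```java
public class CacheFreqDdl {
    // Build the Phoenix 4.7+ DDL that bounds how often a client
    // re-checks this table's metadata (value in milliseconds).
    static String alterCacheFrequency(String table, long millis) {
        return "ALTER TABLE " + table + " SET UPDATE_CACHE_FREQUENCY = " + millis;
    }

    public static void main(String[] args) {
        // Refresh cached metadata at most every 15 minutes:
        System.out.println(alterCacheFrequency("MY_TABLE", 900_000L));
    }
}
```

The resulting statement would be executed through an ordinary JDBC `Statement` on a Phoenix connection.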
On 3 August 2016 16:23:15 James Taylor wrote:
Short of upgrading to 4.7 to leverage the UPDATE_CACHE_FREQUENCY feature,
you can try setting the CURRENT_SCN property to Long.MAX_VALUE when you
connect. Another alternative would be to set it to one more than the
creation time of your tables. You can control the timestamp your tables are
created
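To illustrate the suggestion above: `CurrentSCN` is the Phoenix JDBC property name for CURRENT_SCN (exposed as `PhoenixRuntime.CURRENT_SCN_ATTRIB`). A minimal sketch, with a placeholder JDBC URL:

```java
import java.util.Properties;
// import java.sql.Connection;
// import java.sql.DriverManager;

public class ScnConnect {
    // Build connection properties that pin CurrentSCN, so Phoenix
    // resolves table metadata as of that timestamp instead of
    // re-checking the server on every statement.
    static Properties scnProps(long scn) {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(scn));
        return props;
    }

    public static void main(String[] args) {
        Properties props = scnProps(Long.MAX_VALUE);
        // Connection conn =
        //     DriverManager.getConnection("jdbc:phoenix:zk-host", props);
        System.out.println(props.getProperty("CurrentSCN"));
    }
}
```

Note that a connection with CurrentSCN set cannot see data written after that timestamp, which is why the alternative of one more than the table's creation time is mentioned above.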
Hi Radha,
This looks to me as if there is an issue in your data somewhere past
the first 100 records. The bulk loader isn't supposed to fail due to
issues like this. Instead, it's intended to simply report the problem
input lines and continue on, but it appears that this isn't happening.
Could
I have 10 region servers and 16 regions in the table. I cannot batch the
upserts. Yes, Phoenix is querying metadata for every upsert. 30 ms is not a
bad response time, but if I want to remove these extra queries, what
options do I have?
On Wed, Aug 3, 2016 at 3:29 AM, anil gupta