Hi Ravi,
thanks for the link, but it talks about a heavy-write scenario. In my case
I'm just using PhoenixInputFormat to read data from an existing table, and
no other process is using HBase, so I don't think that's my case.
Why wouldn't you just recreate the scan if the old one dies?
Best,
Hi Flavio,
Apparently, the scanner is timing out because the next call from the client
to the RS in the PhoenixInputFormat isn't happening within the stipulated
lease period. One experiment we can try is to have a NO-OP mapper and see
if the exception still occurs.
I have a hunch the errors are coming
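To make the lease-timeout explanation above concrete, here is a hypothetical back-of-the-envelope check (the 300 ms per-record cost and 60 s lease are assumed values, not from this thread; only the 1000-row caching default and the ~299364 ms gap in the stack trace come from the messages): with a large scanner cache, many rows are processed locally between consecutive `scanner.next()` RPCs, so a slow mapper can easily exceed the region server's lease.

```java
// Illustrative arithmetic only; CACHING matches the Phoenix default
// mentioned in this thread, the other numbers are assumptions.
public class ScannerLeaseMath {
    static final int CACHING = 1000;        // rows fetched per scanner.next() RPC
    static final long MS_PER_RECORD = 300;  // hypothetical mapper time per row
    static final long LEASE_MS = 60_000;    // assumed hbase.regionserver.lease.period

    // Time that elapses between consecutive RPCs to the region server
    // when every fetched row costs MS_PER_RECORD in the mapper.
    static long gapBetweenRpcsMs() {
        return (long) CACHING * MS_PER_RECORD;
    }

    public static void main(String[] args) {
        long gap = gapBetweenRpcsMs();
        System.out.println(gap + " ms between next() calls; lease is " + LEASE_MS + " ms");
        System.out.println(gap > LEASE_MS ? "scanner lease would expire" : "within lease");
    }
}
```

With these assumed numbers the gap is 300,000 ms, which is in the same ballpark as the 299364 ms reported in the exception below, so the hunch about per-record processing time seems plausible.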
Hi to all,
when running a mr job on my Phoenix table I get this exception:
Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms passed
since the last invocation, timeout is currently set to 6
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at
Would disabling caching turn off this kind of error? Is that possible?
Or is it equivalent to setting *hbase.client.scanner.caching = 1*?
On Mon, Apr 13, 2015 at 10:25 AM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Flavio,
Currently, the default scanner caching value that Phoenix runs with
I tried setting hbase.client.scanner.caching = 1 on both the client and
server side and I still get that error :(
On Mon, Apr 13, 2015 at 10:31 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Would disabling caching turn off this kind of error? Is that possible?
Or is it equivalent to setting
Hi Flavio,
Currently, the default scanner caching value that Phoenix runs with is
1000. You can try reducing that number by updating the property
*hbase.client.scanner.caching* in your hbase-site.xml. If you are doing a
lot of processing for each record in your Mapper, you might
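For reference, a minimal hbase-site.xml fragment along those lines might look like the sketch below (the value 100 is purely illustrative, not a recommendation from this thread):

```xml
<!-- hbase-site.xml: fetch fewer rows per scanner.next() RPC so the
     client checks in with the region server more often -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value> <!-- illustrative; the Phoenix default cited here is 1000 -->
</property>
```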
Hi,
The parameter that actually controlled the timeout of my count(*) request
was hbase.regionserver.lease.period.
It should appear in the Phoenix conf.
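If raising that lease period is the route you take, it would be set in hbase-site.xml on the region servers, e.g. as in the sketch below (the 300000 ms value is only an example; note that newer HBase versions deprecate this property in favor of hbase.client.scanner.timeout.period):

```xml
<!-- hbase-site.xml (region server side): extend the scanner lease so
     slow clients are not timed out between next() calls -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>300000</value> <!-- example: 5 minutes; the default is 60000 ms -->
</property>
```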
From: Billy Watson [mailto:williamrwat...@gmail.com]
Sent: Friday 10 April 2015 18:27
To: user@phoenix.apache.org
Subject: Re: Socket timeout while