Re: ScanningResultIterator resiliency

2015-04-13 Thread Flavio Pompermaier
Hi Ravi, thanks for the link, but it describes a heavy-write scenario. In my case I'm just using PhoenixInputFormat to read data from an existing table, and no other process is using HBase, so I don't think it applies to me. Why don't you like the idea of recreating a new scan if the old one dies? Best,

Re: ScanningResultIterator resiliency

2015-04-13 Thread Ravi Kiran
Hi Flavio, Apparently, the scanner is timing out as the next call from the client to the RS in the PhoenixInputFormat isn't happening within the stipulated lease period. One experiment we can try is to have a NO-OP mapper and see if the exception occurs. I have a hunch the errors are coming

ScanningResultIterator resiliency

2015-04-13 Thread Flavio Pompermaier
Hi to all, when running an MR job on my Phoenix table I get this exception: Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms passed since the last invocation, timeout is currently set to 6 at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) at
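The exception above means the scanner's lease on the region server expired between successive next() calls from the client. One common mitigation, sketched here as an assumption rather than the thread's confirmed fix, is to raise the client scanner timeout in hbase-site.xml; the property name is the standard HBase client setting, and the value shown is purely illustrative:

```xml
<!-- hbase-site.xml (client side): illustrative value, tune for your workload -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <!-- how long (ms) a scanner lease stays valid between next() calls -->
  <value>600000</value>
</property>
```

This only buys more time per batch; if the mapper does very heavy per-record work, reducing the scanner caching (discussed later in the thread) attacks the same problem from the other side.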

Re: ScanningResultIterator resiliency

2015-04-13 Thread Flavio Pompermaier
Will disabling caching turn off this kind of error? Is that possible? Or is it equivalent to setting *hbase.client.scanner.caching = 1*? On Mon, Apr 13, 2015 at 10:25 AM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Flavio, Currently, the default scanner caching value that Phoenix runs with

Re: ScanningResultIterator resiliency

2015-04-13 Thread Flavio Pompermaier
I tried setting hbase.client.scanner.caching = 1 on both the client and server side and I still get that error :( On Mon, Apr 13, 2015 at 10:31 AM, Flavio Pompermaier pomperma...@okkam.it wrote: Will disabling caching turn off this kind of error? Is that possible? Or is it equivalent to setting

Re: ScanningResultIterator resiliency

2015-04-13 Thread Ravi Kiran
Hi Flavio, Currently, the default scanner caching value that Phoenix runs with is 1000. You can try reducing that number by updating the property *hbase.client.scanner.caching* in your hbase-site.xml. If you are doing a lot of processing for each record in your Mapper, you might
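Ravi's suggestion can be sketched as the following hbase-site.xml fragment. The value 100 is only an example of "reducing that number" from the default of 1000, not a recommendation from the thread:

```xml
<!-- hbase-site.xml (client side): example value, not a tuned recommendation -->
<property>
  <name>hbase.client.scanner.caching</name>
  <!-- rows fetched per scanner RPC; smaller batches mean more frequent
       next() calls, which keeps the scanner lease alive when each
       record takes a long time to process in the Mapper -->
  <value>100</value>
</property>
```

The trade-off is more round trips to the region server, so scan throughput drops as the value shrinks.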

RE: Socket timeout while counting number of rows of a table

2015-04-13 Thread PERNOLLET Martin
Hi, The parameter that actually set the timeout of my count(*) request was hbase.regionserver.lease.period. It should appear in the Phoenix conf. From: Billy Watson [mailto:williamrwat...@gmail.com] Sent: Friday 10 April 2015 18:27 To: user@phoenix.apache.org Subject: Re: Socket timeout while
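For reference, the property Martin mentions is a region-server-side setting in hbase-site.xml. The value below is illustrative (the HBase default for this setting was 60000 ms), and note that later HBase versions deprecate it in favor of hbase.client.scanner.timeout.period:

```xml
<!-- hbase-site.xml (region server side): illustrative value -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <!-- scanner/row-lock lease period in ms; governs how long the region
       server waits for the next client call before expiring the scanner -->
  <value>300000</value>
</property>
```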