Hi Ravi,
thanks for the link, but it describes a heavy-write scenario. In my case I'm
just using PhoenixInputFormat to read data from an existing table, and no
other process is using HBase, so I don't think it applies here.
Why wouldn't you just recreate the scan if the old one dies?
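Something like this rough sketch is what I have in mind (plain HBase client
API for readability; process() is a placeholder for the real per-record work,
and scanWithRestart is my own name, not an existing Phoenix or HBase method):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.ScannerTimeoutException;
    import org.apache.hadoop.hbase.util.Bytes;

    void scanWithRestart(HTableInterface table, Scan scan) throws IOException {
        byte[] lastKey = null;            // last row key returned successfully
        ResultScanner scanner = table.getScanner(scan);
        boolean done = false;
        while (!done) {
            try {
                Result r;
                while ((r = scanner.next()) != null) {
                    lastKey = r.getRow(); // track the last valid key
                    process(r);           // per-record work, possibly slow
                }
                done = true;
            } catch (ScannerTimeoutException e) {
                // the server closed the scanner: reopen it just past the
                // last key we saw instead of failing the whole job
                scanner.close();
                Scan resumed = new Scan(scan);
                if (lastKey != null) {
                    resumed.setStartRow(Bytes.add(lastKey, new byte[] {0}));
                }
                scanner = table.getScanner(resumed);
            }
        }
        scanner.close();
    }

    void process(Result r) { /* real per-record work goes here */ }

The only cost is remembering the last key on each successful next(), so I
don't see why the client couldn't do this transparently.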

Best,
Flavio

On Mon, Apr 13, 2015 at 6:35 PM, Ravi Kiran <maghamraviki...@gmail.com>
wrote:

> Hi Flavio,
>
>    One good blog for reference is
> http://gbif.blogspot.com/2012/07/optimizing-writes-in-hbase.html. Hope it
> helps.
>
> Regards
> Ravi
>
>
>
> On Mon, Apr 13, 2015 at 2:31 AM, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> I tried to set hbase.client.scanner.caching = 1 on both the client and
>> the server side, and I still get that error :(
>>
>>
>> On Mon, Apr 13, 2015 at 10:31 AM, Flavio Pompermaier <
>> pomperma...@okkam.it> wrote:
>>
>>> Would disabling caching get rid of this kind of error? Is that possible,
>>> or is it equivalent to setting *hbase.client.scanner.caching = 1*?
>>>
>>> On Mon, Apr 13, 2015 at 10:25 AM, Ravi Kiran <maghamraviki...@gmail.com>
>>> wrote:
>>>
>>>> Hi Flavio,
>>>>
>>>>   Currently, the default scanner caching value that Phoenix runs with
>>>> is 1000. You can try reducing that number by updating the property
>>>> "*hbase.client.scanner.caching*" in your hbase-site.xml. If you are
>>>> doing a lot of processing for each record in your Mapper, though, you
>>>> might still see these errors.
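>>>>
>>>>   For reference, the hbase-site.xml entry would look something like
>>>> this (the value here is just an example):
>>>>
>>>>     <property>
>>>>       <name>hbase.client.scanner.caching</name>
>>>>       <value>100</value>
>>>>     </property>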
>>>>
>>>> Regards
>>>> Ravi
>>>>
>>>> On Mon, Apr 13, 2015 at 12:21 AM, Flavio Pompermaier <
>>>> pomperma...@okkam.it> wrote:
>>>>
>>>>> Hi to all,
>>>>>
>>>>> when running a MapReduce job on my Phoenix table I get this exception:
>>>>>
>>>>> Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms
>>>>> passed since the last invocation, timeout is currently set to 60000
>>>>> at
>>>>> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>>>>> at
>>>>> org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:52)
>>>>> at
>>>>> org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:104)
>>>>> at
>>>>> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
>>>>> at
>>>>> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:67)
>>>>> at
>>>>> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
>>>>> at
>>>>> org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:131)
>>>>>
>>>>> This is caused by a long interval between two consecutive next() calls
>>>>> on the scan results.
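>>>>> (The 60000 in the message should be the scanner lease timeout in
>>>>> milliseconds; if I read the HBase docs correctly it can be raised in
>>>>> hbase-site.xml with something like
>>>>>
>>>>>     <property>
>>>>>       <name>hbase.client.scanner.timeout.period</name>
>>>>>       <value>300000</value>
>>>>>     </property>
>>>>>
>>>>> but that only hides the problem, so I'd prefer a client-side fix.)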
>>>>> However, this error is not really problematic: it just tells the client
>>>>> that the server has closed that scanner instance. It could be handled by
>>>>> creating a new scan that restarts from the last valid key (obviously you
>>>>> would have to track the last valid key on each successful next()).
>>>>> What do you think?
>>>>>
>>>>> Best,
>>>>> Flavio
>>>>>
>>>>
>>>>
>>>
>>
>
