Hi Flavio,

Currently, the default scanner caching value that Phoenix runs with is 1000. You can try reducing that number by updating the property *hbase.client.scanner.caching* in your hbase-site.xml. If you do a lot of processing for each record in your Mapper, you might still see these errors.
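For reference, the override in hbase-site.xml would look something like the snippet below (the property name is the standard HBase client setting; the value 100 is only illustrative, not a tuned recommendation):

<property>
  <name>hbase.client.scanner.caching</name>
  <!-- Rows fetched per scanner RPC. A smaller batch means the client
       calls back to the server more often, so the scanner lease is
       renewed more frequently between batches. 100 is illustrative. -->
  <value>100</value>
</property>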
Regards
Ravi

On Mon, Apr 13, 2015 at 12:21 AM, Flavio Pompermaier <pomperma...@okkam.it> wrote:
> Hi to all,
>
> when running a MR job on my Phoenix table I get this exception:
>
> Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms
> passed since the last invocation, timeout is currently set to 60000
>   at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>   at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:52)
>   at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:104)
>   at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
>   at org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:67)
>   at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
>   at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:131)
>
> This is due to a long interval between two consecutive next() calls on the
> scan results.
> However, this error is not a problematic one: it just tells the client
> that the server has closed that scanner instance, so it could be fixed by
> regenerating a new scan restarting from the last valid key (obviously,
> on each successful next() you should track the last valid key).
> What do you think?
>
> Best,
> Flavio
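Flavio's restart idea, sketched against the plain HBase client API rather than Phoenix internals (the ResumableScan wrapper below is hypothetical, not existing Phoenix code; only the HBase client classes it calls are real):

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.ScannerTimeoutException;

// Hypothetical sketch: drain a scan, and when the server expires the
// scanner lease, reopen the scan just past the last row we processed.
public class ResumableScan {

  public static void scanAll(HTable table, Scan scan) throws IOException {
    byte[] lastKey = null;
    while (true) {
      if (lastKey != null) {
        // Smallest row key strictly greater than lastKey: lastKey + 0x00,
        // so the resumed scan skips the already-processed row.
        byte[] resumeKey = new byte[lastKey.length + 1];
        System.arraycopy(lastKey, 0, resumeKey, 0, lastKey.length);
        scan.setStartRow(resumeKey);
      }
      ResultScanner scanner = table.getScanner(scan);
      try {
        Result r;
        while ((r = scanner.next()) != null) {
          process(r);            // long per-record work happens here
          lastKey = r.getRow();  // track the last valid key on success
        }
        return;                  // scan finished normally
      } catch (ScannerTimeoutException e) {
        // Lease expired between next() calls; loop and reopen from lastKey.
      } finally {
        scanner.close();
      }
    }
  }

  private static void process(Result r) {
    // per-record processing goes here
  }
}

Appending a zero byte to the last key is the usual way to get an exclusive resume point, since Scan's start row is inclusive.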