Sunil,

Can you tell us a little bit more about the table -
1) How many regions are there?

2) Do you have phoenix stats enabled?
http://phoenix.apache.org/update_statistics.html

3) Is the table salted?

4) Do you have any overrides for scanner caching
(hbase.client.scanner.caching) or result size
(hbase.client.scanner.max.result.size) in your hbase-site.xml?
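For reference, overrides for those two settings would look roughly like the fragment below in hbase-site.xml. This is only a sketch - the property names are the ones asked about above, but the values shown are illustrative placeholders, not recommendations:

```xml
<!-- hbase-site.xml: client-side scanner overrides (example values only) -->
<property>
  <name>hbase.client.scanner.caching</name>
  <!-- rows fetched per scanner RPC; very large values can make each
       next() call slow enough to trip scanner timeouts -->
  <value>100</value>
</property>
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <!-- max bytes returned per scanner RPC (example: 2 MB) -->
  <value>2097152</value>
</property>
```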

Thanks,
Samarth


On Mon, Aug 24, 2015 at 2:03 PM, Sunil B <[email protected]> wrote:

> Hi,
>
>     Phoenix Version: 4.5.0-HBase-1.0
>     Client: sqlline/JDBC driver
>
>    I have a large table which has around 100GB of data. I am trying to
> execute a simple query "select * from TABLE", which times out with
> scanner timeout exception. Please let me know if there is a way to
> avoid this timeout without changing server side scanner timeout.
>
>   Exception: WARN client.ScannerCallable: Ignore, probably already closed
> org.apache.hadoop.hbase.UnknownScannerException:
> org.apache.hadoop.hbase.UnknownScannerException: Name: 15791, already
> closed?
>
>
>   The reason for the timeout is that Phoenix divides this query into
> multiple parallel scans and calls scanner.next on each of them at the
> start of query execution (because PeekingResultIterator.peek is
> invoked from the submitWork method of the ParallelIterators class).
>
>   Is there a way I can force Phoenix to do a serial scan instead of
> parallel scan with PeekingResultIterator?
>
> Thanks,
> Sunil
>
