[ https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215784#comment-14215784 ]

James Taylor commented on PHOENIX-1463:
---------------------------------------

In BaseResultIterators.getIterators(), instead of always passing in timeoutMs 
here:
{code}
PeekingResultIterator iterator = scanPair.getSecond().get(timeoutMs, TimeUnit.MILLISECONDS);
{code}
we should calculate the max end time once outside the loop and, on each iteration, pass the remaining time (max end time minus System.currentTimeMillis()) instead of timeoutMs, throwing before calling get if the remaining time is <= 0. That way the timeout bounds the overall wait rather than resetting for each future.

[~samarthjain] - would you mind looking at this? It's likely we'd want this in 
a 4.2.2 release.

> phoenix.query.timeoutMs doesn't work as expected
> ------------------------------------------------
>
>                 Key: PHOENIX-1463
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.2
>            Reporter: Jan Fernando
>            Assignee: Samarth Jain
>            Priority: Minor
>
> In doing performance testing with Phoenix I noticed that under heavy load we 
> saw queries taking as long as 300 secs even though we had set 
> phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
> when the parent thread waits for all the parallel scans to complete. Each 
> time we call rs.next() and need to load a new chunk of data from HBase we 
> again run parallel scans with a new 120 sec timeout. Therefore total query 
> time could be timeout * # chunks scanned. I think it would be more intuitive 
> if the query timeout applied to the query as a whole versus resetting for 
> each chunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
