[ https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14216752#comment-14216752 ]

Samarth Jain edited comment on PHOENIX-1463 at 11/18/14 8:34 PM:
-----------------------------------------------------------------

[~jamestaylor] - The scans are picked up by worker threads in parallel. 
Since we call future.get() in the order in which the scans were submitted, 
the approach you are suggesting would end up favoring scans that were 
submitted first (and not necessarily picked up first by worker threads) by 
giving them larger timeouts. I think a better approach would be to initialize 
a CountDownLatch with the number of scans that need to be executed. Each 
worker thread would then count down the latch in its call() method, and the 
parent thread would wait up to queryTimeout for the latch to reach 0 (see 
the sketch below).
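Roughly what I am picturing - just a sketch, the class name ScanDispatcher, the 
queryTimeoutMs parameter, and the Callable scan tasks here are placeholders 
rather than the actual Phoenix classes:

{code:java}
import java.sql.SQLTimeoutException;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class ScanDispatcher {
    void runScans(ExecutorService pool, List<Callable<Void>> scans, long queryTimeoutMs)
            throws Exception {
        // One latch shared by all scans, initialized with the number of scans to run.
        final CountDownLatch latch = new CountDownLatch(scans.size());
        for (final Callable<Void> scan : scans) {
            pool.submit(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    try {
                        scan.call();          // execute the scan
                    } finally {
                        latch.countDown();    // count down even if the scan fails
                    }
                    return null;
                }
            });
        }
        // The parent thread waits once, for the overall query timeout, instead of
        // handing each future.get() its own timeout in submission order.
        if (!latch.await(queryTimeoutMs, TimeUnit.MILLISECONDS)) {
            throw new SQLTimeoutException("Query timed out after " + queryTimeoutMs + " ms");
        }
    }
}
{code}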


> phoenix.query.timeoutMs doesn't work as expected
> ------------------------------------------------
>
>                 Key: PHOENIX-1463
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.2
>            Reporter: Jan Fernando
>            Assignee: Samarth Jain
>            Priority: Minor
>
> In doing performance testing with Phoenix I noticed that under heavy load we 
> saw queries taking as long as 300 secs even though we had set 
> phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
> when the parent thread waits for all the parallel scans to complete. Each 
> time we call rs.next() and need to load a new chunk of data from HBase we 
> again run parallel scans with a new 120 sec timeout. Therefore total query 
> time could be timeout * # chunks scanned. I think it would be more intuitive 
> if the query timeout applied to the query as a whole versus resetting for 
> each chunk.


