[ https://issues.apache.org/jira/browse/PHOENIX-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319812#comment-14319812 ]

Rajeshbabu Chintaguntla commented on PHOENIX-1634:
--------------------------------------------------

[~apurtell]
bq. There is enough time for the split regions to migrate and then want to 
split again, implying time enough for a bunch of writes to accumulate passes 
between when regions of the primary table are moved and when index regions are 
finally relocated.
If the data region has already moved and the index region is in the middle of 
assignment, then the write to the index table is retried: when there is no 
colocation we write to the index table through HTable, which internally retries 
on NotServingRegionException.
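
For illustration (not from the original patch), a minimal sketch of that write 
path against the HBase client API; the index table name, column family, 
qualifier, and retry settings below are made-up placeholders, not 
Phoenix-specific values:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IndexWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Standard client retry knobs: they bound how long the client keeps
        // retrying while the index region is in transition.
        conf.setInt("hbase.client.retries.number", 35);
        conf.setLong("hbase.client.pause", 100);

        try (Connection conn = ConnectionFactory.createConnection(conf);
             // Hypothetical index table name, for illustration only.
             Table indexTable = conn.getTable(
                 TableName.valueOf("_LOCAL_IDX_MYSCHEMA.MY_MULTITENANT_TABLE"))) {
            Put put = new Put(Bytes.toBytes("index-row-key"));
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
            // If the index region is mid-assignment, this call blocks and
            // retries internally on NotServingRegionException until the
            // region is online again or the retry budget is exhausted.
            indexTable.put(put);
        }
    }
}
{code}
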
bq. Otherwise we wouldn't be seeing the split failures, right? 
Yes.
bq. What would happen with index queries during that time? 
Index queries go directly against the index table, so if any assignment is in 
progress the scan will retry on NotServingRegionException.
bq. but if colocation is required for correct answers, and colocation isn't 
enforced right away, then are there are periods of time when queries using the 
local index will return incorrect results?
Colocation is not mandatory for correct results. Without colocation we query 
through HTable, which retries and returns the proper results.
The only downside is the extra RPC overhead when regions are not colocated.
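
The read side looks the same from the client's point of view; a sketch of the 
retry behavior described in the two answers above (again with a made-up index 
table name):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class IndexScanSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table indexTable = conn.getTable(
                 TableName.valueOf("_LOCAL_IDX_MYSCHEMA.MY_MULTITENANT_TABLE"));
             ResultScanner scanner = indexTable.getScanner(new Scan())) {
            // Each batch fetch is an RPC; if the index region moves mid-scan
            // the client sees NotServingRegionException, relocates the region,
            // and retries, so the scan still returns correct rows -- just with
            // extra RPC overhead when regions are not colocated.
            for (Result row : scanner) {
                System.out.println(row);
            }
        }
    }
}
{code}
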
bq. In other words, shouldn't the primary region and all relevant index regions 
be moved as near in time as possible if not atomically?  If not why not.
During balancing they should be moved as near in time as possible. Currently, 
when a data or index region is moved explicitly, the corresponding region won't 
move until move is called on it explicitly as well. But the next time the 
balancer runs, any regions that are not co-located will be co-located 
automatically. I have raised PHOENIX-1658 to handle this.
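
To make the explicit-move case concrete, here is a sketch (assumptions: HBase 
1.x client API, placeholder table names, and pairing index regions with data 
regions by start key) of how one could manually restore co-location with 
Admin.move():
{code}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class ColocateSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName data = TableName.valueOf("MYSCHEMA.MY_MULTITENANT_TABLE");
            TableName index =
                TableName.valueOf("_LOCAL_IDX_MYSCHEMA.MY_MULTITENANT_TABLE");

            List<HRegionLocation> dataLocs =
                conn.getRegionLocator(data).getAllRegionLocations();
            List<HRegionLocation> idxLocs =
                conn.getRegionLocator(index).getAllRegionLocations();

            // Pair regions by start key and move each index region onto the
            // server currently hosting the matching data region.
            for (HRegionLocation d : dataLocs) {
                for (HRegionLocation i : idxLocs) {
                    if (Bytes.equals(d.getRegionInfo().getStartKey(),
                                     i.getRegionInfo().getStartKey())
                        && !d.getServerName().equals(i.getServerName())) {
                        admin.move(i.getRegionInfo().getEncodedNameAsBytes(),
                                   Bytes.toBytes(d.getServerName().getServerName()));
                    }
                }
            }
        }
    }
}
{code}
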
Thanks.


> LocalIndexSplitter prevents region from auto split
> --------------------------------------------------
>
>                 Key: PHOENIX-1634
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1634
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0, 4.3
>            Reporter: Mujtaba Chohan
>            Assignee: Rajeshbabu Chintaguntla
>         Attachments: PHOENIX-1634.patch, logs.zip, performance.py
>
>
> A local index is *not* created for a multi-tenant table; however, an empty 
> HBase table is created in advance for the local index. With data upserted 
> into the multi-tenant table, after multiple successive auto-splits, when a 
> region tries to split again on another region server, LocalIndexSplitter 
> prevents the auto-split from happening. [~rajesh23] Please see the log below. 
> Thanks [~apurtell] and [~jamestaylor] for narrowing down this issue.
> {code}
> WARN org.apache.hadoop.hbase.regionserver.LocalIndexSplitter: Index region 
> corresponindg to data region 
> MYSCHEMA.MY_MULTITENANT_TABLE,,1422663910075.db3861e02b58e21b5383704375539ee5.
>  not in the same server. So skipping the split.
> 2015-01-31 04:48:53,532 INFO 
> org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
> of failed split of 
> MYSCHEMA.MY_MULTITENANT_TABLE,,1422663910075.db3861e02b58e21b5383704375539ee5.;
>  Coprocessor bypassing region 
> MYSCHEMA.MY_MULTITENANT_TABLE,,1422663910075.db3861e02b58e21b5383704375539ee5.
>  split.
> java.io.IOException: Coprocessor bypassing region 
> MYSCHEMA.MY_MULTITENANT_TABLE,,1422663910075.db3861e02b58e21b5383704375539ee5.
>  split.
> at 
> org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:309)
> at 
> org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:655)
> at org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
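
For context on the "Coprocessor bypassing region ... split." message above: a 
RegionObserver that calls bypass() in its preSplit hook makes SplitTransaction 
abort the split exactly this way. A hypothetical reduction of the check, 
against the HBase 0.98/1.x coprocessor API (the real LocalIndexSplitter logic 
is more involved):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class SkipSplitObserver extends BaseRegionObserver {
    @Override
    public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // Placeholder for the real check: is the matching index region
        // hosted on this same region server?
        boolean indexRegionColocated = false;
        if (!indexRegionColocated) {
            // Bypassing the hook makes SplitTransaction.createDaughters()
            // throw the "Coprocessor bypassing region ... split." IOException
            // seen in the log, rolling back the split.
            ctx.bypass();
        }
    }
}
{code}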


