[ https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16085620#comment-16085620 ]

Hadoop QA commented on PHOENIX-4010:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12876644/PHOENIX-4010.patch
  against master branch at commit f2eac858eab64fda3eacf7f6e1b2ab9656bf4cfa.
  ATTACHMENT ID: 12876644

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
52 warning messages.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +    private static final Map<Integer, TableRef> cacheUsingTableRefMap = new ConcurrentHashMap<Integer, TableRef>();
+                            return addServerCache(htable, key, cacheUsingTable, cacheId, cachePtr, cacheFactory, txState);
+             * Allow for the possibility that the region we based where to send our cache has split and been relocated
+             * to another region server *after* we sent it, but before we removed it. To accommodate this, we iterate
+             * through the current metadata boundaries and remove the cache once for each server that we originally sent
+                LOG.debug(addCustomAnnotations("Removing Cache " + cacheId + " from servers.", connection));
+                        || (keys != null && ByteUtil.contains(keys,entry.getRegionInfo().getStartKey()))) {
+                                    public RemoveServerCacheResponse call(ServerCachingService instance)
+                                        BlockingRpcCallback<RemoveServerCacheResponse> rpcCallback = new BlockingRpcCallback<RemoveServerCacheResponse>();
+                                        RemoveServerCacheRequest.Builder builder = RemoveServerCacheRequest

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.GroupByIT

     {color:red}-1 core zombie tests{color}.  There is 1 zombie test:
     at org.apache.cxf.transport.jms.RequestResponseTest.testRequestTopicResponseStaticQueue(RequestResponseTest.java:90)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1207//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1207//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1207//console

This message is automatically generated.

> Hash Join cache may not be sent to all regionservers when we have a stale HBase
> meta cache
> ----------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4010
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4010
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.12.0
>
>         Attachments: PHOENIX-4010.patch
>
>
>  If the region locations change and our HBase meta cache is not updated, then
> we might not send the hash join cache to all of the region servers hosting the
> table's regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package projected HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class in the same package and a cast
>         // to this implementation class, so it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
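>
> Note that reload starts out false, so getRegionLocation() above answers from the
> client-side meta cache and can hand back locations that are no longer accurate.
> For contrast, here is a minimal sketch of the same boundary walk with reload=true,
> which re-reads hbase:meta at every step instead of trusting the cache (illustration
> only, not Phoenix code; FreshRegionLocations and getFreshTableRegions are
> hypothetical names, and the HBase 1.x HConnection API from the snippet above is
> assumed):
> {code}
> import java.io.IOException;
> import java.util.List;
>
> import org.apache.hadoop.hbase.HConstants;
> import org.apache.hadoop.hbase.HRegionLocation;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.HConnection;
> import org.apache.hadoop.hbase.util.Bytes;
>
> import com.google.common.collect.Lists;
>
> public class FreshRegionLocations {
>     // Hypothetical helper: same walk as above, but reload=true forces a
>     // fresh hbase:meta lookup, so a just-moved region cannot be missed.
>     static List<HRegionLocation> getFreshTableRegions(HConnection connection,
>             byte[] tableName) throws IOException {
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, true); // reload=true
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
>     }
> }
> {code}
> Forcing a reload on every send would add a meta round trip per region, which is
> presumably why the cached path is used here; the patch lines quoted in the QA
> report above instead iterate the current metadata boundaries at cache-removal time.
>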
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> .....
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
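>
> The dedupe key is the location snapshot taken above, so "one RPC per server" is
> only as good as that snapshot. A self-contained sketch of the pattern in plain
> Java (PerServerDedupe and the String keys are hypothetical stand-ins for the
> Phoenix types):
> {code}
> import java.util.LinkedHashSet;
> import java.util.Map;
> import java.util.Set;
>
> public class PerServerDedupe {
>     // Stand-in for the ServerCacheClient loop: the region -> server map
>     // comes from the (possibly stale) location snapshot.
>     static Set<String> sendCacheOncePerServer(Map<String, String> regionToServer) {
>         Set<String> servers = new LinkedHashSet<String>();
>         for (Map.Entry<String, String> entry : regionToServer.entrySet()) {
>             if (servers.add(entry.getValue())) {
>                 // First region seen on this server: send the cache once.
>                 System.out.println("addServerCache RPC -> " + entry.getValue());
>             }
>         }
>         return servers;
>     }
> }
> {code}
>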
> For example, table T has two regions, R1 and R2, originally hosted on region
> server RS1. While the Phoenix/HBase connection is still active, R2 is moved to
> RS2, but the stale meta cache still reports the old locations, i.e. R1 and R2
> on RS1. When we start copying the hash table, we copy it for R1 and skip R2
> because they appear to be hosted on the same region server. As a result, the
> query fails because it is unable to find the hash table cache on RS2 when
> processing region R2.
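>
> Tracing that scenario in a runnable sketch (hypothetical host names, not Phoenix
> code; the dedupe is the same one-RPC-per-server rule sketched above):
> {code}
> import java.util.LinkedHashMap;
> import java.util.LinkedHashSet;
> import java.util.Map;
> import java.util.Set;
>
> public class StaleSnapshotDemo {
>     public static void main(String[] args) {
>         // Stale meta cache: both regions still appear to live on RS1.
>         Map<String, String> staleSnapshot = new LinkedHashMap<String, String>();
>         staleSnapshot.put("R1", "RS1");
>         staleSnapshot.put("R2", "RS1");
>
>         // Reality after the move: R2 is now hosted on RS2.
>         Map<String, String> actual = new LinkedHashMap<String, String>();
>         actual.put("R1", "RS1");
>         actual.put("R2", "RS2");
>
>         // One-RPC-per-server dedupe over the stale snapshot contacts only RS1.
>         Set<String> cachedOn = new LinkedHashSet<String>(staleSnapshot.values());
>
>         // Each region is scanned on its real host; R2's host never got the cache.
>         for (Map.Entry<String, String> e : actual.entrySet()) {
>             if (!cachedOn.contains(e.getValue())) {
>                 System.out.println("Join over " + e.getKey()
>                         + " fails: no hash cache on " + e.getValue());
>             }
>         }
>     }
> }
> {code}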



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
