[ https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824566#comment-17824566 ]

ASF GitHub Bot commented on PHOENIX-7253:
-----------------------------------------

tkhurana commented on code in PR #1848:
URL: https://github.com/apache/phoenix/pull/1848#discussion_r1516942676


##########
phoenix-core-client/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java:
##########
@@ -1005,6 +1005,43 @@ private ScansWithRegionLocations getParallelScans(byte[] startKey, byte[] stopKe
         
         int regionIndex = 0;
         int startRegionIndex = 0;
+
+        List<HRegionLocation> regionLocations;
+        if (isSalted && !isLocalIndex) {
+            // key prefix = salt num + view index id + tenant id
+            // If salting is used with tenant or view index id, scan start and end
+            // rowkeys will not be empty. We need to generate region locations for
+            // all the scan range such that we cover (each salt bucket num) + (prefix starting from
+            // index position 1 to cover view index and/or tenant id and/or remaining prefix).
+            if (scan.getStartRow().length > 0 && scan.getStopRow().length > 0) {
+                regionLocations = new ArrayList<>();
+                for (int i = 0; i < getTable().getBucketNum(); i++) {
+                    byte[] saltStartRegionKey = new byte[scan.getStartRow().length];
+                    saltStartRegionKey[0] = (byte) i;
+                    System.arraycopy(scan.getStartRow(), 1, saltStartRegionKey, 1,
+                        scan.getStartRow().length - 1);
+
+                    byte[] saltStopRegionKey = new byte[scan.getStopRow().length];
+                    saltStopRegionKey[0] = (byte) i;
+                    System.arraycopy(scan.getStopRow(), 1, saltStopRegionKey, 1,
+                        scan.getStopRow().length - 1);
+
+                    regionLocations.addAll(
+                        getRegionBoundaries(scanGrouper, saltStartRegionKey, saltStopRegionKey));
+                }

Review Comment:
   Is all this necessary? The variable `traverseAllRegions` is set to true for salted tables, so what are we gaining with this extra work?
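For context, the per-bucket boundary generation in the hunk above can be sketched in isolation. The class below is a hypothetical standalone illustration (not Phoenix API): for each salt bucket it copies the scan's start/stop row keys and overwrites byte 0 with the bucket number, producing one key range per bucket.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the per-salt-bucket range generation shown in the diff.
// Class and method names are illustrative, not part of Phoenix.
public class SaltedScanRanges {

    /** Returns one {start, stop} key pair per salt bucket. */
    public static List<byte[][]> perBucketRanges(byte[] startRow, byte[] stopRow,
            int bucketNum) {
        List<byte[][]> ranges = new ArrayList<>();
        for (int i = 0; i < bucketNum; i++) {
            // clone() is equivalent to the new byte[] + System.arraycopy in the
            // diff, since only byte 0 (the salt byte) is then overwritten.
            byte[] start = startRow.clone();
            start[0] = (byte) i;
            byte[] stop = stopRow.clone();
            stop[0] = (byte) i;
            ranges.add(new byte[][] { start, stop });
        }
        return ranges;
    }
}
```

In the PR, each such pair is then passed to `getRegionBoundaries(...)` so that only the regions covering that bucket's sub-range are fetched from meta, rather than all regions of the table.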





> Perf improvement for non-full scan queries on large table
> ---------------------------------------------------------
>
>                 Key: PHOENIX-7253
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7253
>             Project: Phoenix
>          Issue Type: Improvement
>    Affects Versions: 5.2.0, 5.1.3
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Critical
>             Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can suffer problematic 
> performance if we retrieve all region locations from meta for the given table 
> before generating parallel or sequential scans for a query. The perf impact 
> particularly hurts range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view ends up retrieving the 
> region locations of all regions of the base table. If an IOException is thrown 
> by the HBase client during any region location lookup in meta, we only perform 
> a single retry.
> Proposal:
>  # All non-point-lookup queries should retrieve only the region locations that 
> cover the scan boundaries, avoiding fetching all region locations of the base 
> table.
>  # Make retries configurable, with a higher default value.
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions. Stack trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
>     at org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
>         ... 254 more
> Caused by: java.lang.InterruptedException
>         at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
>         at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1288)
>         at java.base/java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:424)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1117)
>         ... 264 more {code}
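Proposal item #2 above (configurable retries) could be sketched as a bounded retry loop around the meta lookup. The helper below is a standalone illustration only, not Phoenix code; the `maxRetries` parameter stands in for a hypothetical configuration property, and the method/class names are assumptions.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a region-location lookup up to a configurable
// number of times instead of the single hard-coded retry described above.
public class RetryingLookup {

    /** Runs lookup, retrying on IOException up to maxRetries extra attempts. */
    public static <T> T withRetries(Callable<T> lookup, int maxRetries)
            throws Exception {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return lookup.call();
            } catch (IOException e) {
                last = e;  // retry only on IOException from the client lookup
            }
        }
        throw last;  // all attempts exhausted
    }
}
```

In practice the retry count would be read from configuration (with a higher default than one), so operators can tune it for tables whose meta lookups fail transiently.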



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
