[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-08-24 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434489#comment-15434489
 ] 

chenglei commented on PHOENIX-2900:
---

I created a new issue 
[PHOENIX-3199|https://issues.apache.org/jira/browse/PHOENIX-3199] to fix the 
ServerCacheClient problem.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_addendum1.patch, PHOENIX-2900_v1.patch, 
> PHOENIX-2900_v2.patch, PHOENIX-2900_v3.patch, PHOENIX-2900_v4.patch, 
> PHOENIX-2900_v5.patch, PHOENIX-2900_v6.patch, PHOENIX-2900_v7.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> 

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-08-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433203#comment-15433203
 ] 

ASF GitHub Bot commented on PHOENIX-2900:
-

Github user comnetwork closed the pull request at:

https://github.com/apache/phoenix/pull/180



[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393937#comment-15393937
 ] 

James Taylor commented on PHOENIX-2900:
---

Great, glad to hear everything is working now. There were bugs in 
ScanRanges.intersectScan, so it shouldn't be semantically equivalent before and 
after my fix.
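
The core of this discussion is how a scan's key range is tested against a region's [startKey, endKey) boundary. The sketch below only illustrates that general intersection check, using the usual HBase convention that an empty key means "unbounded" (a table's first region has an empty start key, its last region an empty end key); it is not Phoenix's actual ScanRanges.intersectScan code, which also has to account for the salt byte and skip-scan ranges discussed in this issue.

{code:borderStyle=solid}
// Illustrative only: a half-open scan range [scanStart, scanStop) intersects a
// region [regionStart, regionEnd) when the scan starts before the region ends
// and stops after the region starts. An empty byte[] means "unbounded".
import org.apache.hadoop.hbase.util.Bytes;

public class RegionRangeSketch {
    static boolean intersects(byte[] scanStart, byte[] scanStop,
                              byte[] regionStart, byte[] regionEnd) {
        boolean startsBeforeRegionEnd =
                regionEnd.length == 0 || Bytes.compareTo(scanStart, regionEnd) < 0;
        boolean stopsAfterRegionStart =
                scanStop.length == 0 || Bytes.compareTo(scanStop, regionStart) > 0;
        return startsBeforeRegionEnd && stopsAfterRegionStart;
    }
}
{code}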


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-26 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393485#comment-15393485
 ] 

chenglei commented on PHOENIX-2900:
---

[~jamestaylor], sorry for the delayed response. Yes, when the 
ScanRanges.intersectRegion() method invokes the ScanRanges.intersectScan 
method with the crossesRegionBoundary parameter set to true, the test is OK, 
in both the unit test and my 25-node cluster.

And I noticed another problem, for your reference: if I only restore the 
ScanRanges.intersectScan method to the code before your patch (the same as 
4.7.0) and change the ScanRanges.HAS_INTERSECTION field to public access, the 
following testSaltTableJoin method added to QueryCompilerTest can pass with the 
crossesRegionBoundary parameter both true and false. If I run the 
testSaltTableJoin method with the newest patch, it can only pass with the 
crossesRegionBoundary parameter set to true:

{code:borderStyle=solid}
@Test
public void testSaltTableJoin() throws Exception {
    PhoenixConnection conn = (PhoenixConnection) DriverManager.getConnection(getUrl());
    try {
        // LHS: salted table
        conn.createStatement().execute("drop table if exists SALT_TEST2900");
        conn.createStatement().execute(
            "create table SALT_TEST2900"+
            "("+
            "id UNSIGNED_INT not null primary key,"+
            "appId VARCHAR"+
            ")SALT_BUCKETS=2");

        // RHS: unsalted table
        conn.createStatement().execute("drop table if exists RIGHT_TEST2900");
        conn.createStatement().execute(
            "create table RIGHT_TEST2900"+
            "("+
            "appId VARCHAR not null primary key,"+
            "createTime VARCHAR"+
            ")");

        String sql = "select * from SALT_TEST2900 a inner join RIGHT_TEST2900 b on a.appId=b.appId where a.id>=3 and a.id<=5";
        HashJoinPlan plan = (HashJoinPlan) getQueryPlan(sql, Collections.emptyList());
        ScanRanges ranges = plan.getContext().getScanRanges();

        List<HRegionLocation> regionLocations =
                conn.getQueryServices().getAllTableRegions(Bytes.toBytes("SALT_TEST2900"));
        for (HRegionLocation regionLocation : regionLocations) {
            assertTrue(ranges.intersectScan(null, regionLocation.getRegionInfo().getStartKey(),
                    regionLocation.getRegionInfo().getEndKey(), 0, false) == ScanRanges.HAS_INTERSECTION);
            assertTrue(ranges.intersectScan(null, regionLocation.getRegionInfo().getStartKey(),
                    regionLocation.getRegionInfo().getEndKey(), 0, true) == ScanRanges.HAS_INTERSECTION);
        }
    } finally {
        conn.close();
    }
}
{code}

That may mean the modified ScanRanges.intersectScan method is not semantically 
equivalent to the method before your modification.


 


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392182#comment-15392182
 ] 

James Taylor commented on PHOENIX-2900:
---

@chenglei - do things look ok now?


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391136#comment-15391136
 ] 

Hudson commented on PHOENIX-2900:
-

SUCCESS: Integrated in Phoenix-master #1338 (See 
[https://builds.apache.org/job/Phoenix-master/1338/])
PHOENIX-2900 Unable to find hash cache once a salted table 's first 
(jamestaylor: rev 96c0f9f7537d218a0848d24965d7dc3ec3140a4c)
* phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/SaltedScanRangesTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java



[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391105#comment-15391105
 ] 

James Taylor commented on PHOENIX-2900:
---

Pushed addendum patch to 4.x and master branches.


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390953#comment-15390953
 ] 

chenglei commented on PHOENIX-2900:
---

[~jamestaylor], did you commit your patch to master?

It seems the patch has a serious bug: even a join SQL on a normal salted table 
(whose first region is not split) does not work.

bq. Your QueryCompilerTest changes won't work - the non-IT tests don't spin up 
a mini cluster so they don't have access to any HBase APIs.

The QueryCompilerTest changes use the ConnectionlessQueryServicesImpl to stand in 
for the HBase APIs. It can work as a unit test to verify the salted table join.

I modified my unit test to match the newest master branch, and the test failed. 
The test method added to QueryCompilerTest is as follows:

{code:borderStyle=solid}
@Test
public void testSaltTableJoin() throws Exception {
    PhoenixConnection conn = (PhoenixConnection) DriverManager.getConnection(getUrl());
    try {
        // LHS: salted table
        conn.createStatement().execute("drop table if exists SALT_TEST2900");
        conn.createStatement().execute(
            "create table SALT_TEST2900"+
            "("+
            "id UNSIGNED_INT not null primary key,"+
            "appId VARCHAR"+
            ")SALT_BUCKETS=2");

        // RHS: unsalted table
        conn.createStatement().execute("drop table if exists RIGHT_TEST2900");
        conn.createStatement().execute(
            "create table RIGHT_TEST2900"+
            "("+
            "appId VARCHAR not null primary key,"+
            "createTime VARCHAR"+
            ")");

        String sql = "select * from SALT_TEST2900 a inner join RIGHT_TEST2900 b on a.appId=b.appId where a.id>=3 and a.id<=5";
        HashJoinPlan plan = (HashJoinPlan) getQueryPlan(sql, Collections.emptyList());
        ScanRanges ranges = plan.getContext().getScanRanges();

        List<HRegionLocation> regionLocations =
                conn.getQueryServices().getAllTableRegions(Bytes.toBytes("SALT_TEST2900"));
        for (HRegionLocation regionLocation : regionLocations) {
            assertTrue(ranges.intersectRegion(regionLocation.getRegionInfo().getStartKey(),
                    regionLocation.getRegionInfo().getEndKey(), false));
        }
    } finally {
        conn.close();
    }
}
{code}

I also executed the following normal salted table join (without any region split) 
in my 25-node HBase cluster using the newest 4.x-HBase-0.98 branch; it also failed:

{code:borderStyle=solid}
@Test
public void testNormalSaltedTableJoin() throws Exception {
    //1.create LHS SALT_TEST table
    this.jdbcTemplate.update("drop table if exists SALT_TEST2900");
    this.jdbcTemplate.update(
        "create table SALT_TEST2900"+
        "("+
        "id UNSIGNED_INT not null primary key,"+
        "appId VARCHAR"+
        ")SALT_BUCKETS=2");

    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(1,'app1')");
    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(2,'app2')");
    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(3,'app3')");
    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(4,'app4')");
    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(5,'app5')");
    this.jdbcTemplate.update("upsert into SALT_TEST2900(id,appId) values(6,'app6')");

    //5.create RHS RIGHT_TEST table
    this.jdbcTemplate.update("drop table if exists RIGHT_TEST2900");
    this.jdbcTemplate.update(
        "create table RIGHT_TEST2900"+
        "("+
        "appId VARCHAR not null primary key,"+
        "createTime VARCHAR"+
        ")");

    this.jdbcTemplate.update("upsert into RIGHT_TEST2900(appId,createTime) values('app2','201601')");
    this.jdbcTemplate.update("upsert into RIGHT_TEST2900(appId,createTime) values('app3','201602')");
    this.jdbcTemplate.update("upsert into RIGHT_TEST2900(appId,createTime) values('app4','201603')");
    this.jdbcTemplate.update("upsert into RIGHT_TEST2900(appId,createTime) values('app5','201604')");

    ((PhoenixConnection) this.dataSource.getConnection()).getQueryServices()
            .clearTableRegionCache(Bytes.toBytes("SALT_TEST2900"));

    //7.do join, throws exception
    String sql = "select * from SALT_TEST2900 a inner join RIGHT_TEST2900 b on a.appId=b.appId where a.id>=3 and a.id<=5";
    List<Map<String, Object>> result = this.jdbcTemplate.queryForList(sql, new Object[0]);
    assertTrue(result.size() == 3);
}
{code}


I think the problem with your modification in ScanRanges.intersectScan() is: 
yes, you made the comparison ignore the salted 

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390846#comment-15390846
 ] 

Hudson commented on PHOENIX-2900:
-

FAILURE: Integrated in Phoenix-master #1337 (See 
[https://builds.apache.org/job/Phoenix-master/1337/])
PHOENIX-2900 Unable to find hash cache once a salted table 's first 
(jamestaylor: rev 59497d525d94addd9c9faf51ea64329c64149938)
* phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/SaltedIndexIT.java
* phoenix-core/src/test/java/org/apache/phoenix/query/QueryPlanTest.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/ScanRangesTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/SaltedScanRangesTest.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseViewIT.java



[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-23 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390767#comment-15390767
 ] 

Samarth Jain commented on PHOENIX-2900:
---

+1 for your patch, James.


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390766#comment-15390766
 ] 

James Taylor commented on PHOENIX-2900:
---

Thanks for the validation that the patch fixes the issue, [~comnetwork], and for 
helping to get to the bottom of this - we'll get this into the upcoming 4.8 RC. 
I'll try your ScanRanges fix along with my fixes and see if it still causes the 
unit tests to hang. If it does, I'll make sure the explain plan reflects the 
salt buckets (that can be fixed in other ways).

Your QueryCompilerTest changes won't work - the non-IT tests don't spin up a 
mini cluster, so they don't have access to any HBase APIs.


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-23 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390580#comment-15390580
 ] 

chenglei commented on PHOENIX-2900:
---

[~jamestaylor], I updated the test case, deleted the HashJoinSplitTableIT, and 
just added a unit test, testSaltTableJoin, in the QueryCompilerTest class.

Your modification does indeed fix the issue, but I think it is a bit more complex, 
and as you know, the logic in the ScanRanges.intersects() method is already very 
complex and difficult to understand.

There is another reason I made the following change:
{code:borderStyle=solid} 
-if (useSkipScanFilter && isSalted && !isPointLookup) {
+if ( isSalted && !isPointLookup) {
{code}

When I execute the "explain select * from SALT_TEST2900 where id>=3 and id<=5" 
SQL, where SALT_TEST2900 is a salted table, the output is:
{code:borderStyle=solid} 
 CLIENT PARALLEL 2-WAY RANGE SCAN OVER SALT_TEST2900 [0,3] - [0,5]
 CLIENT MERGE SORT
{code}

When I use the explain statement to debug or optimize a SQL, the "SALT_TEST2900 
[0,3] - [0,5]" output string is very confusing, because it ignores the salt 
byte. When I first looked at this, I even doubted whether the salt mechanism had 
taken effect.

With the patch, the output is:

{code:borderStyle=solid} 
 CLIENT PARALLEL 2-WAY RANGE SCAN OVER SALT_TEST2900 [0,3] - [1,5]
 CLIENT MERGE SORT
{code}

I think the output "SALT_TEST2900 [0,3] - [1,5]" is better; it makes clear 
that the query is using the salt mechanism.
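
As background for why the salt byte shows up in those key ranges, here is a minimal sketch of the general salting idea: a one-byte prefix computed from a hash of the row key modulo SALT_BUCKETS. The hash below is only a placeholder, not Phoenix's actual SaltingUtil logic, so treat it as an illustration of the key layout rather than the real implementation.

{code:borderStyle=solid}
// Sketch of the salting idea: the first row-key byte is hash(rowKey) % SALT_BUCKETS,
// so with SALT_BUCKETS=2 the keys fall under the prefixes 0x00 and 0x01, and a range
// scan that keeps the salt byte spans [0,3] - [1,5] rather than [0,3] - [0,5].
import java.util.Arrays;

public class SaltSketch {
    static byte[] saltedKey(byte[] rowKey, int saltBuckets) {
        // Placeholder hash; Phoenix computes its own hash over the row key bytes.
        byte salt = (byte) ((Arrays.hashCode(rowKey) & Integer.MAX_VALUE) % saltBuckets);
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = salt;                                        // salt byte prefix
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);   // original key follows
        return salted;
    }
}
{code}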


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390420#comment-15390420
 ] 

James Taylor commented on PHOENIX-2900:
---

[~samarthjain] - would you mind reviewing?

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389842#comment-15389842
 ] 

Hadoop QA commented on PHOENIX-2900:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12819647/PHOENIX-2900_v2.patch
  against master branch at commit 3878f3cbfb31e2058adc32d92593a8743911569e.
  ATTACHMENT ID: 12819647

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/463//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/463//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/463//console

This message is automatically generated.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-22 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389807#comment-15389807
 ] 

Samarth Jain commented on PHOENIX-2900:
---

Unfortunately, [~comnetwork], this is something we won't be able to write an 
automated test for. Our test setup spawns mini-clusters which run in the same 
JVM as the Phoenix client. I would suggest not including the test class in 
your patch, since it isn't able to effectively test the bug you found. 

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389745#comment-15389745
 ] 

James Taylor commented on PHOENIX-2900:
---

Actually, [~comnetwork], hold off on trying that patch - it's not quite right. 
Let me work up a new one.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-22 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389000#comment-15389000
 ] 

Ankit Singhal commented on PHOENIX-2900:


[~comnetwork]/[~giacomotaylor]/[~samarthjain], should we move this out if it is 
not important for 4.8.0?

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-21 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388805#comment-15388805
 ] 

chenglei commented on PHOENIX-2900:
---

[~jamestaylor], [~samarthjain], thank you for the review. This issue can only be 
reproduced in a distributed cluster, because the ServerCache is JVM-scoped. I will 
change my test case.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388443#comment-15388443
 ] 

James Taylor commented on PHOENIX-2900:
---

Your test HashJoinSplitTableIT passes without any changes, so it's not 
reproducing the issue. I think the first step is to make sure you have a test 
case that can repro the issue.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388416#comment-15388416
 ] 

James Taylor commented on PHOENIX-2900:
---

The test run doesn't complete with your patch (see console output of test run). 
I confirmed this locally too. Also, the call to keyRanges.intersects() in 
ServerCacheClient.addServerCache() ignores the salt byte, so I'm not sure why 
your change would make any difference:
{code}
if (!servers.contains(entry) &&
        keyRanges.intersects(regionStartKey, regionEndKey,
                cacheUsingTable.getIndexType() == IndexType.LOCAL
                        ? ScanUtil.getRowKeyOffset(regionStartKey, regionEndKey) : 0, true)) {
{code} 
I think this issue needs a bit more analysis. I'll try to play around with it 
after making the change that Samarth recommended above.
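
For readers following the discussion, the check in question is essentially a range-overlap 
test against each region's boundaries, with an optional offset that skips a leading portion 
of the row key (for example a salt byte). The following is a minimal, self-contained sketch 
of that idea only; the class and method names are hypothetical and this is not the actual 
ServerCacheClient/KeyRange code:
{code}
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: does any query key range overlap the region [start, end),
// after dropping the first 'offset' bytes (e.g. the salt byte) from the region boundaries?
final class RegionOverlapSketch {

    static final class KeyRange {
        final byte[] lower;  // inclusive
        final byte[] upper;  // exclusive
        KeyRange(byte[] lower, byte[] upper) { this.lower = lower; this.upper = upper; }
    }

    static boolean intersects(List<KeyRange> ranges, byte[] regionStart, byte[] regionEnd, int offset) {
        byte[] start = regionStart.length > offset
                ? Arrays.copyOfRange(regionStart, offset, regionStart.length) : new byte[0];
        byte[] end = regionEnd.length > offset
                ? Arrays.copyOfRange(regionEnd, offset, regionEnd.length) : new byte[0];
        for (KeyRange r : ranges) {
            // an empty start/end key means the region is unbounded on that side
            boolean entirelyBeforeRegion = start.length > 0 && compareUnsigned(r.upper, start) <= 0;
            boolean entirelyAfterRegion = end.length > 0 && compareUnsigned(r.lower, end) >= 0;
            if (!entirelyBeforeRegion && !entirelyAfterRegion) {
                return true; // this range can produce rows in the region, so the cache is needed there
            }
        }
        return false;
    }

    // lexicographic comparison over unsigned bytes, like HBase row key ordering
    private static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int diff = (a[i] & 0xff) - (b[i] & 0xff);
            if (diff != 0) {
                return diff;
            }
        }
        return a.length - b.length;
    }
}
{code}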

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388375#comment-15388375
 ] 

Samarth Jain commented on PHOENIX-2900:
---

[~comnetwork] - it looks like your newly added test class, HashJoinSplitTableIT, 
contains dead code when the test suite is run via mvn install:

{code}
+if(NUM_SLAVES_BASE > 1)
+{
+    HRegionInfo regionInfo0 =
+        hbaseAdmin.getTableRegions(Bytes.toBytes("SALT_TEST2900")).get(0);
+    HRegionInfo regionInfo1 =
+        hbaseAdmin.getTableRegions(Bytes.toBytes("SALT_TEST2900")).get(1);
+    ServerName serverName0 =
+        hbaseAdmin.getConnection().locateRegion(regionInfo0.getRegionName()).getServerName();
+    ServerName serverName1 =
+        hbaseAdmin.getConnection().locateRegion(regionInfo1.getRegionName()).getServerName();
+    if (serverName0.equals(serverName1)) {
+        String regionEncodedName1 = regionInfo1.getEncodedName();
+        hbaseAdmin.move(Bytes.toBytes(regionEncodedName1), null);
+        while (hbaseAdmin.getConnection().locateRegion(regionInfo1.getRegionName())
+                .getServerName().equals(serverName1)) {
+            Thread.sleep(1000);
+        }
+    }
+}
{code}

Even though you have set NUM_SLAVES_BASE to 2, the two region servers serverName0 
and serverName1 are essentially the same. You need to have your test class extend 
BaseOwnClusterHBaseManagedTimeIT so that it runs in its own mini cluster with 2 
slaves running.
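
For reference, a rough skeleton of a test class set up that way might look like the sketch 
below. Only BaseOwnClusterHBaseManagedTimeIT and the SALT_TEST2900/RIGHT_TEST2900 tables and 
join query come from this thread; the package, DDL, connection URL, and setup details are 
illustrative assumptions, and the configuration of the dedicated two-region-server mini 
cluster is left as a placeholder:
{code}
// assumed to live alongside the other Phoenix ITs, e.g. in org.apache.phoenix.end2end
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.Test;

public class HashJoinSplitTableIT extends BaseOwnClusterHBaseManagedTimeIT {

    // NOTE: configuring the dedicated mini cluster with two region servers (the point of
    // extending BaseOwnClusterHBaseManagedTimeIT) is omitted here and depends on the base class.

    @Test
    public void testHashJoinAfterSaltedTableSplit() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS SALT_TEST2900 "
                    + "(id INTEGER NOT NULL PRIMARY KEY, appId VARCHAR) SALT_BUCKETS = 4");
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS RIGHT_TEST2900 "
                    + "(appId VARCHAR NOT NULL PRIMARY KEY, name VARCHAR)");
            // ... upsert rows, then split the salted table's first region and move regions so
            // the two tables are served by different region servers, as discussed above ...
            ResultSet rs = conn.createStatement().executeQuery(
                "select * from SALT_TEST2900 a inner join RIGHT_TEST2900 b "
                    + "on a.appId=b.appId where a.id>=3 and a.id<=5");
            while (rs.next()) {
                // before the fix, this scan fails with "Could not find hash cache for joinId"
            }
        }
    }
}
{code}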

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-20 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386106#comment-15386106
 ] 

chenglei commented on PHOENIX-2900:
---

It seems the test is ok.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385692#comment-15385692
 ] 

Hadoop QA commented on PHOENIX-2900:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12818905/PHOENIX-2900_v1.patch
  against master branch at commit a6f61cb40c3eb031cd3b8b2192a243709bce37c6.
  ATTACHMENT ID: 12818905

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
35 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_T 
[0,-32768,'" + tenantId + "','" + valuePrefix + "v2-1'] - 
["+(saltBuckets.intValue()-1)+",-32768,'" + tenantId + "','" + valuePrefix + 
"v2-1']\n"
+: "CLIENT PARALLEL " + saltBuckets + "-WAY RANGE 
SCAN OVER _IDX_T" + (transactional ? "_TXN" : "") + " [0," + Short.MIN_VALUE + 
",51] - ["+(saltBuckets.intValue()-1)+"," + Short.MIN_VALUE + ",51]\nCLIENT 
MERGE SORT",
+: "CLIENT PARALLEL " + saltBuckets + "-WAY RANGE 
SCAN OVER " + htableName + " [0," + (Short.MIN_VALUE+1) + ",'foo'] - 
["+(saltBuckets.intValue()-1)+"," + (Short.MIN_VALUE+1) + ",'foo']\n"
+long unfreedBytes = 
conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
+"select * from SALT_TEST2900 a inner join RIGHT_TEST2900 b 
on a.appId=b.appId where a.id>=3 and a.id<=5";
+("CLIENT PARALLEL 4-WAY RANGE SCAN OVER " + 
TestUtil.DEFAULT_INDEX_TABLE_FULL_NAME + " [0,~'y'] - 
["+(indexSaltBuckets.intValue()-1)+",~'y']\n" + 
+("CLIENT PARALLEL 4-WAY RANGE SCAN OVER " + 
TestUtil.DEFAULT_INDEX_TABLE_FULL_NAME + " [0,*] - 
["+(indexSaltBuckets.intValue()-1)+",~'x']\n"
+"CLIENT PARALLEL 20-WAY RANGE SCAN OVER FOO 
[0,'a',~'2016-01-28 23:59:59.999'] - [19,'a',~'2016-01-28 00:00:00.000']\n" + 
+"CLIENT PARALLEL 20-WAY ROUND ROBIN RANGE SCAN OVER " + 
tableName + " [0,'a',~'2016-01-28 23:59:59.999'] - [19,'a',~'2016-01-28 
00:00:00.000']\n" + 

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/455//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/455//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/455//console

This message is automatically generated.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385385#comment-15385385
 ] 

chenglei commented on PHOENIX-2900:
---

Maybe the dependent HBase version is wrong? Only from HBase 1.2.0 on does 
org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler's dispatch method have return type 
boolean; before HBase 1.2.0, the dispatch method's return type is void.
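
To make the incompatibility concrete, the two shapes of the method can be contrasted as 
below. These are hypothetical stand-in interfaces used only to illustrate the signature 
change; the throws clause is an assumption, and the real declarations live in HBase's 
RpcScheduler:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.ipc.CallRunner;

// Not the real HBase classes: two stand-in interfaces that only contrast the dispatch signatures.
interface PreHBase120SchedulerShape {
    // before HBase 1.2.0, dispatch returns nothing
    void dispatch(CallRunner task) throws IOException, InterruptedException;
}

interface HBase120SchedulerShape {
    // from HBase 1.2.0 on, dispatch returns a boolean
    boolean dispatch(CallRunner task) throws IOException, InterruptedException;
}
{code}
Compiling a PhoenixRpcScheduler that declares a boolean-returning dispatch(CallRunner) 
against a pre-1.2.0 HBase therefore fails with "return type boolean is not compatible with 
void", which is the error reported by the precommit build in this thread.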

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384745#comment-15384745
 ] 

Josh Elser commented on PHOENIX-2900:
-

[~jamestaylor], looks like it did run, but compilation failed (and it chose not 
to comment?)

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-PHOENIX-Build/454/console

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure: Compilation failure:
[ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[32,8] org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler is not abstract and does not override abstract method dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in org.apache.hadoop.hbase.ipc.RpcScheduler
[ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[84,20] dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler cannot override dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in org.apache.hadoop.hbase.ipc.RpcScheduler
[ERROR] return type boolean is not compatible with void
[ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[88,46] incompatible types
[ERROR] required: boolean
{code}

Seems like something happened in the precommit build that made it think this 
was a configuration error and that the job itself failed (instead of recognizing 
that the patched build actually failed to compile).
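
For anyone reading along, the failure above is a plain signature mismatch rather 
than an infrastructure problem: the abstract RpcScheduler.dispatch(CallRunner) 
that this run compiled against returns void, while the patched 
PhoenixRpcScheduler declares a boolean return, so javac rejects the override and 
then also reports the abstract method as unimplemented. A minimal sketch of that 
mismatch, using hypothetical stand-in classes (the trimmed-down RpcScheduler, 
CallRunner, and DelegatingRpcScheduler below are illustrative only, not the 
actual HBase or Phoenix sources):

{code}
// Hypothetical, self-contained stand-ins for the types named in the compiler
// output above; they are NOT the real org.apache.hadoop.hbase.ipc classes.
class CallRunner { }

abstract class RpcScheduler {
    // In the HBase version this precommit run compiled against, dispatch is
    // declared with a void return type.
    public abstract void dispatch(CallRunner task);
}

class DelegatingRpcScheduler extends RpcScheduler {
    private final RpcScheduler delegate;

    DelegatingRpcScheduler(RpcScheduler delegate) {
        this.delegate = delegate;
    }

    // Declaring "public boolean dispatch(CallRunner task)" here would
    // reproduce both errors above: the boolean method cannot override the
    // void one, and the abstract void dispatch is then left unimplemented.
    @Override
    public void dispatch(CallRunner task) {
        delegate.dispatch(task);
    }
}
{code}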

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, PHOENIX-2900_v1.patch, 
> phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I meet the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> 

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384733#comment-15384733
 ] 

James Taylor commented on PHOENIX-2900:
---

[~elserj] - I can't seem to force a test run for the patch against this JIRA. 
Any ideas what I'm doing wrong?

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, PHOENIX-2900_v1.patch, 
> phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I meet the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>