sure, sounds good

On Tue, Jul 7, 2015 at 10:57 AM, Maryann Xue <[email protected]> wrote:
> Hi Alex,
>
> I suspect it's related to using cached region locations that might have
> been invalid. A simple way to verify this is to start a new java process
> running this query and see if the problem goes away.
>
> Thanks,
> Maryann
>
> On Mon, Jul 6, 2015 at 10:56 PM, Maryann Xue <[email protected]> wrote:
>
>> Thanks a lot for the details, Alex! It might be a bug if it fails only
>> on a cluster and increasing the cache time-to-live didn't help. Would
>> you mind testing it for me if I provide a simple patch tomorrow?
>>
>> Thanks,
>> Maryann
>>
>> On Mon, Jul 6, 2015 at 9:09 PM, Alex Kamil <[email protected]> wrote:
>>
>>> one more thing - the same query (via a tenant connection) works in
>>> standalone mode but fails on a cluster.
>>> I've tried modifying phoenix.coprocessor.maxServerCacheTimeToLiveMs
>>> <https://phoenix.apache.org/tuning.html> from the default 30000 (ms)
>>> to 300000 with no effect
>>>
>>> On Mon, Jul 6, 2015 at 7:35 PM, Alex Kamil <[email protected]> wrote:
>>>
>>>> also please note that it only fails with tenant-specific connections
>>>>
>>>> On Mon, Jul 6, 2015 at 7:17 PM, Alex Kamil <[email protected]> wrote:
>>>>
>>>>> Maryann,
>>>>>
>>>>> here is the query; I don't see any warnings:
>>>>>
>>>>> SELECT '\''||C.ROWKEY||'\'' AS RK, C.VS FROM test.table1 AS C JOIN
>>>>> (SELECT DISTINCT B.ROWKEY, B.VS FROM test.table2 AS B) B ON
>>>>> (C.ROWKEY=B.ROWKEY AND C.VS=B.VS) LIMIT 2147483647;
>>>>>
>>>>> thanks
>>>>> Alex
>>>>>
>>>>> On Fri, Jul 3, 2015 at 10:36 PM, Maryann Xue <[email protected]> wrote:
>>>>>
>>>>>> Hi Alex,
>>>>>>
>>>>>> Most likely what happened was as suggested by the error message: the
>>>>>> cache might have expired. Could you please check if there are any
>>>>>> Phoenix warnings in the client log and share your query?
>>>>>>
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>> On Fri, Jul 3, 2015 at 4:01 PM, Alex Kamil <[email protected]> wrote:
>>>>>>
>>>>>>> getting this error with phoenix 3.3.0/hbase 0.94.15, any ideas?
>>>>>>>
>>>>>>> org.apache.phoenix.exception.PhoenixIOException:
>>>>>>> org.apache.phoenix.exception.PhoenixIOException:
>>>>>>> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash
>>>>>>> cache for joinId: ???Z^XI??. The cache might have expired and have
>>>>>>> been removed.
>>>>>>>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:96)
>>>>>>>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:511)
>>>>>>>     at org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
>>>>>>>     at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:84)
>>>>>>>     at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:111)
>>>>>>>     at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>>>>>>>     at org.apache.phoenix.iterate.LimitingResultIterator.next(LimitingResultIterator.java:47)
>>>>>>>     at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>>>>>>>     at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:739)
>>>>>>>     at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
>>>>>>>
>>>>>>> thanks
>>>>>>> Alex
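[Editor's note] On the phoenix.coprocessor.maxServerCacheTimeToLiveMs setting discussed above: per the Phoenix tuning page it is a server-side property, so an increase is expected to take effect only when placed in hbase-site.xml on every region server (followed by a restart); changing it on the client alone would not help. A sketch of the entry, using the 300000 ms value Alex tried:

```xml
<!-- hbase-site.xml on each region server -->
<property>
  <!-- Maximum time-to-live (ms) of the server-side hash-join cache;
       raised here from the 30000 ms default -->
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>300000</value>
</property>
```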
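[Editor's note] Maryann's fresh-JVM check, combined with Alex's tenant-specific setup, could look roughly like the standalone client below. This is a sketch, not code from the thread: the ZooKeeper quorum (zkhost:2181) and the tenant id ("acme") are placeholder values, and the query is the one Alex posted. The point of running it as a new process is that a new JVM starts with an empty HBase region-location cache, so a success here (while the long-running client keeps failing) would point at stale cached region locations.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class FreshTenantClient {
    public static void main(String[] args) throws Exception {
        // "TenantId" is the Phoenix connection property that makes this a
        // tenant-specific connection; the value here is a placeholder.
        Properties props = new Properties();
        props.setProperty("TenantId", "acme");

        // Placeholder JDBC URL; point it at your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:zkhost:2181", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT '\\''||C.ROWKEY||'\\'' AS RK, C.VS"
                 + " FROM test.table1 AS C"
                 + " JOIN (SELECT DISTINCT B.ROWKEY, B.VS"
                 + "       FROM test.table2 AS B) B"
                 + " ON (C.ROWKEY = B.ROWKEY AND C.VS = B.VS)"
                 + " LIMIT 2147483647")) {
            while (rs.next()) {
                System.out.println(rs.getString("RK") + "\t"
                                   + rs.getString("VS"));
            }
        }
    }
}
```

Running this requires the Phoenix client jar on the classpath and a reachable cluster, so it is a diagnostic to adapt rather than something runnable as-is.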
