You are welcome, Vijay!
On Thu, Oct 9, 2014 at 12:48 AM, G.S.Vijay Raajaa wrote:
> Modifying the phoenix.coprocessor.maxServerCacheTimeToLiveMs parameter, which
> defaults to 30,000, solved the problem.
>
> Thanks!!
>
> On Wed, Oct 8, 2014 at 10:25 AM, G.S.Vijay Raajaa wrote:
>
>> Hi Maryann,
Modifying the phoenix.coprocessor.maxServerCacheTimeToLiveMs parameter, which
defaults to 30,000, solved the problem.
Thanks!!
On Wed, Oct 8, 2014 at 10:25 AM, G.S.Vijay Raajaa wrote:
> Hi Maryann,
>
> It's the same query:
>
> select c.c_first_name, ca.ca_city, cd.cd_education_status ...
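For anyone hitting the same expiry, here is a minimal sketch of the change
Vijay describes, as it could look in hbase-site.xml (the 300000 value is
illustrative, not necessarily what he used; per the Phoenix docs this is a
server-side setting, so it belongs on the region servers):

    <property>
      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
      <!-- Maximum lifetime (ms) of the server-side join cache; the default
           is 30000 (30 s). Illustrative value: 5 minutes, to outlive a slow
           join-side query. -->
      <value>300000</value>
    </property>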
Hi Maryann,
It's the same query:
select c.c_first_name, ca.ca_city, cd.cd_education_status from
CUSTOMER_3 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
ca.ca_address_sk group by ca.ca_city, cd.cd_education_status
Hi Ashish,
The warning you got shows exactly why you eventually hit that error: one of
the join-table queries took so long that the cache built for the other join
tables expired and was invalidated. Again, could you please share your query
and the sizes of the tables used in it?
Maryann,
hbase-site.xml was not on the CLASSPATH, and that was the issue. Thanks for
the help. I appreciate it.
~Ashish
On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue wrote:
> Hi Ashish,
>
> The "phoenix.query.maxServerCacheBytes" is a client parameter while the
> other two are server parameters. But ...
Hi Maryann,
After increasing the heap space on the region server and executing the same
query, I get a strange error:
./psql.py 10.10.5.55 test.sql
14/10/07 02:59:52 WARN execute.HashJoinPlan: Hash plan [0] execution seems
too slow. Earlier hash cache(s) might have expired on servers.
Hi Ashish,
The "phoenix.query.maxServerCacheBytes" is a client parameter while the
other two are server parameters. But it looks like the configuration change
did not take effect at your client side. Could you please make sure that
this is the only configuration that goes to the CLASSPATH of your
Here it is,
java.sql.SQLException: Encountered exception in hash plan [1] execution.
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
        at org.apache.phoenix.jdbc.PhoenixStatement ...
Hi Ashish,
Could you please let us see your error message?
Thanks,
Maryann
On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya wrote:
> Hey Maryann,
>
> Thanks for your input. I tried both the properties but no luck.
>
> ~Ashish
>
> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue wrote:
>
>> Hi Ashish,
Hey Maryann,
Thanks for your input. I tried both the properties but no luck.
~Ashish
On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue wrote:
> Hi Ashish,
>
> The global cache size is set to either "phoenix.query.maxGlobalMemorySize"
> or "phoenix.query.maxGlobalMemoryPercentage * heapSize"
Hi Ashish,
The global cache size is set to either "phoenix.query.maxGlobalMemorySize"
or "phoenix.query.maxGlobalMemoryPercentage * heapSize" (sorry about the
mistake I made earlier). The "phoenix.query.maxServerCacheBytes" is a
client parameter and is most likely NOT the thing you should worry about.
I have tried that as well, but "phoenix.query.maxServerCacheBytes" remains
at the default value of 100 MB. I see that value when the join fails.
Thanks,
~Ashish
On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue wrote:
> Hi Ashish,
>
> The global cache size is set to either "phoenix.query.maxServerCacheBytes" ...
Hi Ashish,
The global cache size is set to either "phoenix.query.maxServerCacheBytes"
or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
smaller. You can try setting "phoenix.query.maxGlobalMemoryPercentage"
instead, which is recommended, and see how it goes.
Thanks,
Maryann
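A minimal sketch of the recommended change, as it might appear in each
region server's hbase-site.xml (the value 25 is illustrative; the property
is a percentage of the region server heap, and the Phoenix default is, to
my knowledge, 15):

    <property>
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <!-- Percentage of the region server heap available to Phoenix for
           caches; illustrative value (default believed to be 15). -->
      <value>25</value>
    </property>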
Hi Maryann,
I am having the same issue where star join is failing with
MaxServerCacheSizeExceededException.
I set phoenix.query.maxServerCacheBytes to 1 GB in both the client and server
hbase-site.xml files. However, it does not take effect.
Phoenix 3.1
HBase 0.94
Thanks,
~Ashish
On Fri, Sep 26, 2014 at ...
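For reference, a sketch of the client-side setting Ashish describes
(1 GB = 1073741824 bytes). As the resolution further up the thread shows,
the hbase-site.xml carrying it must be on the client's CLASSPATH for it to
take effect:

    <property>
      <name>phoenix.query.maxServerCacheBytes</name>
      <!-- Client-side cap on the serialized join cache sent to servers;
           1 GB here, matching the value mentioned above (default ~100 MB). -->
      <value>1073741824</value>
    </property>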
Yes, you should make your modification on each region server, since this is
a server-side configuration.
On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa wrote:
> Hi Xue,
>
> Thanks for replying. I did modify the hbase-site.xml by
> increasing the default value of phoenix.query.maxGlobalMemoryPercentage ...
Hi Xue,
Thanks for replying. I did modify hbase-site.xml by increasing the default
value of phoenix.query.maxGlobalMemoryPercentage, and I also increased the
region server heap memory. The change didn't take effect, and I still get
the error with an indication that "global pool of ..."
Hi Vijay,
I think here the query plan is scanning table CUSTOMER_3 while joining the
other two tables at the same time, which means the region server memory for
Phoenix should be large enough to hold 2 tables together, and you also need
to expect some memory expansion for Java objects.
Do you ...
Hi,
I am trying to do a join of three tables using the following query:
select c.c_first_name, ca.ca_city, cd.cd_education_status from
CUSTOMER_3 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
ca.ca_address_sk group by ca.ca_city, cd.cd_education_status