Re: Hash aggregation

2018-05-22 Thread Maryann Xue
s not to, or should I file a ticket for this? Thanks, Gerald. On Fri, May 18, 2018 at 11:29 AM, Maryann Xue <maryann@gmail.com> wrote: Hi Gerald, Phoenix does have hash aggregation. The reason

Re: Hash aggregation

2018-05-18 Thread Maryann Xue
Hi Gerald, Phoenix does have hash aggregation. The reason why sort-based aggregation is used in your query plan is that the aggregation happens on the client side. And that is because sort-merge join is used (as hinted), which is a client-driven join, and after that join stage all operations can
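A minimal sketch of the situation described in this thread (table and column names are hypothetical): forcing the sort-merge join keeps the join, and therefore the aggregation that follows it, on the client side, so the plan shows sort-based client aggregation rather than server-side hash aggregation:

```sql
-- Hypothetical tables. The hint forces a client-driven sort-merge join,
-- which also pulls the GROUP BY aggregation to the client side.
SELECT /*+ USE_SORT_MERGE_JOIN */ o.customer_id, COUNT(*)
FROM orders o
JOIN customers c ON o.customer_id = c.id
GROUP BY o.customer_id;

-- Without the hint, Phoenix may choose a hash join instead, allowing
-- the aggregation to run on the region servers.
```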

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-17 Thread Maryann Xue
rue(rs.next()); //assertEquals(2, rs.getInt(1)); assertFalse(rs.next()); } } On Thu, May 17, 2018 at 3:48 PM, Gerald Sangudi <gsang...@23andme.com> wrote: Hi Maryann, Would you mind sharing the EXPLAIN output you are getting? I'm not able to find a download for

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-16 Thread Maryann Xue
Hi Gerald, I checked again. Unfortunately this fix is included with 4.13.2 but not 4.13.1. Would you mind upgrading your library to 4.13.2? Thanks, Maryann On Wed, May 16, 2018 at 9:41 AM, Maryann Xue <maryann@gmail.com> wrote: > Sorry for the late response. Yes, sure, I will try

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-16 Thread Maryann Xue
currently provides Phoenix 4.13.0. I manually upgraded an AWS EMR cluster to Phoenix 4.13.1 and rebooted the cluster. The EXPLAIN plan still shows the same issue. Thanks, Gerald. On Wed, May 9, 2018 at 11:51 AM, Maryann Xue <maryan

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-09 Thread Maryann Xue
Hi Gerald, I have verified against latest Phoenix code that this problem has been fixed. I have also checked Phoenix 4.13 release tags. Looks like all versions of 4.13 packages now include that fix. Would you mind getting the latest Phoenix-4.13 package and testing it again? Thank you! Thanks,

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-08 Thread Maryann Xue
Hi Gerald, Thank you for finding this issue! I think it is similar to PHOENIX-4508. I'll verify your case on the latest Phoenix branch and see if it has been fixed. Thanks, Maryann On Tue, May 8, 2018 at 12:24 PM, Gerald Sangudi wrote: > Hello, > > I'm running Phoenix

Re: SELECT + ORDER BY vs self-join

2017-10-31 Thread Maryann Xue
I suspect this problem is similar to PHOENIX-4288. On Mon, Oct 30, 2017 at 11:26 PM James Taylor wrote: > Please file a JIRA and include the explain plan for each of the queries. I > suspect your index is not being used in the first query due to the > selection of all the

Re: Phoenix query performance

2017-02-22 Thread Maryann Xue
Hi Pradheep, Thank you for the answers! Please see my response inline. On Wed, Feb 22, 2017 at 12:39 PM, Pradheep Shanmugam <pradheep.shanmu...@infor.com> wrote: Hi Maryann, Please find my answers inline. Thanks, Pradheep. From: Maryann Xue

Re: Phoenix query performance

2017-02-22 Thread Maryann Xue
Hi Pradheep, Thank you for posting the query and the log file! There are two things going on on the server side at the same time here. I think it'd be a good idea to isolate the problem first. So a few questions: 1. When you say data size went from "< 1M" to 30M, did the data from both LHS and

Re: Hash join confusion

2016-10-06 Thread Maryann Xue
Best regards, Sumit. From: Maryann Xue <maryann@gmail.com> To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam <sumit_o...@yahoo.com> Sent: Wednesday, October 5, 2016 11:27 AM Subject: Re:

Re: Hash join confusion

2016-10-04 Thread Maryann Xue
* "user@phoenix.apache.org" <user@phoenix.apache.org> > *Sent:* Wednesday, October 5, 2016 12:13 AM > > *Subject:* Re: Hash join confusion > > Thanks Maryann. > > I will share the details in a few hours. > > Under heavy load scenario, the default hash join failed w

Re: Hash join confusion

2016-09-28 Thread Maryann Xue
jira/browse/PHOENIX-2900 issue. Switching to sort merge join helped. But not sure if that is the right solution going forward. Thanks again! Sumit. From: Maryann Xue <maryann@gmail.com> To: "user@phoenix.apach

Re: Hash join confusion

2016-09-28 Thread Maryann Xue
Yes, Sumit, the sub-query will get cached in hash join. Are you using multi-tenancy for these tables? If yes, you might want to check out Phoenix 4.7 or 4.8, since a related bug fix went into the 4.7 release.
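To illustrate the caching behavior described above, a sketch with hypothetical table names: in a hash join, Phoenix computes the hinted-side relation once, ships it to the region servers as a server cache, and reuses it for the duration of the join:

```sql
-- Hypothetical schema. With the default hash join, the result of the
-- inner query is computed once, serialized, and cached on each region
-- server that hosts a region of the outer table "events".
SELECT e.id, e.payload
FROM events e
JOIN (SELECT tenant_id FROM tenants WHERE active = TRUE) t
  ON e.tenant_id = t.tenant_id;
```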

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Maryann Xue
Thank you very much for your answer, Michael! Yes, what Cheyenne tried to use was simply not the right grammar. Thanks, Maryann. On Mon, Sep 19, 2016 at 10:47 AM, Michael McAllister <mmcallis...@homeaway.com> wrote: This is really an ANSI SQL question. If you use an aggregate function,

Re: high client cpu usage

2016-08-25 Thread Maryann Xue
Hi John, Would you mind sharing the query plan for this query (by running "EXPLAIN ")? Thanks, Maryann. On Thu, Aug 25, 2016 at 11:19 AM, John Leach wrote: Yeah, this query. QUERY: SELECT SUM(L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY FROM TPCH.LINEITEM, TPCH.PART

Re: querying time for Apache Phoenix

2016-07-29 Thread Maryann Xue
Hi James, I have filed a JIRA https://issues.apache.org/jira/browse/PHOENIX-3129 for using global index for such queries without hint. Feel free to watch and comment on this issue. Thanks, Maryann On Wed, Jul 27, 2016 at 12:29 PM, James Taylor wrote: > On Wed, Jul 27,

Re: Phoenix Upsert with SELECT behaving strange

2016-05-17 Thread Maryann Xue
Hi Radha, Thanks for reporting this issue! Would you mind trying it with latest Phoenix version? Thanks, Maryann On Tue, May 17, 2016 at 8:19 AM, Radha krishna wrote: > Hi I am performing some join operation in phoenix console and storing the > result into another table

Re: JOIN returning incomplete result

2016-05-12 Thread Maryann Xue
Hi Pierre, Thank you very much for reporting this issue! Can you create a JIRA with all the information you've attached above along with the table DDL info? I'll take a look at it. Thanks, Maryann On Thu, May 12, 2016 at 6:18 AM, pierre lacave wrote: > Hi > > I am seeing

Re: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

2015-11-05 Thread Maryann Xue
PHOENIX-2381. Thanks, Don Brinn. From: Maryann Xue [mailto:maryann@gmail.com] Sent: Thursday, November 5, 2015 5:13 PM To: user@phoenix.apache.org Subject: Re: Error "could not find hash cache for joinId" when

Re: When will be the stats based join selector be implemented?

2015-10-05 Thread Maryann Xue
Hi Li, We are moving towards integrating with Calcite as our stats-based optimization now. You can check out our calcite branch and play with it if you are interested. It's still under development, but you

Re: When will be the stats based join selector be implemented?

2015-10-05 Thread Maryann Xue
ilable. Where can I find these dependencies? Thanks, Li. On Mon, Oct 5, 2015 at 12:19 PM, Maryann Xue <maryann@gmail.com> wrote: Hi Li, We are moving towards integrating with Calcite as our stats based

Re: failing integration tests in DerivedTableIT

2015-09-14 Thread Maryann Xue
Thank you, James! I have assigned the issue to myself. On Mon, Sep 14, 2015 at 7:39 AM James Heather wrote: > Reported as > > https://issues.apache.org/jira/browse/PHOENIX-2257 > > On 14/09/15 12:24, James Heather wrote: > > I also have two failing integration tests

Re: failing integration tests in DerivedTableIT

2015-09-14 Thread Maryann Xue
cate a fairly subtle bug, or reliance on a Java 7 implementation detail that isn't contractual... James. On 14/09/15 18:38, Maryann Xue wrote: Thank you, James! I have assigned the issue to myself. On Mon, Sep 14, 2015 at 7:39 AM James Heather <james.he

Re: Error: Encountered exception in sub plan [0] execution.

2015-09-11 Thread Maryann Xue
Hi Alberto, Could you please check in your server log if there's an ERROR, probably something like InsufficientMemoryException? Thanks, Maryann. On Fri, Sep 11, 2015 at 7:04 AM, Alberto Gonzalez Mesas wrote: Hi! I create two tables: CREATE TABLE "Customers2"

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread Maryann Xue
Hi Aaron, As Jaime pointed out, it is a non-equi join. And unfortunately it is handled as a CROSS join in Phoenix and thus is not very efficient. For each row from the left side, it will be joined with all of the rows from the right side before the condition is applied to filter the joined
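A sketch of the kind of non-equi join being discussed (table and column names are hypothetical); because the ON condition is not an equality, Phoenix evaluates the join as a cross product and applies the condition afterwards as a filter:

```sql
-- Hypothetical tables. The range predicate in ON makes this a non-equi
-- join: every row of "readings" is paired with every row of
-- "thresholds", and the BETWEEN condition then filters the pairs.
SELECT r.sensor_id, t.level
FROM readings r
JOIN thresholds t
  ON r.value BETWEEN t.min_value AND t.max_value;
```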

Could not find hash cache for joinId

2015-07-08 Thread Maryann Xue
Hi Alex, Could you please try this new patch? Thanks, Maryann. On Wed, Jul 8, 2015 at 3:53 PM, Maryann Xue <maryann@gmail.com> wrote: Thanks again for all this information! Would you mind checking a couple more things for me

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
, Jul 7, 2015 at 1:20 PM, Maryann Xue maryann@gmail.com wrote: My question was actually if the problem appears on your cluster, will it go away if you just start a new process doing the same query? I do have a patch, but it only fixes the problem I assume here, and it might be something

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
Hi Alex, I suspect it's related to using cached region locations that might have been invalid. A simple way to verify this is try starting a new java process doing this query and see if the problem goes away. Thanks, Maryann On Mon, Jul 6, 2015 at 10:56 PM, Maryann Xue maryann@gmail.com

Re: Could not find hash cache for joinId

2015-07-03 Thread Maryann Xue
Hi Alex, Most likely what happened was as suggested by the error message: the cache might have expired. Could you please check if there are any Phoenix warnings in the client log and share your query? Thanks, Maryann On Fri, Jul 3, 2015 at 4:01 PM, Alex Kamil alex.ka...@gmail.com wrote:

Re: Join create OOM with java heap space on phoenix client

2015-06-30 Thread Maryann Xue
issue? Thanks, Krunal. From: Maryann Xue maryann@gmail.com Date: Tuesday, June 16, 2015 at 3:20 PM To: Krunal krunal.varaj...@ask.com Cc: user@phoenix.apache.org user@phoenix.apache.org Subject: Re: Join create OOM with java heap space on phoenix client Hi Krunal, Can you try

Re: Strategy on joining on partial keys

2015-06-30 Thread Maryann Xue
Hi Yiannis, Could you please post your UPSERT query and the approximate size of both tables? And does it happen every time you try to do the query? Thanks, Maryann On Mon, Jun 29, 2015 at 6:19 AM, Yiannis Gkoufas johngou...@gmail.com wrote: Hi there, I have two tables I want to join.

Re: count distinct

2015-06-23 Thread Maryann Xue
Which version of Phoenix are you using? On Tuesday, June 23, 2015, Michael McAllister mmcallis...@homeaway.com wrote: Hi (This questions relates to Phoenix 4.2 on HDP 2.2) I have a situation where I want to count the distinct combination of a couple of columns. When I try the

Re: count distinct

2015-06-23 Thread Maryann Xue
Sorry, I missed the first line. Your second query should work with Phoenix 4.3 or later. I will investigate the problem with the first one and get back to you. Thanks, Maryann On Tuesday, June 23, 2015, Michael McAllister mmcallis...@homeaway.com wrote: Hi (This questions relates to

Re: StackOverflowError

2015-06-22 Thread Maryann Xue
Hi Bahubali, Could you please share your query? Thanks, Maryann On Mon, Jun 22, 2015 at 12:51 PM, Bahubali Jain bahub...@gmail.com wrote: Hi, I am running into below error when I execute a query which has a join,group by and order by. But when I run the same query with hint /*+

Re: Limitation with limit?

2015-06-20 Thread Maryann Xue
Hi Bahubali, Thanks for reporting the issue! Could you please file a JIRA and add some details? I'll verify and fix it as soon as I can. Thanks, Maryann On Fri, Jun 19, 2015 at 3:53 AM, Bahubali Jain bahub...@gmail.com wrote: Hi, Is there any issue related to usage of limit ? select

Re: Join create OOM with java heap space on phoenix client

2015-06-01 Thread Maryann Xue
From: Maryann Xue maryann@gmail.com Reply-To: user@phoenix.apache.org user@phoenix.apache.org Date: Tuesday, May 26, 2015 at 5:45 PM To: user@phoenix.apache.org user@phoenix.apache.org Subject: Re: Join create OOM with java heap space on phoenix client Hi Krunal, Sorry for the late

Re: Join create OOM with java heap space on phoenix client

2015-05-26 Thread Maryann Xue
Hi Krunal, Sorry for the late reply. I have been on vacation. 1. Can you make sure that the connection/statement is closed after each run of your query (even with exception)? 2. You might want to try switching the join tables in your query first by putting the larger table as LHS, and if it

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
level only, and not up to two levels as I was doing? Maybe this limitation exists because Phoenix joins the tables from the outer and the inner correlated query, but it can perform this join up to one level only? Best, Khaleel On Sat, Apr 25, 2015 at 8:11 PM, Maryann Xue maryann

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
query) can be done up to a maximum one nested level, and not more than that. In other words, referring to a table from the outer query from within a subquery that is in the second or more nested level will generate an error. Best, Khaleel On Sun, Apr 26, 2015 at 6:33 PM, Maryann Xue maryann

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
, and with using IN instead of EXISTS and I will see if it produces an error. Best, Khaleel. On Sun, Apr 26, 2015 at 7:14 PM, Maryann Xue <maryann@gmail.com> wrote: Hi Khaleel, Thanks for looking into the problem! But there IS a bug

Re: Error when using aggregates with correlated subqueries

2015-04-25 Thread Maryann Xue
tried such query and it still produces the same error as the old query, which is: (Aggregate may not contain columns not in GROUP BY.) Thank you for your help. Best, Khaleel On Thu, Apr 23, 2015 at 5:49 PM, Maryann Xue maryann@gmail.com wrote: Hi Khaleel, Thanks for the explanation

Re: Non-equi joins

2015-03-25 Thread Maryann Xue
Yes, 4.3 only. On Wed, Mar 25, 2015 at 1:25 PM, Jaime Solano jdjsol...@gmail.com wrote: Thanks for your response, Maryann! Again, this suggestion is for 4.3 only, right? On Mar 25, 2015 12:34 PM, Maryann Xue maryann@gmail.com wrote: Actually we do in 4.3, but apparently

Re: Non-equi joins

2015-03-25 Thread Maryann Xue
On Wed, Mar 25, 2015 at 1:59 PM, Maryann Xue maryann@gmail.com wrote: Yes, 4.3 only. On Wed, Mar 25, 2015 at 1:25 PM, Jaime Solano jdjsol...@gmail.com wrote: Thanks for your response, Maryann! Again, this suggestion is for 4.3 only, right? On Mar 25, 2015 12:34 PM, Maryann Xue maryann

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
-based client) it ignores the hint! What do you think? Is there a way around this? Thanks! Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 17 March 2015 16:23 *To:* user@phoenix.apache.org *Subject:* Re: Using Hints in Phoenix Hi Matt, Sorry that I still could

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
By the way, to answer your previous questions, Phoenix joins have not started to use stats so far, but the hints are parsed and handled in a universal way regardless of what type of query it is. Thanks, Maryann On Thu, Mar 19, 2015 at 12:03 PM, Maryann Xue maryann@gmail.com wrote: Hi Matt

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
! Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 19 March 2015 16:08 *To:* user@phoenix.apache.org *Subject:* Re: Using Hints in Phoenix By the way, to answer your previous questions, Phoenix joins have not started to use stats so far, but the hints are parsed

Re: Using Hints in Phoenix

2015-03-12 Thread Maryann Xue
a major compaction, generate statistics, anything like that? Thanks! Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 11 March 2015 20:16 *To:* user@phoenix.apache.org *Subject:* Re: Using Hints in Phoenix Hi Matt, Views or tables should not matter in this. Would you

Re: Using Hints in Phoenix

2015-03-12 Thread Maryann Xue
-in message had a little mistake there. Could you please verify your Phoenix library version again, Matt? Especially the client. Thanks, Maryann On Thu, Mar 12, 2015 at 6:00 PM, Maryann Xue maryann@gmail.com wrote: Hi Matt, Thanks for sharing the query. Using that hint should supposedly force

Re: Using Hints in Phoenix

2015-03-11 Thread Maryann Xue
up this feature). I am actually joining views rather than tables – would this make a difference? Cheers, Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 10 March 2015 20:54 *To:* user@phoenix.apache.org *Subject:* Re: Using Hints in Phoenix Hi Matt, Which

Re: Using Hints in Phoenix

2015-03-10 Thread Maryann Xue
(SKIP MERGE) CLIENT 15-CHUNK PARALLEL 1-WAY FULL SCAN OVER mytable2 Cheers, Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 09 March 2015 15:00 *To:* user@phoenix.apache.org *Subject:* Re: Using Hints in Phoenix Hi Matt, So far in Phoenix, hints are only

Re: Using Hints in Phoenix

2015-03-09 Thread Maryann Xue
Hi Matt, So far in Phoenix, hints are only supported when specified right after the keywords SELECT, UPSERT and DELETE. The same applies to join queries. It is currently impossible to hint a certain join algorithm for a specific join node in a multiple-join query. However, for subqueries, the inner query can have
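As a sketch of the rule described above (hypothetical tables and hints): a hint must follow the leading keyword of its own query block, so the only way to influence a particular join is to place a hint inside the corresponding subquery:

```sql
-- The outer hint applies to the whole statement; the inner query block
-- carries its own hint. There is no syntax for hinting one join node
-- of a multi-join query directly.
SELECT /*+ NO_INDEX */ t.a, s.b
FROM t
JOIN (SELECT /*+ USE_SORT_MERGE_JOIN */ u.b, u.k
      FROM u JOIN v ON u.k = v.k) s
  ON t.k = s.k;
```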

Re: Inner Join not returning any results in Phoenix

2015-02-24 Thread Maryann Xue
* *SERVER FILTER BY PageFilter 100* *SERVER 100 ROW LIMIT* *CLIENT 100 ROW LIMIT* I’m not really sure how to read that, but it does seem to suggest that ‘mytable2’ is being limited to 100 – thoughts? Cheers, Matt *From:* Maryann Xue [mailto:maryann@gmail.com

Re: Inner Join not returning any results in Phoenix

2015-02-24 Thread Maryann Xue
if it reproduces for you – are you using Squirrel? If so, what version? Thanks! Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 24 February 2015 16:41 *To:* user@phoenix.apache.org *Subject:* Re: Inner Join not returning any results in Phoenix Hi Matt, I just noticed

Re: Inner Join not returning any results in Phoenix

2015-02-20 Thread Maryann Xue
Hi Matt, The error you got with Limit Rows off might be related to insufficient memory on region servers for one of your tables. Which is the larger table between table1 and table2? You might want to try putting the larger table as the first table in your join query and see if it works. And I

Re: Inner Join not returning any results in Phoenix

2015-02-20 Thread Maryann Xue
if that helps. Thanks! Matt *From:* Maryann Xue [mailto:maryann@gmail.com] *Sent:* 20 February 2015 16:28 *To:* user@phoenix.apache.org *Subject:* Re: Inner Join not returning any results in Phoenix Hi Matt, The error you got with Limit Rows off might be related to insufficient

Re: Phoenix Subqueries with ‘IN’

2015-01-15 Thread Maryann Xue
Hi Xiaoguo, Do you mean you have hit a bug in Phoenix? The query is expected to return nothing but returns all rows? Thanks, Maryann. On Thu, Jan 15, 2015 at 9:02 PM, 【小郭】 <guoqid...@qq.com> wrote: Hi guys: When using a subquery with 'IN', if the subquery returns no rows, the query will

Re: Phoenix Subqueries with ‘IN’

2015-01-15 Thread Maryann Xue
This has been verified as a bug. Just filed https://issues.apache.org/jira/browse/PHOENIX-1591 for it. Thank you very much for reporting this, Xiaoguo! You can expect it to be fixed in Phoenix 4.3. On Thu, Jan 15, 2015 at 10:43 PM, Maryann Xue maryann@gmail.com wrote: Hi Xiaoguo, Do you
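A minimal sketch of the reported behavior (hypothetical tables): before the PHOENIX-1591 fix, an IN subquery that returned no rows incorrectly let every outer row through instead of none:

```sql
-- Hypothetical tables. If no order has status = 'GHOST', this query
-- should return zero rows; the reported bug returned all customers.
SELECT c.id, c.name
FROM customers c
WHERE c.id IN (SELECT customer_id FROM orders WHERE status = 'GHOST');
```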

Re: Query performance question

2014-12-18 Thread Maryann Xue
from around 177s to 8s! The explain plan now shows the entire pk being used. __ *Ralph Perko* Pacific Northwest National Laboratory (509) 375-2272 ralph.pe...@pnnl.gov From: Maryann Xue maryann@gmail.com Reply-To: user

Re: Query performance question

2014-12-15 Thread Maryann Xue
set to 1G. Custom hbase/phoenix settings are attached. Thanks, Ralph From: Maryann Xue maryann@gmail.com Reply-To: user@phoenix.apache.org user@phoenix.apache.org Date: Friday, December 12, 2014 at 8:07 AM To: user@phoenix.apache.org user@phoenix.apache.org Subject: Re: Query

Re: Query performance question

2014-12-15 Thread Maryann Xue
And one more thing: the version of Phoenix you are running. On Mon, Dec 15, 2014 at 3:21 PM, Maryann Xue maryann@gmail.com wrote: Hi Ralph, Thank you very much for the information! Very helpful for your questions. The numbers look reasonable as opposed to the query plan. But the only

Re: Query performance question

2014-12-12 Thread Maryann Xue
Hi Ralph, Thanks for the question! According to the explain result you got, the optimization worked exactly as expected with this query: DYNAMIC SERVER FILTER BY FILE_ID IN (SS.FILE_ID) means a skip-scan instead of a full-scan over BULK_TABLE will be executed at runtime based on the values of
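The optimization described above can be sketched as follows (FILE_ID, SS, and BULK_TABLE are names from the thread; the remaining schema is hypothetical): the join-key values produced by SS at runtime are turned into a skip-scan over BULK_TABLE's key:

```sql
-- Hypothetical sketch. At runtime the FILE_ID values coming out of SS
-- drive a skip-scan over BULK_TABLE instead of a full scan, reported
-- in EXPLAIN as: DYNAMIC SERVER FILTER BY FILE_ID IN (SS.FILE_ID)
SELECT b.file_id, b.payload
FROM bulk_table b
JOIN (SELECT file_id FROM staging WHERE batch = 42) ss
  ON b.file_id = ss.file_id;
```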

Re: FW: Exception in sub plan[0] exception - multi inner join

2014-12-10 Thread Maryann Xue
Hi Siddharth, Thank you for attaching the log file! I didn't find any insufficient memory error, so my previous guess should be wrong. But unfortunately I couldn't seem to find any other useful information from the log regarding the exception you got. So the best way to identify the problem is

Re: Phoenix-136 did not support aggregate queries with derived tables in from clause

2014-12-05 Thread Maryann Xue
Hi Sun, Which version of Phoenix are you using? This feature is supported from 3.1 and 4.1. And there is no such error message in Phoenix code base now. Thanks, Maryann On Fri, Dec 5, 2014 at 3:16 AM, su...@certusnet.com.cn su...@certusnet.com.cn wrote: Hi,all Notice that PHOENIX-136

Re: PhoenixIOException - GlobalMemoryManager

2014-11-17 Thread Maryann Xue
Hi Ralph, I think this is a known issue reported as PHOENIX-1011 ( https://issues.apache.org/jira/browse/PHOENIX-1011). We are still looking at it. Will give you an update once it is solved. Thanks a lot for the very detailed information, Ralph! Thanks, Maryann On Mon, Nov 17, 2014 at 12:24

Re: PhoenixIOException - GlobalMemoryManager

2014-11-17 Thread Maryann Xue
Hi Ralph, You may want to check this problem against the latest release of Phoenix, because we just incorporated a fix for a similar issue in our 3.2.1 RC1 and 4.2.1 RC1. Thanks, Maryann. On Mon, Nov 17, 2014 at 6:32 PM, Maryann Xue <maryann@gmail.com> wrote: Hi Ralph, I think this is a known

Re: Getting InsufficientMemoryException

2014-09-30 Thread Maryann Xue
Hi Ashish, Could you please let us see your error message? Thanks, Maryann On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya ashishtapd...@gmail.com wrote: Hey Maryann, Thanks for your input. I tried both the properties but no luck. ~Ashish On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue

Re: Getting InsufficientMemoryException

2014-09-28 Thread Maryann Xue
...but phoenix.query.maxServerCacheBytes remains the default value of 100 MB. I get to see it when join fails. Thanks, ~Ashish On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue maryann@gmail.com wrote: Hi Ashish, The global cache size is set to either phoenix.query.maxServerCacheBytes
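The property discussed in this thread is set in hbase-site.xml; a sketch with an illustrative value only (the 100 MB default comes from the thread, the replacement value is an assumption for the example):

```xml
<!-- Illustrative value. phoenix.query.maxServerCacheBytes caps the
     size of the serialized hash-join cache; the default is 100 MB. -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>209715200</value> <!-- e.g. raise to 200 MB -->
</property>
```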

Re: Subqueries: Missing LPAREN

2014-09-25 Thread Maryann Xue
, We have already spotted PHOENIX-1168 and tracking it ;) Thanks for the patch! We have already downloaded it and will give it a try. JM 2014-09-24 15:39 GMT-04:00 Maryann Xue maryann@gmail.com: Hi JM, Think this sub-query feature is covered by PHOENIX-1168, for which a check

Re: JOIN and limit

2014-09-24 Thread Maryann Xue
Hi Abe, The expected behavior should be pushing the LIMIT to a (since it's a left outer join) while checking the limit again against the final joined results. If it does not work as expected, it should be a bug. Could you please verify it and report an issue with a test case attached? Thanks,
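A sketch of the expected behavior described above (hypothetical tables): in a left outer join every LHS row produces at least one output row, so the LIMIT can safely be pushed down to the LHS scan and then re-checked on the joined output:

```sql
-- Hypothetical tables. Each row of "a" appears at least once in the
-- output of a LEFT OUTER JOIN, so scanning only the first 10 rows of
-- "a" is safe; the LIMIT is applied again after the join because a
-- row may match more than once.
SELECT a.id, b.detail
FROM a LEFT OUTER JOIN b ON a.id = b.a_id
LIMIT 10;
```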

Re: Getting InsufficientMemoryException

2014-09-24 Thread Maryann Xue
Hi Vijay, I think here the query plan is scanning table *CUSTOMER_3 *while joining the other two tables at the same time, which means the region server memory for Phoenix should be large enough to hold 2 tables together and you also need to expect some memory expansion for java objects. Do