Re: Problem with UPSERT SELECT with CHAR field

2014-08-19 Thread Maryann Xue
Thank you Josh for reporting the issue! On Tue, Aug 19, 2014 at 5:27 PM, Josh Mahonin wrote: > To update the list, this bug appears to have been fixed: > > Issue was captured here: > https://issues.apache.org/jira/browse/PHOENIX-1182 > > And fixed here: > > https://github.com/apache/phoenix/com

Re: Subqueries: Missing "LPAREN"

2014-09-24 Thread Maryann Xue
Hi JM, Think this sub-query feature is covered by PHOENIX-1168, for which a check-in is expected very soon. Thanks, Maryann On Wed, Sep 24, 2014 at 9:06 AM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > Hi, > > Is it possible to run sub-queries with Phoenix? Something like this: > >

Re: JOIN and limit

2014-09-24 Thread Maryann Xue
Hi Abe, The expected behavior should be pushing the LIMIT to a (since it's a left outer join) while checking the limit again against the final joined results. If it does not work as expected, it should be a bug. Could you please verify it and report an issue with a test case attached? Thanks, Mary

Re: Getting InsufficientMemoryException

2014-09-24 Thread Maryann Xue
Hi Vijay, I think here the query plan is scanning table *CUSTOMER_3 *while joining the other two tables at the same time, which means the region server memory for Phoenix should be large enough to hold 2 tables together and you also need to expect some memory expansion for java objects. Do yo

Re: Subqueries: Missing "LPAREN"

2014-09-25 Thread Maryann Xue
> Hi Maryann, > > We have already spotted PHOENIX-1168 and tracking it ;) Thanks for the > patch! > > We have already downloaded it and will give it a try. > > JM > > 2014-09-24 15:39 GMT-04:00 Maryann Xue : > > Hi JM, >> >> Think this sub-query feature

Re: Getting InsufficientMemoryException

2014-09-26 Thread Maryann Xue
gion server or just the file present in the class > path of Phoenix client? > > Regards, > Vijay Raajaa G S > > On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue > wrote: > >> Hi Vijay, >> >> I think here the query plan is scanning table *CUSTOMER_3 *while &

Re: Getting InsufficientMemoryException

2014-09-26 Thread Maryann Xue
> hbase-site.xml's. However, it does not take effect. > > Phoenix 3.1 > HBase .94 > > Thanks, > ~Ashish > > On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue > wrote: > >> Yes, you should make your modification on each region server, since this >

Re: Getting InsufficientMemoryException

2014-09-28 Thread Maryann Xue
as well...but "phoenix.query.maxServerCacheBytes" > remains the default value of 100 MB. I get to see it when join fails. > > Thanks, > ~Ashish > > On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue > wrote: > >> Hi Ashish, >> >> The global cache size

Re: Getting InsufficientMemoryException

2014-09-30 Thread Maryann Xue
Hi Ashish, Could you please let us see your error message? Thanks, Maryann On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya wrote: > Hey Maryann, > > Thanks for your input. I tried both the properties but no luck. > > ~Ashish > > On Sun, Sep 28, 2014 at 8:31 PM, Maryann

Re: Getting InsufficientMemoryException

2014-10-04 Thread Maryann Xue
; at java.lang.Thread.run(Thread.java:744) > > I am setting hbase heap to 4 GB and phoenix properties are set as below > > > phoenix.query.maxServerCacheBytes > 2004857600 > > > phoenix.query.maxGlobalMemoryPercentage > 40 > >
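The flattened property values in the quote above were originally hbase-site.xml XML whose tags the archive stripped. Restored as a sketch (values as quoted in the thread; per the rest of this thread, the file must be on each region server and on the client CLASSPATH):

```xml
<!-- hbase-site.xml: Phoenix server cache settings quoted above -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>2004857600</value>
</property>
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>40</value>
</property>
```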

Re: Getting InsufficientMemoryException

2014-10-07 Thread Maryann Xue
On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya wrote: > Maryann, > > hbase-site.xml was not on CLASSPATH and that was the issue. Thanks for the > help. I appreciate it. > > ~Ashish > > > > On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue wr

Re: Subqueries: Missing "LPAREN"

2014-10-07 Thread Maryann Xue
t; hear back from you. > > JM > > 2014-09-25 11:44 GMT-04:00 Maryann Xue : > > Hi JM, >> >> Sorry that I made a mistake earlier. Your query should be covered by >> https://issues.apache.org/jira/browse/PHOENIX-945. Will keep you updated >> on the progress of t

Re: Getting InsufficientMemoryException

2014-10-09 Thread Maryann Xue
>>> >>> >>> Thanks, >>> Maryann >>> >>> >>> On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya >> > wrote: >>> >>>> Maryann, >>>> >>>> hbase-site.xml was not on CLASSPATH and that was t

Re: PhoenixIOException - GlobalMemoryManager

2014-11-17 Thread Maryann Xue
Hi Ralph, I think this is a known issue reported as PHOENIX-1011 ( https://issues.apache.org/jira/browse/PHOENIX-1011). We are still looking at it. Will give you an update once it is solved. Thanks a lot for the very detailed information, Ralph! Thanks, Maryann On Mon, Nov 17, 2014 at 12:24 PM

Re: PhoenixIOException - GlobalMemoryManager

2014-11-17 Thread Maryann Xue
Hi Ralph, You may want to check this problem against the latest release of Phoenix, coz we just incorporated a fix for a similar issue in our 3.2.1 RC1 and 4.2.1 RC1. Thanks, Maryann On Mon, Nov 17, 2014 at 6:32 PM, Maryann Xue wrote: > Hi Ralph, > > I think this is a known issue re

FW: Hi

2014-11-18 Thread Maryann Xue
Hi Siddharth, It's not clear what inner exception you are getting. Would be nice if you can post the entire stack trace of the exception. Anyway, one of the possible reasons could be insufficient memory error, coz the query is by default executed as a star join which means the latter three tables

Re: query timeouts

2014-11-24 Thread Maryann Xue
Hi Ralph, I am not sure if the problem is Phoenix specific. Looks to me that it might be related to https://issues.apache.org/jira/browse/HBASE-11295. However, this setting value seems to be too large. If the default 6 ms for "hbase.rpc.timeout" is not working, you might want to multiply the

Re: Phoenix-136 did not support aggregate queries with derived tables in from clause

2014-12-05 Thread Maryann Xue
Hi Sun, Which version of Phoenix are you using? This feature is supported from 3.1 and 4.1. And there is no such error message in Phoenix code base now. Thanks, Maryann On Fri, Dec 5, 2014 at 3:16 AM, su...@certusnet.com.cn < su...@certusnet.com.cn> wrote: > Hi,all > Notice that PHOENIX-136 >

Re: Re: Phoenix-136 did not support aggregate queries with derived tables in from clause

2014-12-05 Thread Maryann Xue
count after groupby query. Noting that in > mysql or oracle this kind of query works well. > > Is there any available alternative approach to get the results using the > current sql support? If so, please kindly tell me. > > Thanks, > Sun. > -- &g

Re: FW: Exception in sub plan[0] exception - multi inner join

2014-12-10 Thread Maryann Xue
Hi Siddharth, Thank you for attaching the log file! I didn't find any insufficient memory error, so my previous guess should be wrong. But unfortunately I couldn't seem to find any other useful information from the log regarding the exception you got. So the best way to identify the problem is to

Re: Query performance question

2014-12-12 Thread Maryann Xue
Hi Ralph, Thanks for the question! According to the "explain" result you got, the optimization worked exactly as expected with this query: "DYNAMIC SERVER FILTER BY FILE_ID IN (SS.FILE_ID)" means a skip-scan instead of a full-scan over BULK_TABLE will be executed at runtime based on the values of

Re: Query performance question

2014-12-15 Thread Maryann Xue
elect file_id, recnum from > BULK_TABLE) as SS on BULK_TABLE.file_id = SS.file_id and BULK_TABLE.recnum > = SS.recnum”? > > The full-scan join fails with a MaxServerCacheSizeExceededException - > server cache set to 1G. > > Custom hbase/phoenix settings are attached. > > Thanks, > Ralph

Re: Query performance question

2014-12-15 Thread Maryann Xue
And one more thing: the version of Phoenix you are running. On Mon, Dec 15, 2014 at 3:21 PM, Maryann Xue wrote: > > Hi Ralph, > > Thank you very much for the information! Very helpful for your questions. > The numbers look reasonable as opposed to the query plan. But the only &g

Re: Query performance question

2014-12-15 Thread Maryann Xue
Perko > Reply-To: "user@phoenix.apache.org" > Date: Monday, December 15, 2014 at 12:37 PM > > To: "user@phoenix.apache.org" > Subject: Re: Query performance question > >DDL is attached – thanks! > > Ralph > > > From: Maryann Xue >

Re: FW: Join Queries

2014-12-16 Thread Maryann Xue
Hi Siddharth, Could you please run "explain <your query>" and post the query plan you got? Thanks, Maryann On Tue, Dec 16, 2014 at 9:31 AM, Siddharth Ubale < siddharth.ub...@syncoms.com> wrote: > > Just a correction; > > > > Order_details :40 > > Orders :20 > > Rest of the tables 10 > > > > Th

Re: Query performance question

2014-12-18 Thread Maryann Xue
from around 177s to 8s! The explain plan now shows the > entire pk being used. > >__ > *Ralph Perko* > Pacific Northwest National Laboratory > (509) 375-2272 > ralph.pe...@pnnl.gov > > > From: Maryann Xue >

Re: Phoenix Subqueries with ‘IN’

2015-01-15 Thread Maryann Xue
Hi Xiaoguo, Do you mean you have hit a bug in Phoenix? The query is expected to return nothing but returns all rows? Thanks, Maryann On Thu, Jan 15, 2015 at 9:02 PM, 【小郭】 wrote: > Hi guys: > When using the subquery with 'IN', if the subquery returns no rows, the > query will find all rows.

Re: Phoenix Subqueries with ‘IN’

2015-01-15 Thread Maryann Xue
This has been verified as a bug. Just filed https://issues.apache.org/jira/browse/PHOENIX-1591 for it. Thank you very much for reporting this, Xiaoguo! You can expect it to be fixed in Phoenix 4.3. On Thu, Jan 15, 2015 at 10:43 PM, Maryann Xue wrote: > Hi Xiaoguo, > > Do you mean you h
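The standard-SQL behavior that PHOENIX-1591 restores can be checked with any conforming engine. A minimal sketch using SQLite in place of Phoenix, with hypothetical table and column names:

```python
import sqlite3

# Standard-SQL semantics: an IN subquery that returns no rows must make
# the outer query return no rows. The reported bug was Phoenix returning
# all rows instead. Table/column names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE flagged (order_id INTEGER);
    INSERT INTO orders VALUES (1, 'new'), (2, 'shipped');
""")
# 'flagged' is empty, so the subquery yields no rows ...
rows = conn.execute(
    "SELECT id FROM orders WHERE id IN (SELECT order_id FROM flagged)"
).fetchall()
print(rows)  # ... and the outer query returns [], not every order
```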

Re: indexed query question

2015-01-19 Thread Maryann Xue
Hi Ralph, I think in your case this is indeed a nice approach. Given that INTERSECT is not yet supported in Phoenix, you can instead use AND to connect your conditions, which would work almost as efficiently as applying INTERSECT on your inner queries: SELECT * FROM t WHERE pk IN (SELECT pk from
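A minimal sketch of the AND-for-INTERSECT rewrite described above, run against SQLite since only standard SQL is involved; table t and columns pk, q1, q2 are hypothetical stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (pk INTEGER PRIMARY KEY, q1 TEXT, q2 TEXT);
    INSERT INTO t VALUES (1, 'a', 'x'), (2, 'a', 'y'), (3, 'b', 'x');
""")
# INTERSECT of the two inner queries ...
intersect = conn.execute("""
    SELECT pk FROM t WHERE q1 = 'a'
    INTERSECT
    SELECT pk FROM t WHERE q2 = 'x'
""").fetchall()
# ... matches the AND-connected IN form suggested in the reply
and_form = conn.execute("""
    SELECT pk FROM t
    WHERE pk IN (SELECT pk FROM t WHERE q1 = 'a')
      AND pk IN (SELECT pk FROM t WHERE q2 = 'x')
""").fetchall()
print(intersect, and_form)  # both queries return only pk 1
```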

Re: Phoenix Subqueries with ‘IN’

2015-01-27 Thread Maryann Xue
---+ > > The "Query2" result is not right; it should return the same as "Query1" > > > -- Original Message -- > *From:* "Maryann Xue";; > *Sent:* Friday, January 16, 2015, 12:09 PM > *To:* "user@phoenix.apache.

Re: Inner Join not returning any results in Phoenix

2015-02-20 Thread Maryann Xue
Hi Matt, The error you got with "Limit Rows" off might be related to insufficient memory on region servers for one of your tables. Which is the larger table between table1 and table2? You might want to try putting the larger table as the first table in your join query and see if it works. And I w

Re: Inner Join not returning any results in Phoenix

2015-02-20 Thread Maryann Xue
rather than the SELECT? > > > > I will try increasing the memory available to the Region Servers as well > to see if that helps. > > > > Thanks! > > Matt > > > > > > *From:* Maryann Xue [mailto:maryann@gmail.com] > *Sent:* 20 February 2015 16:28

Re: Inner Join not returning any results in Phoenix

2015-02-20 Thread Maryann Xue
t in the wrong place? Does it need to go next to the > JOIN rather than the SELECT? > > > > I will try increasing the memory available to the Region Servers as well > to see if that helps. > > > > Thanks! > > Matt > > > > > > *From:* Maryann Xu

Re: Inner Join not returning any results in Phoenix

2015-02-23 Thread Maryann Xue
t working. Is upgrading versions of Phoenix > as simple as removing the previous jar from HBase lib folder and dropping > the new Phoenix jar in (and restarting HBase)? Will all the existing > Phoenix tables and views be backwards-compatible and work with the new > version? &

Re: Inner Join not returning any results in Phoenix

2015-02-24 Thread Maryann Xue
t; > I’m not really sure how to read that, but it does seem to suggest that > ‘mytable2’ is being limited to 100 – thoughts? > > > > Cheers, > > Matt > > > > *From:* Maryann Xue [mailto:maryann@gmail.com] > *Sent:* 23 February 2015 18:10 > > *To:* user

Re: Inner Join not returning any results in Phoenix

2015-02-24 Thread Maryann Xue
t; > > On Tuesday, February 24, 2015, Maryann Xue wrote: > >> Thanks a lot, Matt, for the reply! Very helpful. "*SERVER FILTER BY >> PageFilter 100*" does look like a bug here. I will try again to >> reproduce it. >> >> >> Thanks, >> Ma

Re: Inner Join not returning any results in Phoenix

2015-02-24 Thread Maryann Xue
ta was much > more coherent (because it was manually created for a specific test) so the > RHS table always matches the LHS and therefore the join gives me results. > > > > I have attached a script that demonstrates my problem (create 2 Phoenix > tables, insert some rows, and

Re: Using Hints in Phoenix

2015-03-09 Thread Maryann Xue
Hi Matt, So far in Phoenix, hints are only supported as specified right after keywords SELECT, UPSERT and DELETE. Same for join queries. It is currently impossible to hint a certain join algorithm for a specific join node in a multiple join query. However, for subqueries, the inner query can have
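The placement rule above can be sketched as follows; the table and column names are hypothetical, and USE_SORT_MERGE_JOIN is one of the documented Phoenix join hints:

```sql
-- Sketch of hint placement: the hint comment sits immediately after the
-- SELECT keyword and applies to the whole statement, not to one join node.
SELECT /*+ USE_SORT_MERGE_JOIN */ o.order_id, c.name
FROM orders AS o
JOIN customers AS c ON o.customer_id = c.customer_id;
```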

Re: Using Hints in Phoenix

2015-03-10 Thread Maryann Xue
le1 > > SERVER AGGREGATE INTO SINGLE ROW > > PARALLEL INNER-JOIN TABLE 0 (SKIP MERGE) > > CLIENT 15-CHUNK PARALLEL 1-WAY FULL SCAN OVER mytable2 > > > > Cheers, > > Matt > > > > > > *From:* Maryann Xue [mailto:maryann@gmail.co

Re: Using Hints in Phoenix

2015-03-11 Thread Maryann Xue
ck up this feature). I am actually joining views rather than tables > – would this make a difference? > > > > Cheers, > > Matt > > > > *From:* Maryann Xue [mailto:maryann@gmail.com] > *Sent:* 10 March 2015 20:54 > > *To:* user@phoenix.apache.org > *Subj

Re: Using Hints in Phoenix

2015-03-12 Thread Maryann Xue
uot;bId" = m2."bId"* > > > > *CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER mytest1* > > *SERVER AGGREGATE INTO SINGLE ROW* > > *PARALLEL INNER-JOIN TABLE 0 (SKIP MERGE)* > > * CLIENT 5-CHUNK PARALLEL 1-WAY FULL SCAN OVER mytest2* > > &

Re: Using Hints in Phoenix

2015-03-12 Thread Maryann Xue
message had a little mistake there. Could you please verify your Phoenix library version again, Matt? Especially the client. Thanks, Maryann On Thu, Mar 12, 2015 at 6:00 PM, Maryann Xue wrote: > Hi Matt, > > Thanks for sharing the query. Using that hint should supposedly force > sor

Re: Using Hints in Phoenix

2015-03-17 Thread Maryann Xue
And I have opened up the jar, and can see the new class > *SortMergeJoinPlan.class*, so presumably I have the right version – is > there anything else I can check? > > > > Cheers, > > Matt > > > > *From:* Maryann Xue [mailto:maryann@gmail.com] > *Sent:* 12 March

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
} > > > > stmtLimited.close(); > > conn.close(); > > > > And I get: > > > > Executing statement > > CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER TESTTABLE1 > > PARALLEL INNER-JOIN TABLE 0 >

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
By the way, to answer your previous questions, Phoenix joins have not started to use stats so far, but the hints are parsed and handled in a universal way regardless of what type of query it is. Thanks, Maryann On Thu, Mar 19, 2015 at 12:03 PM, Maryann Xue wrote: > Hi Matt, > > Thank

Re: Using Hints in Phoenix

2015-03-19 Thread Maryann Xue
|* > > *| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER TESTTABLE2 |* > > *| SERVER SORTED BY [T2.ROWID] |* > > *| CLIENT MERGE SORT|* > > *+--+* > > *8 rows selected (0.032 se

Re: Non-equi joins

2015-03-25 Thread Maryann Xue
Actually we do in 4.3, but apparently not in an efficient way. If no equi conditions are specified, Phoenix simply does cross join and applies a post filter to the cross joined results. And we do not support non-equi conditions in ON clause, so non-equi outer join is currently impossible. But if yo
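The cross-join-plus-post-filter evaluation described above produces the same rows as the non-equi join, just less efficiently. A small SQLite sketch of the equivalence, with hypothetical tables a and b:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (y INTEGER);
    INSERT INTO a VALUES (1), (5);
    INSERT INTO b VALUES (3), (4);
""")
# Explicit cross join with a post filter: this is how the reply says
# Phoenix evaluates a non-equi join, materializing |a| x |b| pairs first.
cross_then_filter = conn.execute(
    "SELECT a.x, b.y FROM a CROSS JOIN b WHERE a.x < b.y"
).fetchall()
# The same condition written as a non-equi join predicate.
non_equi_join = conn.execute(
    "SELECT a.x, b.y FROM a JOIN b ON a.x < b.y"
).fetchall()
print(sorted(cross_then_filter) == sorted(non_equi_join))  # True
```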

Re: Non-equi joins

2015-03-25 Thread Maryann Xue
Yes, 4.3 only. On Wed, Mar 25, 2015 at 1:25 PM, Jaime Solano wrote: > Thanks for your response, Maryann! > > Again, this suggestion is for 4.3 only, right? > On Mar 25, 2015 12:34 PM, "Maryann Xue" wrote: > >> Actually we do in 4.3, but apparently not

Re: Non-equi joins

2015-03-25 Thread Maryann Xue
; > On Wed, Mar 25, 2015 at 1:59 PM, Maryann Xue > wrote: > >> Yes, 4.3 only. >> >> On Wed, Mar 25, 2015 at 1:25 PM, Jaime Solano >> wrote: >> >>> Thanks for your response, Maryann! >>> >>> Again, this suggestion is for 4.3 only, righ

Re: indexed query question

2015-04-07 Thread Maryann Xue
those "or" conditions. For example: SELECT * FROM t WHERE pk IN (SELECT pk from t where q1 = ? UNION ALL SELECT pk from t where q2 = ?); Thanks, Maryann On Mon, Jan 19, 2015 at 2:34 PM, Maryann Xue wrote: > Hi Ralph, > > I think in your case this is indeed a nice a
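The UNION ALL rewrite quoted above can be tried with plain standard SQL. A SQLite sketch with literal values in place of the bind parameters; table t and its columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (pk INTEGER PRIMARY KEY, q1 TEXT, q2 TEXT);
    INSERT INTO t VALUES (1, 'a', 'x'), (2, 'b', 'x'), (3, 'c', 'y');
""")
# OR-connected conditions expressed as a UNION ALL inside the IN subquery;
# IN deduplicates, so pk 1 matching both branches appears once.
rows = conn.execute("""
    SELECT pk FROM t WHERE pk IN (
        SELECT pk FROM t WHERE q1 = ?
        UNION ALL
        SELECT pk FROM t WHERE q2 = ?
    )
""", ('a', 'x')).fetchall()
print(sorted(rows))  # pk 1 matches q1='a'; pks 1 and 2 match q2='x'
```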

Re: Error when using aggregates with correlated subqueries

2015-04-22 Thread Maryann Xue
Hi Khaleel, Thanks a lot for reporting the problem, which looks like a bug. I will file a JIRA and keep you posted. One question though, why would we use MAX(H."timestamp") instead of H."timestamp"? What difference would it make? Thanks, Maryann On Wed, Apr 22, 2015 at 9:45 AM, khaleel mershad

Re: Error when using aggregates with correlated subqueries

2015-04-23 Thread Maryann Xue
recent) that is > "approved" (which is checked using the condition H."status" = 'approved') > > > > Best Regards, > Khaleel > > On Thu, Apr 23, 2015 at 6:22 AM, Maryann Xue > wrote: > >> Hi Khaleel, >> >> Thanks a lot for r

Re: Error when using aggregates with correlated subqueries

2015-04-25 Thread Maryann Xue
tamp") from "History" AS H2 > where (H2."rowId" = W."rowId")) AND > (H."status" = 'approved') ) > > In this query I specify that I need to select the exact version which has > the Maximum timestamp among all versions of the same data

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
rick here is that we can use the reference to the outer query > within the next subquery level only, and not up to two levels as I was > doing? Maybe this limitation exists because Phoenix joins the tables from > the outer and the inner correlated query, but it can perform this join up &g

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
9),'[^:]+'), > the query works fine (which is the query from my last email). So I think > the problem is not with EXISTS, but with the fact that a reference to a > table from the outer query (W in my query) can be done up to a maximum one > nested level, and not more than that. In o

Re: Error when using aggregates with correlated subqueries

2015-04-26 Thread Maryann Xue
ith using IN instead of EXISTS and I will see if it > produces an error. > > > > Best, > Khaleel > > On Sun, Apr 26, 2015 at 7:14 PM, Maryann Xue > wrote: > >> Hi Khaleel, >> >> Thanks for looking into the problem! But there IS a bug with EXISTS >

Re: Join create OOM with java heap space on phoenix client

2015-05-26 Thread Maryann Xue
Hi Krunal, Sorry for the late reply. I have been on vacation. 1. Can you make sure that the connection/statement is closed after each run of your query (even with exception)? 2. You might want to try switching the join tables in your query first by putting the larger table as LHS, and if it stil

Re: Join create OOM with java heap space on phoenix client

2015-06-01 Thread Maryann Xue
a! > - Krunal > > > From: Maryann Xue > Reply-To: "user@phoenix.apache.org" > Date: Tuesday, May 26, 2015 at 5:45 PM > To: "user@phoenix.apache.org" > Subject: Re: Join create OOM with java heap space on phoenix client > > Hi Krunal, > > So

Re: Join create OOM with java heap space on phoenix client

2015-06-16 Thread Maryann Xue
has any idea why below join is throwing OOM error? I will > really appreciate any help here. We are stuck here is as none of our join > works even with 5M rows. > > From: Krunal > Reply-To: "user@phoenix.apache.org" > Date: Wednesday, June 10, 2015 at

Re: Limitation with limit?

2015-06-20 Thread Maryann Xue
Hi Bahubali, Thanks for reporting the issue! Could you please file a JIRA and add some details? I'll verify and fix it as soon as I can. Thanks, Maryann On Fri, Jun 19, 2015 at 3:53 AM, Bahubali Jain wrote: > Hi, > Is there any issue related to usage of limit ? > select table1.x1,count(table.

Re: StackOverflowError

2015-06-22 Thread Maryann Xue
Hi Bahubali, Could you please share your query? Thanks, Maryann On Mon, Jun 22, 2015 at 12:51 PM, Bahubali Jain wrote: > Hi, > I am running into below error when I execute a query which has a > join,group by and order by. > But when I run the same query with hint /*+ USE_SORT_MERGE_JOIN*/ ,

Re: count distinct

2015-06-23 Thread Maryann Xue
Which version of Phoenix are you using? On Tuesday, June 23, 2015, Michael McAllister wrote: > Hi > > (This questions relates to Phoenix 4.2 on HDP 2.2) > > I have a situation where I want to count the distinct combination of a > couple of columns. > > When I try the following:- > > select

Re: count distinct

2015-06-23 Thread Maryann Xue
Sorry, I missed the first line. Your second query should work with Phoenix 4.3 or later. I will investigate the problem with the first one and get back to you. Thanks, Maryann On Tuesday, June 23, 2015, Michael McAllister wrote: > Hi > > (This questions relates to Phoenix 4.2 on HDP 2.2) >

Re: Join create OOM with java heap space on phoenix client

2015-06-30 Thread Maryann Xue
insufficient memory for the Java Runtime Environment to > continue. > > # Native memory allocation (mmap) failed to map 12288 bytes for committing > reserved memory. > > # An error report file with more information is saved as: > > # /home/varajiyak/phoenix/bin/hs_err_pid3404

Re: Strategy on joining on partial keys

2015-06-30 Thread Maryann Xue
Hi Yiannis, Could you please post your UPSERT query and the approximate size of both tables? And does it happen every time you try to do the query? Thanks, Maryann On Mon, Jun 29, 2015 at 6:19 AM, Yiannis Gkoufas wrote: > Hi there, > > I have two tables I want to join. > > TABLE_A: ( (A,B), C

Re: Could not find hash cache for joinId

2015-07-03 Thread Maryann Xue
Hi Alex, Most likely what happened was as suggested by the error message: the cache might have expired. Could you please check if there are any Phoenix warnings in the client log and share your query? Thanks, Maryann On Fri, Jul 3, 2015 at 4:01 PM, Alex Kamil wrote: > getting this error with

Re: Could not find hash cache for joinId

2015-07-06 Thread Maryann Xue
ngs >>> SELECT '\''||C.ROWKEY||'\'' AS RK, C.VS FROM test.table1 AS C JOIN >>> (SELECT DISTINCT B.ROWKEY, B.VS FROM test.table2 AS B) B ON >>> (C.ROWKEY=B.ROWKEY AND C.VS=B.VS) LIMIT 2147483647; >>> >>> thanks >>> Ale

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
Hi Alex, I suspect it's related to using cached region locations that might have been invalid. A simple way to verify this is try starting a new java process doing this query and see if the problem goes away. Thanks, Maryann On Mon, Jul 6, 2015 at 10:56 PM, Maryann Xue wrote: > Than

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
wrote: > a patch would be great, we saw that this problem goes away in standalone > mode but reappears on the cluster > > On Tue, Jul 7, 2015 at 12:56 PM, Alex Kamil wrote: > >> sure, sounds good >> >> On Tue, Jul 7, 2015 at 10:57 AM, Maryann Xue >> wrote:

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
ther client? > > thanks > Alex > > On Tue, Jul 7, 2015 at 1:20 PM, Maryann Xue wrote: > >> My question was actually if the problem appears on your cluster, will it >> go away if you just start a new process doing the same query? I do have a >> patch, but it only f

Re: Could not find hash cache for joinId

2015-07-08 Thread Maryann Xue
only on the cluster but works in stand-alone mode > > - are there any settings to be set on server or client side in the code or > in hbase-site.xml to enable multitenancy? > - were there any bug fixes related to multitenancy or cache management in > joins since 3.3.0 > > thank

Could not find hash cache for joinId

2015-07-08 Thread Maryann Xue
Hi Alex, Could you please try this new patch? Thanks, Maryann On Wed, Jul 8, 2015 at 3:53 PM, Maryann Xue > wrote: > Thanks again for all this information! Would you mind checking a couple > more things for me? For test.table1, does it have its regions on all region > servers in

Re: Error: Encountered exception in sub plan [0] execution.

2015-09-11 Thread Maryann Xue
Hi Alberto, Could you please check in your server log if there's an ERROR, probably something like InsufficientMemoryException? Thanks, Maryann On Fri, Sep 11, 2015 at 7:04 AM, Alberto Gonzalez Mesas wrote: > Hi! > > I create two tables: > > CREATE TABLE "Customers2" ("CustomerID" VARCHAR NOT

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread Maryann Xue
Hi Aaron, As Jaime pointed out, it is a non-equi join. And unfortunately it is handled as CROSS join in Phoenix and thus is not very efficient. For each row from the left side, it will be joined with all of the rows from the right side before the condition is applied to filter the joined result.

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread Maryann Xue
omewhere to bump up? > > On Fri, Sep 11, 2015 at 10:45 AM, Maryann Xue > wrote: > >> Hi Aaron, >> >> As Jaime pointed out, it is a non-equi join. And unfortunately it is >> handled as CROSS join in Phoenix and thus is not very efficient. For each >> row

Re: failing integration tests in DerivedTableIT

2015-09-14 Thread Maryann Xue
Thank you, James! I have assigned the issue to myself. On Mon, Sep 14, 2015 at 7:39 AM James Heather wrote: > Reported as > > https://issues.apache.org/jira/browse/PHOENIX-2257 > > On 14/09/15 12:24, James Heather wrote: > > I also have two failing integration tests in DerivedTableIT: > > > > Fa

Re: failing integration tests in DerivedTableIT

2015-09-14 Thread Maryann Xue
bug, or reliance on a Java 7 > implementation detail that isn't contractual... > > James > > > On 14/09/15 18:38, Maryann Xue wrote: > > Thank you, James! I have assigned the issue to myself. > > On Mon, Sep 14, 2015 at 7:39 AM James Heather > wrote: > >> Reported

Re: Error: Encountered exception in sub plan [0] execution.

2015-09-21 Thread Maryann Xue
Hi Alberto, Please make sure you have setup your environment correctly according to https://phoenix.apache.org/installation.html. Thanks, Maryann On Mon, Sep 21, 2015 at 12:46 PM, Alberto Gonzalez Mesas < agme...@hotmail.com> wrote: > cause: not found the phoenix client library in HBASE_HOME/l

Re: When will be the stats based join selector be implemented?

2015-10-05 Thread Maryann Xue
Hi Li, We are moving towards integrating with Calcite as our stats based optimization now. You can checkout our calcite branch and play with it if you are interested. It's still under development, but you can

Re: When will be the stats based join selector be implemented?

2015-10-05 Thread Maryann Xue
d these dependencies? > > Thanks, > > Li > > > > On Mon, Oct 5, 2015 at 12:19 PM, Maryann Xue > wrote: > >> Hi Li, >> >> We are moving towards integrating with Calcite as our stats based >> optimization now. You can checkout our calcite >&

Re: When will be the stats based join selector be implemented?

2015-10-08 Thread Maryann Xue
in the code >- pointers to how the filter predicate push down is implemented in the >code > > Examples would be greatly appreciated. > > Thanks, > Li > > > On Mon, Oct 5, 2015 at 5:49 PM, Maryann Xue wrote: > >> Hi Li, >> >> Sorry, I forgot to m

Re: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

2015-11-05 Thread Maryann Xue
Hi Don, Thank you very much for finding the issue. Would you mind filing a Phoenix JIRA? Thanks, Maryann On Thu, Nov 5, 2015 at 3:08 PM Don Brinn wrote: > Hi, > > > > I am seeing the following error when doing an INNER JOIN of a view with > MULTI_TENANT=true with any other table or view: > > java

Re: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

2015-11-05 Thread Maryann Xue
> Thanks, > > > > Don Brinn > > > > *From:* Maryann Xue [mailto:maryann@gmail.com] > *Sent:* Thursday, November 5, 2015 5:13 PM > *To:* user@phoenix.apache.org > *Subject:* Re: Error "could not find hash cache for joinId" when doing > Inner Join with any

Re: JOIN returning incomplete result

2016-05-12 Thread Maryann Xue
Hi Pierre, Thank you very much for reporting this issue! Can you create a JIRA with all the information you've attached above along with the table DDL info? I'll take a look at it. Thanks, Maryann On Thu, May 12, 2016 at 6:18 AM, pierre lacave wrote: > Hi > > I am seeing weird result with joi

Re: Phoenix Upsert with SELECT behaving strange

2016-05-17 Thread Maryann Xue
Hi Radha, Thanks for reporting this issue! Would you mind trying it with latest Phoenix version? Thanks, Maryann On Tue, May 17, 2016 at 8:19 AM, Radha krishna wrote: > Hi I am performing some join operation in phoenix console and storing the > result into another table but the same query some

Re: querying time for Apache Phoenix

2016-07-29 Thread Maryann Xue
Hi James, I have filed a JIRA https://issues.apache.org/jira/browse/PHOENIX-3129 for using global index for such queries without hint. Feel free to watch and comment on this issue. Thanks, Maryann On Wed, Jul 27, 2016 at 12:29 PM, James Taylor wrote: > On Wed, Jul 27, 2016 at 8:07 AM, Heather

Re: high client cpu usage

2016-08-25 Thread Maryann Xue
Hi John, Would you mind sharing the query plan for this query (by running "EXPLAIN <query>")? Thanks, Maryann On Thu, Aug 25, 2016 at 11:19 AM, John Leach wrote: > Yeah, this query. > > QUERY: > SELECT SUM(L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY > FROM > TPCH.LINEITEM, > TPCH.PART > WHERE > P_PARTKE

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Maryann Xue
Thank you very much for your answer, Michael! Yes, what Cheyenne tried to use was simply not the right grammar. Thanks, Maryann On Mon, Sep 19, 2016 at 10:47 AM, Michael McAllister < mmcallis...@homeaway.com> wrote: > This is really an ANSI SQL question. If you use an aggregate function, > then

Re: Hash join confusion

2016-09-28 Thread Maryann Xue
Yes, Sumit, the sub-query will get cached in hash join. Are you using multi-tenancy for these tables? If yes, you might want to checkout Phoenix 4.7 or 4.8, since a related bug fix got in the 4.7 release. https://issues.apache.org/jira/browse/PHOENIX-2381?jql=project%20%3D%20PHOENIX%20AND%20text%20

Re: Hash join confusion

2016-09-28 Thread Maryann Xue
900 issue. > > Switching to sort merge join helped. But not sure if that is the right > solution going forward. > > Thanks again! > Sumit > > > -- > *From:* Maryann Xue > *To:* "user@phoenix.apache.org" ; Sumit Nigam < > s

Re: Hash join confusion

2016-10-01 Thread Maryann Xue
that > be enough or I need to sort both the driving query and subquery with same > order by for merge sort? > > As an aside, is there a document to interpret explain plan? > > Thanks, > Sumit > > ------ > *From:* Maryann Xue > *To:* Sumit

Re: Hash join confusion

2016-10-04 Thread Maryann Xue
algorithm would be enough. I would > assume that changing the hash join to sort-merge join would not alter the > query results, right? Do I need to re-write my query? > > I am using global index. > > Thanks, > Sumit > > ---------- > *From:*

Re: Hash join confusion

2016-10-04 Thread Maryann Xue
- > *From:* Sumit Nigam > *To:* "user@phoenix.apache.org" > *Sent:* Wednesday, October 5, 2016 12:13 AM > > *Subject:* Re: Hash join confusion > > Thanks Maryann. > > I will share the details in a few hours. > > Under heavy load scenario, the defaul

Re: Hash join confusion

2016-10-06 Thread Maryann Xue
t regards, > Sumit > > ---------- > *From:* Maryann Xue > *To:* "user@phoenix.apache.org" ; Sumit Nigam < > sumit_o...@yahoo.com> > *Sent:* Wednesday, October 5, 2016 11:27 AM > *Subject:* Re: Hash join confusion > > Not sure if it's related, coz your DDL does not h

Re: Phoenix query performance

2017-02-22 Thread Maryann Xue
Hi Pradheep, Thank you for posting the query and the log file! There are two things going on on the server side at the same time here. I think it'd be a good idea to isolate the problem first. So a few questions: 1. When you say data size went from "< 1M" to 30M, did the data from both LHS and RHS

Re: Phoenix query performance

2017-02-22 Thread Maryann Xue
Hi Pradheep, Thank you for the answers! Please see my response inline. On Wed, Feb 22, 2017 at 12:39 PM, Pradheep Shanmugam < pradheep.shanmu...@infor.com> wrote: > Hi Maryann > > Please find my answers inline. > > Thanks, > Pradheep > > From: Maryann Xue > R

Re: Phoenix join on derived table and documentation

2017-09-22 Thread Maryann Xue
Hi Ryan, Could you please try: select MAIN.EBELN as PO_DOC, MAIN.EBELP as PO_DOC_ITEM_NUM, XREF.DELV_DOC from EKPO as MAIN left outer join (select distinct EBELN as PO_DOC, EBELP as PO_DOC_ITEM_NUM, BELNR as DELV_DOC from EKBE where BEWTP = 'L' and MENGE != 0 and VGABE = '8') as XREF on

Re: SELECT + ORDER BY vs self-join

2017-10-30 Thread Maryann Xue
I suspect this problem is similar to PHOENIX-4288. On Mon, Oct 30, 2017 at 11:26 PM James Taylor wrote: > Please file a JIRA and include the explain plan for each of the queries. I > suspect your index is not being used in the first query due to the > selection of all the columns. You can try hin

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-08 Thread Maryann Xue
Hi Gerald, Thank you for finding this issue! I think it is similar to PHOENIX-4508. I'll verify your case on the latest Phoenix branch and see if it has been fixed. Thanks, Maryann On Tue, May 8, 2018 at 12:24 PM, Gerald Sangudi wrote: > Hello, > > I'm running Phoenix 4.13 on AWS EMR and gett

Re: SORT_MERGE_JOIN on non-leading key: server-side sorting

2018-05-09 Thread Maryann Xue
Hi Gerald, I have verified against latest Phoenix code that this problem has been fixed. I have also checked Phoenix 4.13 release tags. Looks like all versions of 4.13 packages now include that fix. Would you mind getting the latest Phoenix-4.13 package and testing it again? Thank you! Thanks, M
