[
https://issues.apache.org/jira/browse/PHOENIX-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13916561#comment-13916561
]
Mujtaba Chohan edited comment on PHOENIX-34 at 2/28/14 11:19 PM:
-----------------------------------------------------------------
[~maryannxue] I checked with Sqlline as well and got the same exception, and
also with Java code that explicitly closes the ResultSet. Please do check in
your fix and I'll retest. Thanks.
> Insufficient memory exception on join when RHS rows count > 250K
> -----------------------------------------------------------------
>
> Key: PHOENIX-34
> URL: https://issues.apache.org/jira/browse/PHOENIX-34
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 3.0.0
> Environment: HBase 0.94.14, r1543222, Hadoop 1.0.4, r1393290, 2 RS +
> 1 Master, Heap 4GB per RS
> Reporter: Mujtaba Chohan
> Fix For: 3.0.0
>
>
> Join fails when the row count of the RHS table is >250K. Details on the table
> schema and performance numbers with different LHS/RHS row counts are at
> http://phoenix-bin.github.io/client/performance/phoenix-20140210023154.htm.
> James comment:
> So that's with a 4GB heap allowing Phoenix to use 50% of it, and a pretty
> narrow table: 3 KV columns of 30 bytes each. Topping out at 250K is a bit
> low. I wonder if our memory estimation matches reality.
> What do you think, Maryann?
> How about filing a JIRA, Mujtaba. This is a good conversation to have on the
> dev list. Can we move it there, please?
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)