Here is the exception I'm getting with a 4GB heap x 2 RS on a 500K-row RHS,
with resultset.close() added:

By the way, the data for my test is at
http://phoenix-bin.github.io/client/test/join.tar.gz — expand it in the
phoenix/bin directory and run test.sh. The query is in QRY.sql.

Exception:
Requested memory of 110432755 bytes is larger than global pool of 90507264 bytes.
    at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:72)
    at org.apache.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:32)
    at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:142)
    at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:91)
    at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:68)
    at org.apache.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:61)
    at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:85)
    at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:5634)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3924)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
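Since the PhoenixRuntime problem discussed below came down to a missed
ResultSet.close(), here is a minimal sketch of the close-on-exit pattern.
FakeResultSet is a hypothetical stand-in for java.sql.ResultSet so the
snippet runs without a cluster; real client code would close the ResultSet
returned by the join query the same way:

```java
// Demonstrates that try-with-resources always calls close() on block exit,
// which is what releases the server-side hash-cache memory a join allocates.
// FakeResultSet is a made-up stand-in, not a Phoenix class.
public class CloseDemo {
    static class FakeResultSet implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeResultSet tracked;
        try (FakeResultSet rs = new FakeResultSet()) {
            tracked = rs;
            // consume rows here; close() runs even if this block throws
        }
        System.out.println("closed=" + tracked.closed);
    }
}
```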

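For the DEFAULT_MAX_MEMORY_PERC experiment James suggests below, the
server-side knob (assuming the standard Phoenix property name backing that
constant) would go in hbase-site.xml on each region server, followed by an
RS restart:

```xml
<!-- Caps Phoenix's global memory pool at 20% of the region server heap.
     Property name assumed from QueryServicesOptions.DEFAULT_MAX_MEMORY_PERC. -->
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>20</value>
</property>
```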

On Thu, Feb 13, 2014 at 12:03 PM, James Taylor (JIRA) <[email protected]> wrote:

>
>     [
> https://issues.apache.org/jira/browse/PHOENIX-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900675#comment-13900675]
>
> James Taylor commented on PHOENIX-34:
> -------------------------------------
>
> bq. I suspect that the reason why Mujtaba was getting different results was
> due to the test driver he used, because I found a problem in PhoenixRuntime,
> which did not seem to call "ResultSet.close()" and hence did not release
> the memory assigned for the hash cache.
>
> Has this been JIRA'ed and fixed? [~mujtabachohan] - can you verify, please?
>
> bq. But when the RHS size reached 2M rows (roughly 400M in size), the
> Region Server just crashed.
>
> Can you try decreasing the QueryServicesOptions.DEFAULT_MAX_MEMORY_PERC
> down to 20 and rerun these tests? I want to make sure the client gets an
> InsufficientMemoryException instead of the RS crashing.
>
>
>
> > Insufficient memory exception on join when RHS rows count > 250K
> > -----------------------------------------------------------------
> >
> >                 Key: PHOENIX-34
> >                 URL: https://issues.apache.org/jira/browse/PHOENIX-34
> >             Project: Phoenix
> >          Issue Type: Bug
> >    Affects Versions: 3.0.0
> >         Environment: HBase 0.94.14, r1543222, Hadoop 1.0.4, r1393290, 2
> RS + 1 Master, Heap 4GB per RS
> >            Reporter: Mujtaba Chohan
> >             Fix For: 3.0.0
> >
> >
> > Join fails when the row count of the RHS table is >250K. Details on the
> table schema and performance numbers with different LHS/RHS row counts are
> at
> http://phoenix-bin.github.io/client/performance/phoenix-20140210023154.htm.
> > James' comment:
> > So that's with a 4GB heap allowing Phoenix to use 50% of it. With a
> pretty narrow table: 3 KV columns of 30 bytes. Topping out at 250K is a bit
> low. I wonder if our memory estimation matches reality.
> > What do you think Maryann?
> > How about filing a JIRA, Mujtaba. This is a good conversation to have on
> the dev list. Can we move it there, please?
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.1.5#6160)
>
