[ 
https://issues.apache.org/jira/browse/PHOENIX-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid resolved PHOENIX-333.
----------------------------------

    Resolution: Fixed

Bulk resolve of closed issues imported from GitHub. This status was reached by 
first re-opening all closed imported issues and then resolving them in bulk.

> break down mutations for large data set?
> ----------------------------------------
>
>                 Key: PHOENIX-333
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-333
>             Project: Phoenix
>          Issue Type: Task
>            Reporter: Raymond Liu
>
> In MutationState.commit, the mutations for the whole table are sent in a 
> single hTable.batch operation. I think this will cause problems for, e.g., 
> an upsert select that runs on the client side, or when autocommit is off: 
> when the data set is large, it requires a huge amount of memory, and even 
> when memory is sufficient, it will probably lead to timeouts.
> I believe I encountered an out-of-memory error (or GC errors complaining 
> that very little memory could be reclaimed) on a table that was not 
> actually very large.
> I guess breaking the batch down, say by scan line size or similar, might 
> help with this issue.
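
The suggested fix amounts to splitting the client-side mutation list into bounded batches and submitting them one at a time instead of in a single hTable.batch call. Below is a minimal sketch of that chunking step; `ChunkedCommit`, `chunk`, and `batchSize` are illustrative names, not actual Phoenix APIs or settings.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedCommit {
    // Split a large mutation list into fixed-size batches so that each
    // subsequent hTable.batch() call stays bounded in memory and RPC size.
    // batchSize is a hypothetical knob, not a real Phoenix configuration key.
    static <T> List<List<T>> chunk(List<T> mutations, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < mutations.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                mutations.subList(i, Math.min(i + batchSize, mutations.size()))));
        }
        return batches;
    }
}
```

Each batch would then be handed to hTable.batch and released from client memory before the next one is built, keeping peak memory proportional to the batch size rather than the full data set.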



--
This message was sent by Atlassian JIRA
(v6.2#6252)
