[ https://issues.apache.org/jira/browse/PHOENIX-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133004#comment-15133004 ]
maghamravikiran commented on PHOENIX-2649:
------------------------------------------

Thanks [~sergey.soldatov] for the contribution. One minor nit: the static wildcard import below. One of us will address it during check-in.

{code}
import static org.apache.hadoop.hbase.util.Bytes.*;
{code}

[~gabriel.reid], [~giacomotaylor] Can I have a go-ahead from one of you before the patch is pushed?

> GC/OOM during BulkLoad
> ----------------------
>
>                 Key: PHOENIX-2649
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2649
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Mac OS, Hadoop 2.7.2, HBase 1.1.2
>            Reporter: Sergey Soldatov
>            Assignee: maghamravikiran
>            Priority: Critical
>             Fix For: 4.7.0
>
>         Attachments: PHOENIX-2649-1.patch, PHOENIX-2649-2.patch, PHOENIX-2649-3.patch, PHOENIX-2649.patch
>
>
> Phoenix fails to complete a bulk load of 40 MB of CSV data, hitting a GC heap error during the reduce phase. The problem is in the comparator for TableRowkeyPair: it expects the serialized value to have been written using zero-compressed encoding, but at least in my case it was written the regular way. So, when it tries to obtain the lengths of the table name and row key, it always gets zero and reports that the byte arrays are equal. As a result, the reducer receives all the data produced by the mappers in a single reduce call and fails with OOM.
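For reference, here is a minimal, self-contained sketch of the encoding mismatch described above. The class and method names are illustrative only, not the actual TableRowkeyPair or comparator code: the writer emits plain 4-byte length prefixes while the comparator decodes them as zero-compressed vints, so every pair of keys compares as equal and the shuffle delivers all mapper output to a single reduce call.

{code}
// Hypothetical demo of the bug described in the issue, not Phoenix source.
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.WritableComparator;

public class EncodingMismatchDemo {

    /** Writes (tableName, rowKey) the "regular" way: fixed 4-byte length prefixes. */
    static byte[] writeRegular(byte[] tableName, byte[] rowKey) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeInt(tableName.length);   // fixed-length int, not zero-compressed
        out.write(tableName);
        out.writeInt(rowKey.length);
        out.write(rowKey);
        out.close();
        return baos.toByteArray();
    }

    /** Compares two serialized keys assuming zero-compressed (vint) length prefixes. */
    static int compareAssumingVInt(byte[] b1, byte[] b2) throws IOException {
        // The leading byte of a 4-byte int prefix is 0 for any realistic length,
        // and a vint whose first byte is 0 decodes to 0. The table name (and, by
        // the same logic, the row key) therefore looks empty, so every pair of
        // keys compares as equal.
        int tableLen1 = WritableComparator.readVInt(b1, 0);  // returns 0 instead of 7
        int tableLen2 = WritableComparator.readVInt(b2, 0);  // returns 0 instead of 7
        return Bytes.compareTo(b1, 1, tableLen1, b2, 1, tableLen2);
    }

    public static void main(String[] args) throws IOException {
        byte[] k1 = writeRegular(Bytes.toBytes("TABLE_A"), Bytes.toBytes("row-1"));
        byte[] k2 = writeRegular(Bytes.toBytes("TABLE_B"), Bytes.toBytes("row-2"));
        System.out.println(compareAssumingVInt(k1, k2));  // prints 0 ("equal")
    }
}
{code}

Running the main method prints 0 even though the two keys differ in both table name and row key, which is the "all keys look equal" behaviour that funnels every record into one reduce call and triggers the OOM.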