[
https://issues.apache.org/jira/browse/SQOOP-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14218711#comment-14218711
]
Hudson commented on SQOOP-1125:
-------------------------------
FAILURE: Integrated in Sqoop-ant-jdk-1.6-hadoop100 #916 (See
[https://builds.apache.org/job/Sqoop-ant-jdk-1.6-hadoop100/916/])
SQOOP-1125: Out of memory errors when number of records to import < 0.5 *
splitSize (jarcec:
https://git-wip-us.apache.org/repos/asf?p=sqoop.git&a=commit&h=2b4e4d9bf04e1b5686dbe54c467a816bf4a11a3e)
* src/java/org/apache/sqoop/mapreduce/db/BigDecimalSplitter.java
* src/test/org/apache/sqoop/mapreduce/db/TestBigDecimalSplitter.java
> Out of memory errors when number of records to import < 0.5 * splitSize
> -----------------------------------------------------------------------
>
> Key: SQOOP-1125
> URL: https://issues.apache.org/jira/browse/SQOOP-1125
> Project: Sqoop
> Issue Type: Bug
> Affects Versions: 1.4.3
> Reporter: Dave Kincaid
> Assignee: Sai Karthik Ganguru
> Priority: Critical
> Labels: newbie
> Fix For: 1.4.6
>
> Attachments: sqoop_final.patch
>
>
> We are getting out of memory errors during import if the number of records to
> import is less than 0.5 * splitSize (and the resulting split size is a
> nonterminating decimal).
> For example, if numSplits = 3, minVal = 100, and maxVal = 101, then
> BigDecimalSplitter.split() adds an extraordinary number of tiny values to the
> splits List and eventually runs out of memory.
> I also noticed that there are no tests for BigDecimalSplitter.
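The failure mode described above can be sketched with a simplified split loop (a hypothetical illustration, not the actual BigDecimalSplitter source): if the computed increment ends up far smaller than (maxVal - minVal) / numSplits, the loop appends split points until the heap is exhausted, which is why the fix guards the increment with a minimum value.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {

  // Simplified stand-in for a range-splitting loop: walk from min to max
  // in steps of `step`, collecting each boundary. The list size is
  // proportional to (max - min) / step, so a pathologically small step
  // produces an enormous list.
  static List<BigDecimal> split(BigDecimal min, BigDecimal max, BigDecimal step) {
    List<BigDecimal> splits = new ArrayList<>();
    BigDecimal cur = min;
    while (cur.compareTo(max) < 0) {
      splits.add(cur);
      cur = cur.add(step);
    }
    splits.add(max); // close the final interval at maxVal
    return splits;
  }

  public static void main(String[] args) {
    // A sane step over [100, 101] yields a handful of split points.
    List<BigDecimal> ok = split(new BigDecimal("100"),
        new BigDecimal("101"), new BigDecimal("0.34"));
    System.out.println(ok.size()); // 4 split points

    // With a step like 1E-20 the same loop would try to build ~1e20
    // entries and run out of memory, which is why clamping the step to a
    // sensible minimum increment is the safeguard the patch introduces.
  }
}
```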
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)