[ https://issues.apache.org/jira/browse/SQOOP-1617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199339#comment-14199339 ]

Hudson commented on SQOOP-1617:
-------------------------------

SUCCESS: Integrated in Sqoop-ant-jdk-1.6-hadoop200 #948 (See 
[https://builds.apache.org/job/Sqoop-ant-jdk-1.6-hadoop200/948/])
SQOOP-1617: MySQL fetch-size behavior changed with SQOOP-1400 (abraham: 
https://git-wip-us.apache.org/repos/asf?p=sqoop.git&a=commit&h=2de5c850ec4a3ce07ccdfe932ceac7171af7351a)
* src/java/org/apache/sqoop/manager/MySQLManager.java


> MySQL fetch-size behavior changed with SQOOP-1400
> -------------------------------------------------
>
>                 Key: SQOOP-1617
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1617
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors/mysql
>    Affects Versions: 1.4.6
>         Environment: CDH 5.2
> sqoop 1.4.5 (seems to include SQOOP-1400)
> mysql connector version 5.1.33
>            Reporter: Jürgen Thomann
>            Assignee: Jarek Jarcec Cecho
>            Priority: Minor
>             Fix For: 1.4.6
>
>         Attachments: SQOOP-1617.patch
>
>
> SQOOP-1400 changed the default behavior of the connector to load everything 
> into memory. The only working way to get the old streaming behavior back is to 
> use --fetch-size -2147483648 (Integer.MIN_VALUE).
> It would be nice if that could be changed, and/or documented that MySQL does 
> not support an arbitrary fetch size: it only supports either row-by-row 
> streaming or loading the entire result set into memory.
> The issue is discussed for example here:
> http://community.cloudera.com/t5/Data-Ingestion-Integration/Sqoop-GC-overhead-limit-exceeded-after-CDH5-2-update/td-p/20604
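For context, a minimal Java sketch of the streaming condition described above (class and method names here are hypothetical, not from the Sqoop or Connector/J codebase): MySQL Connector/J streams rows one at a time only when the fetch size is exactly Integer.MIN_VALUE; any other value buffers the full result set in client memory.

```java
// Hypothetical sketch of Connector/J's fetch-size behavior; not actual
// Sqoop or MySQL Connector/J code.
public class FetchSizeCheck {

    // Connector/J streams row-by-row only when the fetch size is exactly
    // Integer.MIN_VALUE; every other value loads all rows into memory.
    static boolean enablesStreaming(int fetchSize) {
        return fetchSize == Integer.MIN_VALUE;
    }

    public static void main(String[] args) {
        // The value passed on the Sqoop command line: --fetch-size -2147483648
        System.out.println(Integer.MIN_VALUE);                    // -2147483648
        System.out.println(enablesStreaming(Integer.MIN_VALUE)); // true
        System.out.println(enablesStreaming(1000));               // false
    }
}
```

In real JDBC code the equivalent is calling stmt.setFetchSize(Integer.MIN_VALUE) on a statement created with ResultSet.TYPE_FORWARD_ONLY and ResultSet.CONCUR_READ_ONLY, which is what a streaming-capable manager has to arrange.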



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)