[
https://issues.apache.org/jira/browse/SQOOP-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010475#comment-14010475
]
Gwen Shapira commented on SQOOP-934:
------------------------------------
Andrey,
Indeed, the patch does not solve the problem for Oracle.
It fixes the issue for the generic JDBC connection manager, but unfortunately
the OracleManager (which implements its own connection pool) was not fixed in
this patch.
Since this JIRA is already marked as resolved, it would be great if you could
open a new JIRA specifically for the Oracle case.
As a workaround, I'd use Oraoop, which inherits from the generic connection
manager (where the problem is resolved) without implementing its own connection
pool.
> JDBC Connection can timeout after import but before hive import
> ---------------------------------------------------------------
>
> Key: SQOOP-934
> URL: https://issues.apache.org/jira/browse/SQOOP-934
> Project: Sqoop
> Issue Type: Improvement
> Affects Versions: 1.4.2
> Reporter: Jarek Jarcec Cecho
> Assignee: Raghav Kumar Gautam
> Fix For: 1.4.4
>
> Attachments: SQOOP-934-2.patch, SQOOP-934.patch
>
>
> Our current [import
> routine|https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/tool/ImportTool.java#L385]
> imports data into HDFS and then tries to do the Hive import. As the connection
> to the remote server is opened only once at the beginning, it might time out
> during a very long MapReduce job. I believe that we should ensure that the
> connection is still valid before performing the Hive import.
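The validity check described above can be sketched with the standard JDBC `Connection.isValid(int)` method. This is a minimal illustration, not Sqoop's actual code: `ensureValid` and the reconnect supplier are hypothetical helpers, and the proxy-backed connections in `main` merely simulate a session that timed out during a long MapReduce job.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Supplier;

public class ConnectionCheck {

    // Hypothetical helper: return the existing connection if it still
    // answers a validity probe, otherwise obtain a fresh one before the
    // Hive import step. Not Sqoop's actual API.
    public static Connection ensureValid(Connection conn,
                                         Supplier<Connection> reconnect) {
        try {
            if (conn != null && conn.isValid(5)) { // 5-second probe timeout
                return conn;
            }
        } catch (SQLException e) {
            // Treat any error from the probe as "stale"; fall through.
        }
        return reconnect.get();
    }

    public static void main(String[] args) {
        // Fake connection whose isValid() reports false, standing in for a
        // server-side session that timed out while MapReduce was running.
        Connection stale = (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, margs) ->
                "isValid".equals(method.getName()) ? Boolean.FALSE : null);

        // Fake replacement connection whose isValid() reports true.
        Connection fresh = (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, margs) ->
                "isValid".equals(method.getName()) ? Boolean.TRUE : null);

        Connection result = ensureValid(stale, () -> fresh);
        System.out.println(result == fresh ? "reconnected" : "reused");
        // prints "reconnected"
    }
}
```

Note that `isValid` is only available on JDBC 4.0+ drivers; older drivers typically substitute a cheap probe query instead, which is why a real fix has to live inside each connection manager rather than in one shared place.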
--
This message was sent by Atlassian JIRA
(v6.2#6252)