[ https://issues.apache.org/jira/browse/SQOOP-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522513#comment-14522513 ]
Yibing Shi commented on SQOOP-2343:
-----------------------------------

Thank you for looking at this, [~jarcec]! Whether or not this is a MapReduce bug, adding some defensive code to the {{close}} method solves the problem with old versions of MapReduce, and it does no harm.

> AsyncSqlRecordWriter hangs if an exception is thrown in its close method
> ------------------------------------------------------------------------------
>
>                 Key: SQOOP-2343
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2343
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors
>    Affects Versions: 1.4.5
>            Reporter: Yibing Shi
>         Attachments: SQOOP-2343.patch
>
> In class {{AsyncSqlRecordWriter}}, if an exception is thrown in its close
> method, the Hadoop MapTask calls the close method once more in case the
> writer hasn't been closed. See the code snippet below (from method runNewMapper):
> {code}
> try {
>   input.initialize(split, mapperContext);
>   mapper.run(mapperContext);
>   mapPhase.complete();
>   setPhase(TaskStatus.Phase.SORT);
>   statusUpdate(umbilical);
>   input.close();
>   input = null;
>   output.close(mapperContext);
>   output = null;
> } finally {
>   closeQuietly(input);
>   closeQuietly(output, mapperContext);
> }
> {code}
> The second time the close method is called, the main thread gets stuck in
> executeUpdate while trying to put a new dbOp into the synchronous queue:
> by this point the worker thread has already ended, so no receiver will
> take that object, which leaves the putter (the main thread) blocked forever.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
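The hang mechanism described above can be reproduced in isolation: a {{SynchronousQueue}} has no internal buffer, so a {{put()}} blocks until some consumer calls {{take()}}. The sketch below (illustrative only, not Sqoop code; all class and variable names are made up) shows a hand-off succeeding while a worker thread is alive, then failing once the worker has exited, using a timed {{offer()}} to avoid actually hanging the demo:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Illustration of the stuck-putter scenario: once the consumer thread
// has ended, a blocking put() on a SynchronousQueue would never return.
public class SyncQueueDemo {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // Worker thread that consumes exactly one operation, then exits,
        // mimicking the writer's worker thread ending after the first close.
        Thread worker = new Thread(() -> {
            try {
                queue.take();
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();

        queue.put("first op");   // succeeds: the worker is waiting in take()
        worker.join();           // worker thread has now ended

        // A plain put() here would block forever, exactly like the second
        // call to close(); a timed offer() shows the hand-off failing instead.
        boolean accepted = queue.offer("second op", 200, TimeUnit.MILLISECONDS);
        System.out.println("second hand-off accepted: " + accepted);
    }
}
```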