[ https://issues.apache.org/jira/browse/FLUME-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13914901#comment-13914901 ]

ASF subversion and git services commented on FLUME-2320:
--------------------------------------------------------

Commit 2fc69f54b45588a40fca06942d4530eb17ce51a0 in flume's branch 
refs/heads/flume-1.5 from [~hshreedharan]
[ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=2fc69f5 ]

FLUME-2320. Fixed Deadlock in DatasetSink

(Ryan Blue via Hari Shreedharan)


> Deadlock in DatasetSink
> -----------------------
>
>                 Key: FLUME-2320
>                 URL: https://issues.apache.org/jira/browse/FLUME-2320
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.5.0
>            Reporter: Ryan Blue
>            Assignee: Ryan Blue
>         Attachments: 
> 0002-FLUME-2320-Fix-deadlock-by-removing-writer-lock.patch, 
> FLUME-2320-1.patch, FLUME-2320-2.patch
>
>
> Lines 
> [251-252|https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-dataset-sink/src/main/java/org/apache/flume/sink/kite/DatasetSink.java#L251]
>  in DatasetSink contain a potential deadlock: if the transaction throws an 
> exception, the writer lock is never released, yet the same thread then tries 
> to acquire that same lock in the error handling.
> While the simplest solution is to move those two lines inside the try/finally 
> block, I think we can remove the lock completely by reverting to the original 
> version that rolled the files in the process() method. The original concern 
> about that design was that there needs to be some guarantee that all files 
> will be rolled. Because the SinkRunner has a max backoff, there is a 
> guaranteed maximum amount of time between calls to process(), so files will 
> still be rolled within a bounded interval.
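For readers less familiar with the failure mode described in the quoted report, the following is a minimal, hypothetical sketch of the pattern in plain Java. The class, method, and field names (WriterLockSketch, processBroken, handleError, writerLock, commit) are illustrative assumptions and do not reproduce the actual DatasetSink source; the sketch only shows why a lock that straddles a call which can throw, combined with a re-acquisition in the error path, locks up the sink, and why moving the unlock into a finally block avoids it.

{code:java}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Illustrative only: names and structure are hypothetical and do not
 * reproduce the actual DatasetSink code.
 */
public class WriterLockSketch {

    private final Lock writerLock = new ReentrantLock();

    // Broken shape: the lock straddles a call that can throw, so the
    // unlock on the success path is skipped when commit() fails.
    void processBroken() {
        writerLock.lock();
        commit();               // may throw
        writerLock.unlock();    // never reached if commit() throws
    }

    // Error handling then tries to take the same lock again. With a
    // non-reentrant lock the thread deadlocks on itself; with a reentrant
    // lock the hold count stays unbalanced and the lock is effectively
    // never released, so every other thread that needs it blocks forever.
    void handleError() {
        writerLock.lock();
        try {
            rollbackAndReopenWriter();
        } finally {
            writerLock.unlock();
        }
    }

    // The simple fix mentioned in the description: keep the unlock in a
    // finally block so the lock is released on both the success and the
    // failure path.
    void processFixed() {
        writerLock.lock();
        try {
            commit();
        } finally {
            writerLock.unlock();
        }
    }

    private void commit() { /* placeholder for the transaction work */ }

    private void rollbackAndReopenWriter() { /* placeholder for error handling */ }
}
{code}

The committed fix went further than processFixed() above: per the description, the lock was removed entirely in favor of rolling files inside process(), relying on the SinkRunner's bounded backoff to guarantee that process() runs, and files roll, within a bounded interval.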



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
