[
https://issues.apache.org/jira/browse/CAMEL-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553864#comment-13553864
]
Willem Jiang commented on CAMEL-5971:
-------------------------------------
It's not effective to write to HDFS if we close the stream every time.
Maybe we can add an option to make the camel-hdfs producer close the
stream once it has finished processing.
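As a sketch only, such an option might be exposed on the producer endpoint URI. Note that `closeOnDone` below is a hypothetical parameter name used for illustration; it is not an existing camel-hdfs option:

```xml
<!-- Hypothetical sketch: "closeOnDone" is an illustrative option name,
     not part of the camel-hdfs component. The idea is that the producer
     would close the HdfsOutputStream after each completed exchange. -->
<route>
  <from uri="file:/local/workspace/inbox?delete=true"/>
  <to uri="hdfs://localhost:9000/local/workspace/outbox/file1?closeOnDone=true"/>
</route>
```

With a behavior like this, the ".opened" temporary file would be renamed to its final name as soon as the producer finishes, instead of only on route/context shutdown or when a split condition fires.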
> HdfsOutputStream is not closing
> -------------------------------
>
> Key: CAMEL-5971
> URL: https://issues.apache.org/jira/browse/CAMEL-5971
> Project: Camel
> Issue Type: Bug
> Components: camel-hdfs
> Affects Versions: 2.10.3
> Reporter: Joe Luo
> Assignee: Willem Jiang
>
> I have a simple camel route that takes file from a camel-file consumer
> endpoint and sends to a camel-hdfs producer endpoint:
> <from uri="file:/local/workspace/inbox?delete=true"/>
> <to uri="hdfs://localhost:9000/local/workspace/outbox/file1"/>
> However, my Hadoop server only creates a zero-length file "file1.opened"
> unless I stop the camel route or a splitting condition is met via a
> "splitStrategy" option added to the URI. In those cases, a file called "file1" is
> created with the proper contents and "file1.opened" disappears.
> Looking at the source code, it appears that the close() method of
> HdfsOutputStream is never called unless the camel route/context is
> stopping or we are splitting the file.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira