[ https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989764#comment-15989764 ]

Joseph Witt commented on NIFI-3739:
-----------------------------------

[~markap14] ok, so I've tested it quite a bit and things are looking pretty good.  
The earlier observation still needs to be looked into: when parsing data against 
a given schema fails, the processor errors out and complains about handling an 
older FlowFile reference, so there is some logic/reference issue in there.
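
For reference, the usual shape of that "older FlowFile reference" complaint is a 
stale-reference bug against the ProcessSession API: session methods like 
putAttribute return a new FlowFile reference, and handing the old one back to the 
session afterwards fails. A hypothetical minimal sketch of the pitfall (the 
relationship and attribute names are made up; this is not the actual processor code):

{code:java}
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class StaleReferenceSketch extends AbstractProcessor {

    static final Relationship REL_FAILURE = new Relationship.Builder()
            .name("failure")
            .description("Records that could not be parsed")
            .build();

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }

        // putAttribute returns a NEW FlowFile reference; the one passed in is now stale.
        session.putAttribute(flowFile, "record.parse.error", "schema mismatch");

        // BUG: transferring the stale reference makes the session complain that
        // this is not the most recent version of the FlowFile. The fix is to
        // reassign: flowFile = session.putAttribute(flowFile, ...);
        session.transfer(flowFile, REL_FAILURE);
    }
}
{code}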

The other thing is that the batching of records together is definitely not 
nearly as consistent as the raw consume processor. I routinely see 1, 2, 3, or 
generally just a few records per pull from Kafka. I think we can evaluate this 
later and optimize it if necessary, but it is something I'm seeing.
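
For comparison, on the plain Kafka client the number of records returned per poll 
is governed by the consumer fetch settings, which is probably where any later 
optimization would land. A minimal sketch against a recent Kafka client (the 
topic, group id, and tuning values here are illustrative only, not what the 
processor uses):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BatchingSketch {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "batching-sketch");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        // The usual levers for records-per-poll: wait for up to 64 KB of data
        // or 500 ms, whichever comes first, and cap the batch at 10000 records.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 65536);
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("records-topic"));
            final ConsumerRecords<String, String> batch = consumer.poll(Duration.ofSeconds(1));
            System.out.println("records in this poll: " + batch.count());
        }
    }
}
{code}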

> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
>                 Key: NIFI-3739
>                 URL: https://issues.apache.org/jira/browse/NIFI-3739
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>
> With the new record readers & writers that have now been added, it would be 
> good to allow records to be pushed to and pulled from Kafka. Currently, we 
> support demarcated data, but sometimes we can't correctly demarcate data in a 
> way that keeps the format valid (JSON is a good example). We should have 
> processors that use the record readers and writers for this.
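
To make the JSON point in the description above concrete: joining JSON records 
with a newline demarcator produces a payload that is not itself one valid JSON 
document, whereas a record-aware writer can emit a single valid document such as 
a JSON array. A small Jackson sketch (the field names are made up):

{code:java}
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;

public class DemarcationSketch {
    public static void main(String[] args) {
        final ObjectMapper strict = new ObjectMapper()
                .enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);

        // Each line is valid JSON on its own, but the demarcated payload as a
        // whole is not a single valid JSON document.
        final String demarcated = "{\"id\":1}\n{\"id\":2}";
        try {
            strict.readTree(demarcated);
        } catch (final Exception e) {
            System.out.println("demarcated payload rejected: " + e.getClass().getSimpleName());
        }

        // A record-aware writer can keep the format valid by emitting one
        // document, e.g. an array of records.
        final ArrayNode records = strict.createArrayNode();
        records.addObject().put("id", 1);
        records.addObject().put("id", 2);
        System.out.println(records); // [{"id":1},{"id":2}]
    }
}
{code}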



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
