[ 
https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15990535#comment-15990535
 ] 

Joseph Witt commented on NIFI-3739:
-----------------------------------

[~markap14] So the things left, I believe:
1. If we fail to parse data against a given schema, it will currently fail, and 
when failing it complains about improper handling of an old flow file.
2. Once #1 is fixed, we also need to determine how best to handle schema parse 
failures.  We can either skip that message, or route it to failure along with 
all other messages that failed in a given pass in some sort of BytesRecord 
structure (a single flowfile per failure would perform very poorly, since 
failures are likely to happen in groups), or reset the Kafka offset to avoid 
ever skipping data, though then the consumer could fail to make progress until 
the problematic message is accounted for in the schema.  Perhaps these should 
be options for the user to select.
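To illustrate the second option above — grouping all parse failures from one consume pass into a single failure batch rather than one flowfile per failed message — here is a minimal sketch. The `parse_record` function and the in-memory batches are hypothetical stand-ins for a record reader and flowfile content, not actual NiFi or Kafka APIs.

```python
def parse_record(raw, schema):
    # Hypothetical parser: a record "matches" the schema if it has exactly
    # the expected field names; a real record reader would also check types.
    if set(raw) == set(schema):
        return raw
    raise ValueError("record does not match schema")

def consume_pass(messages, schema):
    """One consume pass: successes go to one batch, failures to another."""
    parsed, failed = [], []
    for msg in messages:
        try:
            parsed.append(parse_record(msg, schema))
        except ValueError:
            # Accumulate failures so the pass emits a single failure
            # batch, instead of one flowfile per failed message.
            failed.append(msg)
    return parsed, failed

schema = {"id", "name"}
msgs = [{"id": 1, "name": "a"}, {"id": 2}, {"id": 3, "junk": True}]
good, bad = consume_pass(msgs, schema)
# good holds the one valid record; bad groups both failures together
```

The offset-reset option would instead stop at the first failure and seek the consumer back to that message's offset, which guarantees no data loss but blocks progress until the schema accounts for the bad message.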


> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
>                 Key: NIFI-3739
>                 URL: https://issues.apache.org/jira/browse/NIFI-3739
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>
> With the new record readers & writers that have now been added, it would 
> be good to allow records to be pushed to and pulled from Kafka. Currently, we 
> support demarcated data, but sometimes we can't correctly demarcate data in a 
> way that keeps the format valid (JSON is a good example). We should have 
> processors that use the record readers and writers for this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
