[
https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989251#comment-15989251
]
Mark Payne commented on NIFI-3739:
----------------------------------
OK [~joewitt] thanks for testing! I pushed a new commit, which I believe
addresses all of the points above. The Hwx registry service was properly
caching the schema text but not the schema version. I addressed that, and I
fixed the issue with writing MAP record fields with Avro. I am seeing several
hundred messages per FlowFile - your lower number could perhaps be limited by
the "Max Uncommitted Time" property? I also addressed the Exception that was
getting thrown, indicating that the Producer was already closed. Give it a try
if you get a chance and let me know if you run into any other problems.
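
For context on the caching fix: a minimal sketch, hypothetical and not the
actual NiFi registry client code, of keying a schema cache on both name and
version. Caching by name alone (the bug described above) would keep returning
the originally fetched schema text even after a lookup for a newer version:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the real NiFi Hortonworks registry service:
// a cache keyed on (schema name, schema version) so that different
// versions of the same schema never collide.
class SchemaCache {
    private final Map<String, String> cache = new HashMap<>();

    // Combine name and version into one cache key; caching on the name
    // alone would serve stale schema text after the schema evolves.
    private String key(String name, int version) {
        return name + ":" + version;
    }

    String get(String name, int version) {
        return cache.get(key(name, version));
    }

    void put(String name, int version, String schemaText) {
        cache.put(key(name, version), schemaText);
    }
}
```

With this keying, a request for version 2 is a cache miss until version 2 has
actually been fetched, rather than silently returning the version 1 text.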
> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
> Key: NIFI-3739
> URL: https://issues.apache.org/jira/browse/NIFI-3739
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> With the new record readers & writers that have been added in now, it would
> be good to allow records to be pushed to and pulled from Kafka. Currently, we
> support demarcated data, but sometimes we can't correctly demarcate data in a
> way that keeps the format valid (JSON is a good example). We should have
> processors that use the record readers and writers for this.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)