[
https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15991028#comment-15991028
]
ASF GitHub Bot commented on NIFI-3739:
--------------------------------------
Github user joewitt commented on a diff in the pull request:
https://github.com/apache/nifi/pull/1695#discussion_r114149213
--- Diff: nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumeKafkaRecord_0_10.java ---
@@ -160,6 +163,11 @@
             .name("success")
             .description("FlowFiles received from Kafka. Depending on demarcation strategy it is a flow file per message or a bundle of messages grouped by topic and partition.")
             .build();
+    static final Relationship REL_PARSE_FAILURE = new Relationship.Builder()
--- End diff --
@markap14 this is good, but I think we'll also need the metadata
(topic, partition, offset), as that would obviously be helpful to folks
troubleshooting such issues.
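The suggestion above can be sketched as a small helper that builds the attribute map a parse-failure FlowFile might carry. This is a minimal, self-contained illustration, not the processor's actual code: the helper name `parseFailureAttributes` and the plain-Map shape are assumptions made for the example, while the `kafka.topic` / `kafka.partition` / `kafka.offset` attribute keys follow the convention the existing ConsumeKafka processors use.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParseFailureAttributes {

    // Hypothetical helper: collects the Kafka source metadata that a
    // record routed to REL_PARSE_FAILURE could expose as FlowFile
    // attributes, so operators can locate the offending message.
    static Map<String, String> parseFailureAttributes(String topic, int partition, long offset) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("kafka.topic", topic);
        attrs.put("kafka.partition", String.valueOf(partition));
        attrs.put("kafka.offset", String.valueOf(offset));
        return attrs;
    }

    public static void main(String[] args) {
        // Example: a message from topic "events", partition 3, offset 1024
        // that failed to parse with the configured Record Reader.
        Map<String, String> attrs = parseFailureAttributes("events", 3, 1024L);
        System.out.println(attrs);
    }
}
```

In the real processor these values would be applied with `session.putAllAttributes(...)` before transferring the FlowFile to the parse-failure relationship; the sketch only shows which metadata the comment is asking for.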
> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
> Key: NIFI-3739
> URL: https://issues.apache.org/jira/browse/NIFI-3739
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> With the new record readers & writers that have now been added, it would
> be good to allow records to be pushed to and pulled from Kafka. Currently, we
> support demarcated data, but sometimes we can't correctly demarcate data in a
> way that keeps the format valid (JSON is a good example). We should have
> processors that use the record readers and writers for this.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)