[ https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988069#comment-15988069 ]

Joseph Witt commented on NIFI-3739:
-----------------------------------

That appears to be an error in the error-handling path, but the root issue was:

2017-04-27 22:19:40,327 ERROR [Timer-Driven Process Thread-7] o.a.n.p.k.pubsub.PublishKafkaRecord_0_10 PublishKafkaRecord_0_10[id=b24fcf7f-015b-1000-483b-27e62dcb54dd] Failed to send all message for StandardFlowFileRecord[uuid=a02d2c58-d678-43d4-ad6d-cf63889abf22,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1493345946912-1, container=default, section=1], offset=0, length=874200],offset=0,name=445255826618354,size=874200] to Kafka; routing to failure due to org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value {path=./, filename=445238586773886, uuid=6cd03301-cc18-4411-94ee-e75fa03cd1cb} of type class java.util.HashMap to a Map: {}
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value {path=./, filename=445238586773886, uuid=6cd03301-cc18-4411-94ee-e75fa03cd1cb} of type class java.util.HashMap to a Map
        at org.apache.nifi.avro.WriteAvroResult.convertToAvroObject(WriteAvroResult.java:146)
        at org.apache.nifi.avro.WriteAvroResult.createAvroRecord(WriteAvroResult.java:69)
        at org.apache.nifi.avro.WriteAvroResultWithExternalSchema.write(WriteAvroResultWithExternalSchema.java:85)
        at org.apache.nifi.processors.kafka.pubsub.PublisherLease.publish(PublisherLease.java:107)
        at org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_0_10$1.process(PublishKafkaRecord_0_10.java:340)
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2120)
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2090)
        at org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_0_10.onTrigger(PublishKafkaRecord_0_10.java:335)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:144)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
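
For illustration only, here is a minimal, hypothetical sketch (not the actual NiFi source) of the failure mode the trace above suggests: a writer like WriteAvroResult.convertToAvroObject dispatches on the *schema's* declared type rather than the value's Java runtime type, so a java.util.HashMap of FlowFile attributes can still be rejected when schema resolution picks the wrong declared type for that field. The class, enum, and method below are invented stand-ins for the real NiFi/Avro types.

```java
import java.util.HashMap;
import java.util.Map;

public class AvroConversionSketch {

    // Stand-in for org.apache.avro.Schema.Type (hypothetical, simplified)
    enum SchemaType { STRING, MAP }

    // Simplified stand-in for WriteAvroResult.convertToAvroObject: the
    // branch taken depends on the schema's declared type, not on the value.
    static Object convertToAvroObject(Object rawValue, SchemaType declared) {
        switch (declared) {
            case MAP:
                if (rawValue instanceof Map) {
                    return rawValue; // real code would convert each entry too
                }
                break;
            case STRING:
                if (rawValue instanceof String) {
                    return rawValue;
                }
                break;
        }
        // Declared type and value disagree: give up, producing a message
        // shaped like the one in the log above.
        throw new IllegalArgumentException("Cannot convert value " + rawValue
                + " of type " + rawValue.getClass() + " to a " + declared);
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("path", "./");
        attrs.put("filename", "445238586773886");

        // Declared MAP, value is a Map: conversion succeeds
        convertToAvroObject(attrs, SchemaType.MAP);

        // Schema resolution chose a non-map type for a Map-valued field:
        // the same value is now rejected, despite being a perfectly good Map
        try {
            convertToAvroObject(attrs, SchemaType.STRING);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point of the sketch is that "Cannot convert ... java.util.HashMap to a Map" is not a contradiction in the value itself; it points at how the expected type was resolved before the conversion check ran.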

> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
>                 Key: NIFI-3739
>                 URL: https://issues.apache.org/jira/browse/NIFI-3739
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>
> With the new record readers & writers that have now been added, it would 
> be good to allow records to be pushed to and pulled from Kafka. Currently, we 
> support demarcated data, but sometimes we can't correctly demarcate data in a 
> way that keeps the format valid (JSON is a good example). We should have 
> processors that use the record readers and writers for this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)