Joe, 

I’ll check out the disk space.  We are running 0.9.  If disk space is not the 
issue, we’ll give 0.8 a try.

Thanks very much for your quick reply.

Cheers,
Chris



On 3/16/16, 11:04 AM, "Joe Witt" <[email protected]> wrote:

>Chris,
>I have seen that error when the disk space Kafka relies on is full.  We've
>seen a number of interesting exceptions recently while testing various
>configurations, but I recommend checking that first.
>
>Also, what version of Kafka broker are you using?  With Apache NiFi
>0.5.x we moved to the Kafka 0.9 client.  In doing that we broke
>support for 0.8.  So with the upcoming release we will move back to
>the 0.8 client, which works great with both Kafka 0.8 and 0.9 brokers,
>albeit without the new SSL and Kerberos support added in the 0.9
>work.  We have a JIRA item to go after that in our next
>feature-bearing release.
>
>Thanks
>Joe
>
>On Wed, Mar 16, 2016 at 11:01 AM, McDermott, Chris Kevin (MSDU -
>STaTS/StorefrontRemote) <[email protected]> wrote:
>> I say strange because the timeout (63 ms) is so very short.  The
>> communication timeout I’ve set is 30 sec.  Has anyone else seen this?
>>
>> 2016-03-16 14:41:38,227 ERROR [Timer-Driven Process Thread-8]
>> o.apache.nifi.processors.kafka.PutKafka
>> PutKafka[id=852c8d42-a2fa-3478-b06b-84ceb66f8b0b] Failed to send
>> StandardFlowFileRecord[uuid=a0074162-0066-49e7-918b-cea1cfc5a955,claim=StandardContentClaim
>> [resourceClaim=StandardResourceClaim[id=1458079089737-67, container=default, section=67],
>> offset=377796, length=743],offset=0,name=2349680613178720,size=743] to Kafka;
>> routing to 'failure'; last failure reason reported was
>> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 63 ms.;:
>> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 63 ms.
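
For reference, as far as I can tell the "Failed to update metadata after N ms"
message comes from the Kafka producer blocking inside send() until topic
metadata is available; with the 0.9 client that wait is bounded by max.block.ms
(metadata.fetch.timeout.ms with the 0.8.2 client), so the 63 ms presumably
reflects whatever value the processor handed to the producer rather than the
30 sec communication timeout.  Below is a minimal sketch that reproduces the
same exception with the 0.9 client; the broker address localhost:9092 and the
topic name "test" are placeholders, not anything taken from the NiFi config.

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class MetadataTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Placeholder broker address; point this at a real (or unreachable) broker.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // With the 0.9 client, max.block.ms bounds how long send() waits for
        // topic metadata before giving up; 63 ms mirrors the value in the log.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "63");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("test", "hello".getBytes())).get();
            } catch (ExecutionException e) {
                // If the broker cannot serve metadata within 63 ms (down, wrong
                // port, wedged, ...), the future fails with
                // org.apache.kafka.common.errors.TimeoutException:
                // Failed to update metadata after 63 ms.
                System.err.println("Send failed: " + e.getCause());
            }
        }
    }
}

If the brokers turn out to be healthy and reachable, raising that producer-side
timeout (or whatever PutKafka property maps onto it) should make the short
63 ms window go away.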
