I see. That makes some sense.

I've been reading about the improvements, and it's reassuring to hear that
the new version will likely help us out a lot. We do not plan on falling
this far behind again.

*Jeremy Farbota*
Software Engineer, Data
Payoff, Inc.

[email protected]
(217) 898-8110

On Fri, Jun 23, 2017 at 12:40 PM, Joe Witt <[email protected]> wrote:

> Jeremy
>
> It is possible that backpressure was being engaged in NiFi and causing our
> consumer code to handle it poorly.  We did fix that a while ago and I think
> it ended up in NiFi 1.2.0 (off the top of my head, anyway).  Between your
> current release and the latest 1.3.0 release, a few quite useful bug fixes
> have landed for those processors, and we've added processors that let you
> consume and publish record objects; if you've read about the record
> reader/writer stuff at all, I bet you'll find them really helpful for your
> flows.
>
> Thanks
>
> On Fri, Jun 23, 2017 at 3:31 PM, Jeremy Farbota <[email protected]>
> wrote:
>
>> Hello,
>>
>> I'm having issues today with my ConsumeKafka_0_10 processors (Kafka is
>> 0.10.1 on 3 nodes; NiFi is a 1.0.0 3-node cluster). They are all throwing
>> this error seemingly with each new batch (see attached). We are not seeing
>> errors on other client consumers (Clojure, Spark).
>>
>> My questions are:
>>
>> 1) Does this error indicate that some offsets might not be getting
>> consumed, or does the consumer restart and re-read from the offset where
>> the problem occurred? Can I safely ignore this error for the time being,
>> since messages seem to keep coming through regardless?
>>
>> 2) I reduced max.poll.records to 10 and I'm still getting this error. I
>> also increased the heap and restarted the service on each node. I got this
>> error shortly after I clicked to look at a provenance event on a processor.
>> I've had an issue in the past where I clicked to look at a provenance event
>> and one node went down from a bufferOverload. Is it possible that there is
>> some connection between this error and some background provenance process
>> that I can kill? Could this be a memory issue? Is this a known bug with the
>> consumer?
>>
>> We're upgrading to 1.3.0 next week. Is it possible that the upgrade will
>> fix this issue with ConsumeKafka_0_10?
>>
>>
>> *Jeremy Farbota*
>> Software Engineer, Data
>> Payoff, Inc.
>>
>> [email protected]
>>
>
>
