I tried a 0.9.0 broker and got the same error. Not sure if it makes a
difference, but I'm using Confluent Platform 2.0 and 3.0 for this testing.

On Wed, Jun 8, 2016 at 5:20 PM Jesse Anderson <[email protected]> wrote:

> Still open to screensharing and resolving over a hangout.
>
> On Wed, Jun 8, 2016 at 5:19 PM Raghu Angadi <[email protected]> wrote:
>
>> On Wed, Jun 8, 2016 at 1:56 PM, Jesse Anderson <[email protected]>
>> wrote:
>>
>>> [pool-2-thread-1] INFO org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>> resuming eventsim-0 at default offset
>>>
>> [...]
>>>
>> [pool-2-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser -
>>> Kafka commitId : 23c69d62a0cabf06
>>> [pool-2-thread-1] WARN org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>> getWatermark() : no records have been read yet.
>>> [pool-77-thread-1] INFO org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>> Returning from consumer pool loop
>>> [pool-78-thread-1] WARN org.apache.beam.sdk.io.kafka.KafkaIO - Reader-0:
>>> exception while fetching latest offsets. ignored.
>>>
>>
>> This reader was closed before the exception. The exception comes from an
>> action taken during close and can be ignored. The main question is why the
>> reader was closed...
>>
>
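For context, the pattern Raghu describes -- a background offset fetch whose failure
after the reader has been closed is merely logged and ignored -- looks roughly like
the Java sketch below. This is an illustrative sketch only, not KafkaIO's actual
implementation; the class and helper names (OffsetFetcherSketch, fetchLatestOffsets)
are made up for illustration.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: a periodic offset fetch that logs and ignores
// exceptions, since a failure after close() is expected and harmless.
public class OffsetFetcherSketch {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

  public void start() {
    executor.scheduleAtFixedRate(() -> {
      try {
        fetchLatestOffsets(); // hypothetical helper standing in for the consumer call
      } catch (Exception e) {
        // Once close() has run, the underlying consumer is gone and this
        // fetch is expected to fail; the warning can be ignored.
        System.err.println("exception while fetching latest offsets. ignored. " + e);
      }
    }, 0, 1, TimeUnit.SECONDS);
  }

  public void close() {
    closed.set(true);
    executor.shutdownNow(); // an in-flight fetch may still throw; that is the logged warning
  }

  private void fetchLatestOffsets() {
    if (closed.get()) {
      throw new IllegalStateException("consumer already closed");
    }
    // ... query the broker for end offsets here ...
  }
}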
