Hard to say what the root cause is.

If you get `OutOfMemoryError`, it seems you need to increase the memory
provided to the JVM. Does this happen before or after the log entry?
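As a sketch, increasing the heap usually means passing `-Xmx` (and optionally `-Xms`) when launching the application; the jar name and sizes below are placeholders, not taken from your setup:

```shell
# Hypothetical launch command: raise the max heap to 2 GiB and start at 512 MiB.
# Adjust the values to your workload; "my-streams-app.jar" is a placeholder.
java -Xms512m -Xmx2g -jar my-streams-app.jar
```

Note that "GC overhead limit exceeded" specifically means the JVM spent almost all of its time in garbage collection while reclaiming very little memory, so a larger heap often helps, but a state-store or cache leak would only be delayed, not fixed.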

Also, can you verify that your Kafka cluster is healthy?
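For a quick health check, the standard Kafka CLI tools can show whether brokers respond and whether any partitions are under-replicated; the bootstrap address below is a placeholder for one of your brokers:

```shell
# Check that a broker answers API requests (replace host:port with your broker).
kafka-broker-api-versions.sh --bootstrap-server broker1:9092

# List partitions that are currently under-replicated; healthy clusters print nothing.
kafka-topics.sh --bootstrap-server broker1:9092 \
  --describe --under-replicated-partitions
```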


-Matthias

On 12/28/18 4:30 PM, 徐华 wrote:
> Hi,
> 
> After my application has been running for a while, the following INFO
> message appears:
> 
> No checkpoint found for task 12_1 state store user-share-store-minute
> changelog flash-app-bch-user-share-store-minute-changelog-1 with EOS turned
> on. Reinitializing the task and restore its state from the beginning.
> 
> However, the application cannot re-initialize the task. The log shows
> the following (my Kafka cluster has 3 nodes):
> 
> Node 1 was unable to process the fetch request with (sessionId=1872699174,
> epoch=119): INVALID_FETCH_SESSION_EPOCH.
> Node 2 was unable to process the fetch request with (sessionId=2143576707,
> epoch=119): INVALID_FETCH_SESSION_EPOCH.
> Node 3 was unable to process the fetch request with (sessionId=2143576707,
> epoch=119): INVALID_FETCH_SESSION_EPOCH.
> 
> Group coordinator ap-1001-kafka-prod-ali-hk001:9092 (id: 2147483647 rack:
> null) is unavailable or invalid, will attempt rediscovery
> 
> So finally, even after waiting a long time, the stream cannot restore.
> 
> All streams have to shut down.
> 
> Sometimes there is also a GC error:
> 
> Error while processing:
> java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit
> exceeded
> 
