[jira] [Comment Edited] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-19 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692099#comment-16692099
 ] 

Eugen Feller edited comment on KAFKA-7634 at 11/19/18 6:40 PM:
---

Sure. The punctuate call looks like this:
{code:java}
override def punctuate(timestamp: Long): KeyValue[Key, Value3] = {
  val iterator: KeyValueIterator[Key, Value3] = myStore.all()

  while (iterator.hasNext) {
    val kv = iterator.next()
    val key = kv.key
    val value = kv.value
    // Some small time based updates for the value
    ctx.forward(key, value)
    myStore.delete(key)
  }

  ctx.commit()
  iterator.close()
  return null
}
{code}


was (Author: efeller):
Sure. The punctuate call looks like this:
{code:java}
override def punctuate(timestamp: Long): KeyValue[Key, Value3] = {
  val iterator: KeyValueIterator[Key, Value3] = myStore.all()

  while (iterator.hasNext) {
    val kv = iterator.next()
    val key = kv.key
    val value = kv.value
    // Some small time based updates for the value
    ctx.forward(key, value)
    myStore.delete(key)
  }

  ctx.commit()
  iterator.close()
  return null
}
{code}

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-19 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692099#comment-16692099
 ] 

Eugen Feller edited comment on KAFKA-7634 at 11/19/18 6:40 PM:
---

Sure. The punctuate call looks like this:
{code:java}
override def punctuate(timestamp: Long): KeyValue[Key, Value3] = {
  val iterator: KeyValueIterator[Key, Value3] = myStore.all()

  while (iterator.hasNext) {
    val kv = iterator.next()
    val key = kv.key
    val value = kv.value
    // Some small time based updates for the value
    ctx.forward(key, value)
    myStore.delete(key)
  }

  ctx.commit()
  iterator.close()
  return null
}
{code}


was (Author: efeller):
Sure. The punctuate call looks like this:
{code:java}
override def punctuate(timestamp: Long): KeyValue[Key, Value3] = {
  val iterator: KeyValueIterator[Key, Value3] = myStore.all()

  while (iterator.hasNext) {
    val kv = iterator.next()
    val key = kv.key
    val value = kv.value
    // Some small time based updates for the value
    ctx.forward(key, value)
    myStore.delete(key)
  }

  ctx.commit()
  iterator.close()
  return null
}
{code}

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-19 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690159#comment-16690159
 ] 

Eugen Feller edited comment on KAFKA-7634 at 11/19/18 6:39 PM:
---

Sure, let me see what I can do. The code looks like this:

 
{code:java}
val table =
  bldr
    .table[Key, Value1](
      keySerde,
      valueSerde1,
      topicA,
      stateStoreName
    )

val stream1 =
  bldr
    .stream[Key, Value2](
      keySerde,
      valueSerde2,
      topicB
    )
    .filterNot((k: Key, s: Value2) => s == null)

val enrichedStream = stream1
  .leftJoin[Value1, Value3](
    table,
    joiner,
    keySerde,
    valueSerde2
  )

val explodedStream =
  bldr
    .stream[Key, Value4](
      keySerde,
      valueSerde4,
      topicC
    )
    .flatMapValues[Value3]()

val mergedStream = bldr.merge[Key, Value3](enrichedStream, explodedStream)
mergedStream.transform[Key, Value3](transformer).to(keySerde, valueSerde3, outputTopic)
{code}
 


was (Author: efeller):
Sure, let me see what I can do. The code looks like this:

 
{code:java}
val table =
  bldr
    .table[Key, Value1](
      keySerde,
      valueSerde1,
      topicA,
      stateStoreName
    )

val stream1 =
  bldr
    .stream[Key, Value2](
      keySerde,
      valueSerde2,
      topicB
    )
    .filterNot((k: Key, s: Value2) => s == null)

val enrichedStream = stream1
  .leftJoin[Value1, Value3](
    table,
    joiner,
    keySerde,
    valueSerde2
  )

val explodedStream =
  bldr
    .stream[Mac, Value4](
      keySerde,
      valueSerde4,
      topicC
    )
    .flatMapValues[Value3]()

val mergedStream = bldr.merge[Key, Value3](enrichedStream, explodedStream)
mergedStream.transform[Key, Value3](transformer).to(keySerde, valueSerde3, outputTopic)
{code}
 

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-16 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690159#comment-16690159
 ] 

Eugen Feller commented on KAFKA-7634:
-

Sure, let me see what I can do. The code looks like this:

 
{code:java}
val table =
  bldr
    .table[Key, Value1](
      keySerde,
      valueSerde1,
      topicA,
      stateStoreName
    )

val stream1 =
  bldr
    .stream[Key, Value2](
      keySerde,
      valueSerde2,
      topicB
    )
    .filterNot((k: Key, s: Value2) => s == null)

val enrichedStream = stream1
  .leftJoin[Value1, Value3](
    table,
    joiner,
    keySerde,
    valueSerde2
  )

val explodedStream =
  bldr
    .stream[Mac, Value4](
      keySerde,
      valueSerde4,
      topicC
    )
    .flatMapValues[Value3]()

val mergedStream = bldr.merge[Key, Value3](enrichedStream, explodedStream)
mergedStream.transform[Key, Value3](transformer).to(keySerde, valueSerde3, outputTopic)
{code}
 

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-16 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690046#comment-16690046
 ] 

Eugen Feller commented on KAFKA-7634:
-

[~guozhang] In the meantime I found that it works if I materialize the merged 
stream via through() before calling transform().
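
A minimal sketch of that workaround (assuming the serdes and transformer from 
the topology posted above; intermediateTopic is a hypothetical, pre-created 
topic):
{code:java}
// Materialize the merged stream through an intermediate topic, then transform().
val mergedStream = bldr.merge[Key, Value3](enrichedStream, explodedStream)
mergedStream
  .through(keySerde, valueSerde3, intermediateTopic) // hypothetical topic name
  .transform[Key, Value3](transformer)
  .to(keySerde, valueSerde3, outputTopic)
{code}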

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-16 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690001#comment-16690001
 ] 

Eugen Feller commented on KAFKA-7634:
-

Hi [~guozhang]. Thanks a lot for your quick reply. I will give it a try with one 
service. We have a monorepo, so upgrading the library across a major version 
would require me to change a lot of services. I am currently using 
[https://github.com/manub/scalatest-embedded-kafka] to provide an embedded Kafka 
for some unit tests. For others, I am using 
[https://github.com/jpzk/mockedstreams], which is a wrapper around 
TopologyTestDriver. I will try to use the TopologyTestDriver directly to 
reproduce.
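
A minimal sketch of such a direct TopologyTestDriver test (1.1.x test-utils API; 
the application id and bootstrap servers are illustrative placeholders):
{code:java}
import java.util.Properties
import org.apache.kafka.streams.{StreamsBuilder, StreamsConfig, TopologyTestDriver}

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "punctuate-repro") // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:9092")   // placeholder

val builder = new StreamsBuilder() // the topology under test would be built here
val driver = new TopologyTestDriver(builder.build(), props)
driver.advanceWallClockTime(60 * 1000L) // fires wall-clock punctuators, if any
driver.close()
{code}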

> Punctuate not being called with merge() and/or outerJoin()
> --
>
> Key: KAFKA-7634
> URL: https://issues.apache.org/jira/browse/KAFKA-7634
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Eugen Feller
>Priority: Major
>
> Hi all,
> I am using the Processor API and having trouble getting Kafka Streams 
> v0.11.0.3 to call the punctuate() function after a merge() and/or outerJoin(). 
> Specifically, I have a topology that does flatMapValues() -> 
> merge() and/or outerJoin() -> transform(). If I don't call merge() and/or 
> outerJoin() before transform(), punctuate is called as expected.
> Thank you very much in advance for your help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7634) Punctuate not being called with merge() and/or outerJoin()

2018-11-15 Thread Eugen Feller (JIRA)
Eugen Feller created KAFKA-7634:
---

 Summary: Punctuate not being called with merge() and/or outerJoin()
 Key: KAFKA-7634
 URL: https://issues.apache.org/jira/browse/KAFKA-7634
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.3
Reporter: Eugen Feller


Hi all,

I am using the Processor API and having trouble getting Kafka Streams v0.11.0.3 
to call the punctuate() function after a merge() and/or outerJoin(). 
Specifically, I have a topology that does flatMapValues() -> merge() and/or 
outerJoin() -> transform(). If I don't call merge() and/or outerJoin() before 
transform(), punctuate is called as expected.

Thank you very much in advance for your help.
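
For context, a minimal sketch of the punctuation setup assumed above (0.11.x 
Processor API; ctx is the context field used in the snippets, and the 60-second 
interval is illustrative):
{code:java}
// punctuate() only fires if a punctuation interval is scheduled in init().
override def init(context: ProcessorContext): Unit = {
  ctx = context
  ctx.schedule(60 * 1000L) // stream-time punctuation every 60 seconds
}
{code}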



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7430) Improve Transformer interface JavaDoc

2018-09-21 Thread Eugen Feller (JIRA)
Eugen Feller created KAFKA-7430:
---

 Summary: Improve Transformer interface JavaDoc
 Key: KAFKA-7430
 URL: https://issues.apache.org/jira/browse/KAFKA-7430
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.0.0, 1.1.1, 0.11.0.3, 0.10.2.2
Reporter: Eugen Feller


Currently the Transformer JavaDoc mentions that it is possible both to use 
ctx.forward() and to return a KeyValue. It would be great if we could mention 
that returning a KeyValue is merely a convenience. In other words, 
everything can be achieved using ctx.forward():

"return new KeyValue()" and "ctx.forward(); return null;" are equivalent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7405) Support for graceful handling of corrupted records in Kafka consumer

2018-09-12 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612623#comment-16612623
 ] 

Eugen Feller commented on KAFKA-7405:
-

[~guozhang] [~mjsax]

> Support for graceful handling of corrupted records in Kafka consumer
> 
>
> Key: KAFKA-7405
> URL: https://issues.apache.org/jira/browse/KAFKA-7405
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, streams
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.1.1
>Reporter: Eugen Feller
>Priority: Major
>
> We have run into issues several times where corrupted records cause the Kafka 
> consumer to throw an error code 2 exception (CRC checksum failure) in the 
> fetch layer. Specifically, when using Kafka streams we run into KAFKA-6977 
> that throws an IllegalStateException and crashes the service. It would be 
> great if the Kafka consumer could be extended with a setting similar to 
> [KIP-161|https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers],
>  that would allow one to gracefully ignore corrupted records.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7405) Support for graceful handling of corrupted records in Kafka consumer

2018-09-12 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-7405:

Description: 
We have run into issues several times where corrupted records cause the Kafka 
consumer to throw an error code 2 exception (CRC checksum failure) in the fetch 
layer. Specifically, when using Kafka streams we run into KAFKA-6977 that 
throws an IllegalStateException and crashes the service. It would be great if 
the Kafka consumer could be extended with a setting similar to 
[KIP-161|https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers],
 that would allow one to gracefully ignore corrupted records.

 

  was:
We have run into issues several times where corrupted records cause the Kafka 
consumer to throw an error code 2 exception in the fetch layer that cannot be 
handled gracefully. Specifically, when using Kafka Streams we run into 
KAFKA-6977 that throws an IllegalStateException. It would be great if the Kafka 
consumer could be extended with a setting similar to 
[KIP-161|https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers],
 that would allow one to cleanly ignore corrupted records.

 


> Support for graceful handling of corrupted records in Kafka consumer
> 
>
> Key: KAFKA-7405
> URL: https://issues.apache.org/jira/browse/KAFKA-7405
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, streams
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.1.1
>Reporter: Eugen Feller
>Priority: Major
>
> We have run into issues several times where corrupted records cause the Kafka 
> consumer to throw an error code 2 exception (CRC checksum failure) in the 
> fetch layer. Specifically, when using Kafka streams we run into KAFKA-6977 
> that throws an IllegalStateException and crashes the service. It would be 
> great if the Kafka consumer could be extended with a setting similar to 
> [KIP-161|https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers],
>  that would allow one to gracefully ignore corrupted records.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7405) Support for graceful handling of corrupted records in Kafka consumer

2018-09-12 Thread Eugen Feller (JIRA)
Eugen Feller created KAFKA-7405:
---

 Summary: Support for graceful handling of corrupted records in 
Kafka consumer
 Key: KAFKA-7405
 URL: https://issues.apache.org/jira/browse/KAFKA-7405
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, streams
Affects Versions: 1.1.1, 0.11.0.3, 0.10.2.2
Reporter: Eugen Feller


We have run into issues several times where corrupted records cause the Kafka 
consumer to throw an error code 2 exception in the fetch layer that cannot be 
handled gracefully. Specifically, when using Kafka Streams we run into 
KAFKA-6977 that throws an IllegalStateException. It would be great if the Kafka 
consumer could be extended with a setting similar to 
[KIP-161|https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers],
 that would allow one to cleanly ignore corrupted records.
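
For reference, a sketch of the KIP-161 style Streams setting that this issue 
asks for a consumer-level analogue of (Streams 1.0+ API):
{code:java}
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler

val props = new Properties()
// Skip records that fail deserialization instead of crashing the application.
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
  classOf[LogAndContinueExceptionHandler].getName)
{code}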

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-21 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519546#comment-16519546
 ] 

Eugen Feller commented on KAFKA-6977:
-

[~guozhang] Cool. Makes sense. In my case, I wanted to avoid changing the 
consumer group, as it would mean data loss. In that case, having something like 
[https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams#default-deserialization-exception-handler]
 for the consumer would be great.

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams 

[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-18 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516472#comment-16516472
 ] 

Eugen Feller commented on KAFKA-6977:
-

[~guozhang] Thank you very much. Can it be that the app did not manage to 
commit that offset and kept running into the same CRC error? I always thought 
that AUTO_OFFSET_RESET_CONFIG kicks in only if no offsets were previously 
committed for a particular consumer group/app name.

It would be great if we could gracefully handle CRC checksum errors as part of 
the deserialization exception handler.
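
For context, the reset setting in question (a sketch; props is assumed to be 
the consumer/Streams configuration):
{code:java}
// auto.offset.reset only takes effect when no committed offset exists for the
// group (or the committed offset is out of range); otherwise the committed
// offset wins.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
{code}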

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> 

[jira] [Comment Edited] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-14 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510295#comment-16510295
 ] 

Eugen Feller edited comment on KAFKA-6977 at 6/14/18 5:38 PM:
--

Quick update. I have tried with the 1.1.0 client and ran into the same issue. I 
believe in the end two things helped make the job stable:

1) Resetting offsets to latest. That helped with the CRC checksum issues.

2) To help with continuous rebalancing, I adjusted max.poll.interval.ms and 
other settings to:
{code:java}
requestTimeout: Duration = 40 seconds,
maxPollInterval: Duration = Duration(Integer.MAX_VALUE, TimeUnit.MILLISECONDS),
maxPollRecords: Long = 500,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 30 seconds,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 1,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L
{code}
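
For reference, a sketch of how settings like these would map onto the standard 
Streams/consumer property names (assuming the wrapper above forwards them 
directly):
{code:java}
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
// Effectively disable the poll-interval based rebalance trigger.
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
  Integer.MAX_VALUE.toString)
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, "1")
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
  (128 * 1024 * 1024).toString)
{code}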


was (Author: efeller):
Quick update. I have tried with the 1.1.0 client and ran into the same issue. I 
believe in the end two things helped make the job stable:

1) Resetting offsets to latest. That helped with the CRC checksum issues.

2) Adjusting max.poll.interval.ms and other settings to:
{code:java}
requestTimeout: Duration = 40 seconds,
maxPollInterval: Duration = Duration(Integer.MAX_VALUE, TimeUnit.MILLISECONDS),
maxPollRecords: Long = 500,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 30 seconds,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 1,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L
{code}

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> 

[jira] [Comment Edited] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-14 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510295#comment-16510295
 ] 

Eugen Feller edited comment on KAFKA-6977 at 6/14/18 5:37 PM:
--

Quick update. I have tried with the 1.1.0 client and ran into the same issue. I 
believe in the end two things helped make the job stable:

1) Resetting offsets to latest. That helped with the CRC checksum issues.

2) Adjusting max.poll.interval.ms and other settings to:
{code:java}
requestTimeout: Duration = 40 seconds,
maxPollInterval: Duration = Duration(Integer.MAX_VALUE, TimeUnit.MILLISECONDS),
maxPollRecords: Long = 500,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 30 seconds,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 1,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L
{code}


was (Author: efeller):
Quick update. I have tried with the 1.1.0 client and ran into the same issue. I 
believe in the end two things helped make the job stable:

1) Resetting offsets to latest. That helped with the CRC checksum issues.

2) Adjusting max.poll.interval.ms and other settings to:
{code:java}
requestTimeout: Duration = 40 seconds,
maxPollInterval: Duration = Duration(Integer.MAX_VALUE, TimeUnit.MILLISECONDS),
maxPollRecords: Long = 500,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 30 seconds,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L
{code}

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> 

[jira] [Comment Edited] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-14 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510295#comment-16510295
 ] 

Eugen Feller edited comment on KAFKA-6977 at 6/14/18 5:18 PM:
--

Quick update. I have tried with the 1.1.0 client and ran into the same issue. I 
believe in the end two things helped make the job stable:

1) Resetting offsets to latest. That helped with the CRC checksum issues.

2) Adjusting max.poll.interval.ms and other settings to:
{code:java}
requestTimeout: Duration = 40 seconds,
maxPollInterval: Duration = Duration(Integer.MAX_VALUE, TimeUnit.MILLISECONDS),
maxPollRecords: Long = 500,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 30 seconds,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L
{code}


was (Author: efeller):
Quick update. I have tried with the 1.1.0 client and hit the same issue.

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO 

[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-14 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
Shutting down
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
State transition from RUNNING to PENDING_SHUTDOWN.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
producer with timeoutMillis = 9223372036854775807 ms.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
Stream thread shutdown complete
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
State transition from PENDING_SHUTDOWN to DEAD.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] State transition 
from RUNNING to ERROR.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
WARN org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] All stream 
threads have died. The Kafka Streams instance will be in an error state and 
should be closed.
6062195 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
FATAL com.zenreach.data.flows.visitstatsmongoexporter.MongoVisitStatsWriter$ - 
Exiting main on uncaught exception
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at 

[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-12 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510295#comment-16510295
 ] 

Eugen Feller commented on KAFKA-6977:
-

Quick update. I have tried with the 1.1.0 client and hit the same issue.

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] State 
> transition from RUNNING to ERROR.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> WARN 

[jira] [Comment Edited] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2018-06-12 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510050#comment-16510050
 ] 

Eugen Feller edited comment on KAFKA-5630 at 6/12/18 7:10 PM:
--

[~becket_qin] [~hachikuji] Looks like we are running into a similar issue 
using a 0.10.2.1 broker and Kafka Streams client 0.11.0.1. I wonder if this fix 
helps only if the broker is also on 0.11.0.1? This is my related JIRA 
(https://issues.apache.org/jira/browse/KAFKA-6977).

Thanks.


was (Author: efeller):
[~becket_qin] Looks like we are running into a similar issue using a 0.10.2.1 
broker and Kafka Streams client 0.11.0.1. I wonder if this fix helps only if 
the broker is also on 0.11.0.1? This is my related JIRA 
(https://issues.apache.org/jira/browse/KAFKA-6977).

Thanks.

> Consumer poll loop over the same record after a CorruptRecordException
> --
>
> Key: KAFKA-5630
> URL: https://issues.apache.org/jira/browse/KAFKA-5630
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Vincent Maurin
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: regression, reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop of the same record, i.e., each call to poll returns the 
> same record 500 times (500 is my max.poll.records). I am using the Java 
> client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get a `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582 but this behavior of the 
> client is probably not the expected one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2018-06-12 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510050#comment-16510050
 ] 

Eugen Feller commented on KAFKA-5630:
-

Looks like we are running into a similar issue using a 0.10.2.1 broker and 
Kafka Streams client 0.11.0.1. I wonder if this fix helps only if the broker is 
also on 0.11.0.1? This is my related JIRA 
(https://issues.apache.org/jira/browse/KAFKA-6977).

Thanks.

> Consumer poll loop over the same record after a CorruptRecordException
> --
>
> Key: KAFKA-5630
> URL: https://issues.apache.org/jira/browse/KAFKA-5630
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Vincent Maurin
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: regression, reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop of the same record, i.e., each call to poll returns the 
> same record 500 times (500 is my max.poll.records). I am using the Java 
> client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get a `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582 but this behavior of the 
> client is probably not the expected one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2018-06-12 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510050#comment-16510050
 ] 

Eugen Feller edited comment on KAFKA-5630 at 6/12/18 7:10 PM:
--

[~becket_qin] Looks like we are running into a similar issue using a 0.10.2.1 
broker and Kafka Streams client 0.11.0.1. I wonder if this fix helps only if 
the broker is also on 0.11.0.1? This is my related JIRA 
(https://issues.apache.org/jira/browse/KAFKA-6977).

Thanks.


was (Author: efeller):
Looks like we are running into a similar issue using a 0.10.2.1 broker and 
Kafka Streams client 0.11.0.1. I wonder if this fix helps only if the broker is 
also on 0.11.0.1? This is my related JIRA 
(https://issues.apache.org/jira/browse/KAFKA-6977).

Thanks.

> Consumer poll loop over the same record after a CorruptRecordException
> --
>
> Key: KAFKA-5630
> URL: https://issues.apache.org/jira/browse/KAFKA-5630
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Vincent Maurin
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: regression, reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop over the same record, i.e., each call to poll() returns the 
> same record 500 times (500 is my max.poll.records). I am using the Java 
> client 0.11.0.0.
> Running the code in the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get an `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`.
> The boolean `hasExceptionInLastFetch` is then set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582, but this client behavior is 
> probably not the expected one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508902#comment-16508902
 ] 

Eugen Feller commented on KAFKA-6977:
-

I think it was the 0.11.0.1 consumer, as I used the kafka-console-consumer CLI 
(from Kafka 0.11.0.1) for the test. The consumer was running on my local 
machine via VPN. I will give it a try on ECS. I think that topic is being 
consumed by at least one downstream service on ECS.
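
For completeness, a sketch of that kind of console-consumer check (bootstrap server and topic name are illustrative placeholders, not the actual command used):
{code}
bin/kafka-console-consumer.sh --bootstrap-server broker:9092 \
  --topic our_stats_topic_0 --from-beginning
{code}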

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> 

[jira] [Commented] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508843#comment-16508843
 ] 

Eugen Feller commented on KAFKA-6990:
-

Thanks a lot. For the sake of completeness: logs sent via Slack. :)

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
> metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
>  metadata=''}, 
> sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, metadata=''}} 
> due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_2 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_3 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_4 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
> offset commits 
> 

[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508661#comment-16508661
 ] 

Eugen Feller commented on KAFKA-6977:
-

Hi [~mjsax]. Interesting. Thanks a lot. I think the topic itself is likely 
consumed by multiple downstream consumers. However, only this job actually 
consumes it for the purpose of writing out to a database. I just tested 
consuming the topic with a plain KafkaConsumer and it works. I also went over 
all the stack traces seen in that context and found the following:
{code:java}
KafkaException: Record for partition our_stats_topic_0-17 at offset 217641273
is invalid, cause: Record is corrupt (stored crc = 3302026163, computed crc =
3432237873)
  at maybeEnsureValid (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1002)
  at nextFetchedRecord (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1059)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1090)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)

KafkaException: Received exception when fetching the next record from
our_stats_topic_0-17. If needed, please seek past the record to continue
consumption.
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1110)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)
{code}
In this particular instance, I think the following stack trace was seen at the 
same time as error code 2:

 
{code:java}
CorruptRecordException: Record size is less than the minimum record overhead
(14)
KafkaException: Received exception when fetching the next record from
our_stats_topic_0-16. If needed, please seek past the record to continue
consumption.
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1076)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)
{code}
I am wondering what could have led to error code 2 and to the above 
CorruptRecordException, and what would be the best way to mitigate them? Do the 
records get corrupted on the wire somehow?
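
One way to narrow down whether the corruption exists at rest (as opposed to being introduced on the wire) is to dump the affected segment on the broker and let it re-verify the record CRCs. A minimal sketch with the stock DumpLogSegments tool; the log directory and segment file name are illustrative placeholders around the failing offset 217641273:
{code}
# Run on the broker hosting our_stats_topic_0-17 (paths are placeholders):
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/our_stats_topic_0-17/00000000000217000000.log \
  --deep-iteration --print-data-log
{code}
If the dump reports a bad CRC, the record is corrupt on disk; if it verifies cleanly there, the corruption is more likely happening in transit or in the client.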

 

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO 

[jira] [Comment Edited] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508661#comment-16508661
 ] 

Eugen Feller edited comment on KAFKA-6977 at 6/11/18 8:13 PM:
--

Hi [~mjsax]. Interesting. Thanks a lot. I think the topic itself is likely 
consumed by multiple downstream consumers. However, only this job actually 
consumes it for the purpose of writing out to a database. I just tested 
consuming the topic with a plain KafkaConsumer and it works. I also went over 
all the stack traces seen in that context and found the following:
{code:java}
KafkaException: Record for partition our_stats_topic_0-17 at offset 217641273
is invalid, cause: Record is corrupt (stored crc = 3302026163, computed crc =
3432237873)
  at maybeEnsureValid (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1002)
  at nextFetchedRecord (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1059)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1090)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)

KafkaException: Received exception when fetching the next record from
our_stats_topic_0-17. If needed, please seek past the record to continue
consumption.
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1110)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)
{code}
In this particular instance, I think the following stack trace was seen at the 
same time as error code 2:
{code:java}
CorruptRecordException: Record size is less than the minimum record overhead
(14)
KafkaException: Received exception when fetching the next record from
our_stats_topic_0-16. If needed, please seek past the record to continue
consumption.
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1076)
  at access$1200 (org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:944)
  at fetchRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:567)
  at fetchedRecords (org.apache.kafka.clients.consumer.internals.Fetcher.java:528)
  at pollOnce (org.apache.kafka.clients.consumer.KafkaConsumer.java:1086)
  at poll (org.apache.kafka.clients.consumer.KafkaConsumer.java:1043)
  at pollRequests (org.apache.kafka.streams.processor.internals.StreamThread.java:536)
  at runOnce (org.apache.kafka.streams.processor.internals.StreamThread.java:490)
  at runLoop (org.apache.kafka.streams.processor.internals.StreamThread.java:480)
  at run (org.apache.kafka.streams.processor.internals.StreamThread.java:457)
{code}
I am wondering what could have led to error code 2 and to the above 
CorruptRecordException, and what would be the best way to mitigate them? Do the 
records get corrupted on the wire somehow?

 


was (Author: efeller):
Hi [~mjsax]. Interesting. Thanks a lot. I think the topic itself is likely 
consumed by multiple downstream consumers. However, only this job actually 
consumes it for the purposes of writing out to a database. Just tested 
consuming the topic with plain KafkaConsumer and it works. I also went over all 
the stack traces seen in that context and found the following:
{code:java}
KafkaException: Record for partition our_stats_topic_0-17 at offset 217641273 
is invalid, cause: Record is corrupt (stored crc = 3302026163, computed crc = 
3432237873)
1
at maybeEnsureValid 
(org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.java:1002)
2
at nextFetchedRecord 

[jira] [Comment Edited] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller edited comment on KAFKA-6990 at 6/11/18 7:11 PM:
--

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 50,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
job implements the following logic:
{code:java}
stream().foreach(record => {
// Buffered write this record to MongoDB 
// Await.result(record, 5 seconds)
}){code}
Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.
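
For reference, a minimal sketch of how these values map onto the consumer settings that govern group membership in Streams (application id and bootstrap servers are placeholders); since KIP-62 (0.10.1+), heartbeat liveness and poll() liveness are enforced independently:
{code:java}
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "visits")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")
// The background heartbeat thread keeps the member alive with respect to
// session.timeout.ms, regardless of what the processing thread is doing.
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), "30000")
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG), "5000")
// But the processing thread must still call poll() at least once per
// max.poll.interval.ms, or the member is removed from the group and the
// next offset commit fails with CommitFailedException.
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
  (20 * 60 * 1000).toString)
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), "50")
{code}
So blocking inside foreach does not starve heartbeats, but it does delay the next poll(); only exceeding max.poll.interval.ms between two polls (or a genuine session timeout) should trigger the rebalance behind these CommitFailedExceptions.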

 


was (Author: efeller):
Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 50,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
job implements the following logic:
{code:java}
stream().foreach(record => {
// Buffered write this record to MongoDB 
// Await.result(record, 5 seconds)
}){code}
Can it be that writes to Mongo take too long and that leads to problems? I was 
under the impression that by setting heartbeat interval to 5 seconds in Kafka 
0.11.0.1, we should have two threads, one that does heartbeat sending (to 
considered alive) and one that actually calls poll(). In that case blocking too 
long inside foreach should not kick us out of the consumer group?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,

[jira] [Comment Edited] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller edited comment on KAFKA-6990 at 6/11/18 7:10 PM:
--

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 50,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
job implements the following logic:
{code:java}
stream().foreach(record => {
// Buffered write this record to MongoDB 
// Await.result(record, 5 seconds)
}){code}
Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.

 


was (Author: efeller):
Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 50,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
job implements the following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
Can it be that writes to Mongo take too long and that leads to problems? I was 
under the impression that by setting heartbeat interval to 5 seconds in Kafka 
0.11.0.1, we should have two threads, one that does heartbeat sending (to 
considered alive) and one that actually calls poll(). In that case blocking too 
long inside foreach should not kick us out of the consumer group?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, 

[jira] [Comment Edited] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller edited comment on KAFKA-6990 at 6/11/18 7:09 PM:
--

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 50,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
job implements the following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.

 


was (Author: efeller):
Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? For 
this particular job max poll records is set to 50. This job implements the 
following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
Can it be that writes to Mongo take too long and that leads to problems? I was 
under the impression that by setting heartbeat interval to 5 seconds in Kafka 
0.11.0.1, we should have two threads, one that does heartbeat sending (to 
considered alive) and one that actually calls poll(). In that case blocking too 
long inside foreach should not kick us out of the consumer group?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  

[jira] [Comment Edited] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller edited comment on KAFKA-6990 at 6/11/18 7:08 PM:
--

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase max.poll.interval.ms (currently 20 minutes)? For 
this particular job, max.poll.records is set to 50. The job implements the 
following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.

 


was (Author: efeller):
Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? For 
this particular job max poll records is set to 50. This job implements the 
following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
 

Can it be that writes to Mongo take too long and that leads to problems? I was 
under the impression that by setting heartbeat interval to 5 seconds in Kafka 
0.11.0.1, we should have two threads, one that does heartbeat sending (to 
considered alive) and one that actually calls poll(). In that case blocking too 
long inside foreach should not kick us out of the consumer group?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> 

[jira] [Comment Edited] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller edited comment on KAFKA-6990 at 6/11/18 7:08 PM:
--

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase max.poll.interval.ms (currently 20 minutes)? For 
this particular job, max.poll.records is set to 50. The job implements the 
following logic:
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
 

Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.

 


was (Author: efeller):
Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase the max.poll.interval.ms (currently 20 minutes)? This 
particular job does the following logic:

 
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
 

Can it be that writes to Mongo take too long and that leads to problems? I was 
under the impression that by setting heartbeat interval to 5 seconds in Kafka 
0.11.0.1, we should have two threads, one that does heartbeat sending (to 
considered alive) and one that actually calls poll(). In that case blocking too 
long inside foreach should not kick us out of the consumer group?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> 

[jira] [Commented] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-11 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508566#comment-16508566
 ] 

Eugen Feller commented on KAFKA-6990:
-

Hi [~mjsax]. Thank you very much for your inputs. That makes total sense to me. 
My current defaults are as follows:
{code:java}
requestTimeout: Duration = 30 minutes,
maxPollInterval: Duration = 20 minutes,
maxPollRecords: Long = 1000,
fetchMaxBytes: Long = ConsumerConfig.DEFAULT_FETCH_MAX_BYTES,
fetchMaxWait: Duration = 10 minutes,
sessionTimeout: Duration = 30 seconds,
heartbeatInterval: Duration = 5 seconds,
stateDirCleanupDelay: Long = Long.MaxValue,
numStreamThreads: Long = 1,
numStandbyReplicas: Long = 0,
cacheMaxBytesBuffering: Long = 128 * 1024L * 1024L{code}
Should I further increase max.poll.interval.ms (currently 20 minutes)? This 
particular job implements the following logic:

 
{code:java}
stream().foreach(record => {
// Buffered write to MongoDB 
// Await.result(5 seconds)
}){code}
 

Can it be that writes to Mongo take too long and that this leads to problems? 
I was under the impression that by setting the heartbeat interval to 5 seconds 
in Kafka 0.11.0.1, we should have two threads: one that sends heartbeats (so 
the consumer is considered alive) and one that actually calls poll(). In that 
case, blocking too long inside foreach should not kick us out of the consumer 
group, should it?

I will try to collect DEBUG level logs today.

 

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
> metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
>  metadata=''}, 
> sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, metadata=''}} 
> due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_2 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> 

[jira] [Commented] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-04 Thread Eugen Feller (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501100#comment-16501100
 ] 

Eugen Feller commented on KAFKA-6990:
-

I have also noticed a lot of messages like this in that job:
{code:java}
00:56:28.894 [visits-e80b42d2-6c8c-4228-aaca-f7b15fdd410b-StreamThread-1] TRACE 
o.a.k.c.consumer.internals.Fetcher - Skipping fetch for partition 
visits_enrichedsighting_byclientmac_0-7 because there is an in-flight request 
to 10.16.210.152:9092 (id: 3 rack: us-west-2c)
00:56:28.894 [visits-e80b42d2-6c8c-4228-aaca-f7b15fdd410b-StreamThread-1] TRACE 
o.a.k.c.consumer.internals.Fetcher - Skipping fetch for partition 
visits_enrichedsighting_byclientmac_0-9 because there is an in-flight request 
to 10.16.208.24:9092 (id: 9 rack: us-west-2a)
{code}

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
> metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
>  metadata=''}, 
> sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, metadata=''}} 
> due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_2 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_3 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
>  metadata=''}} due to CommitFailedException
> 

[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-04 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Affects Version/s: (was: 0.10.2.1)
   0.11.0.1

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] State 
> transition from RUNNING to ERROR.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> WARN org.apache.kafka.streams.KafkaStreams - stream-client 
> 

[jira] [Updated] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-04 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6990:

Affects Version/s: 0.11.0.1 (was: 0.10.2.1)

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka Streams 
> apps, running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
> metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
>  metadata=''}, 
> sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, metadata=''}} 
> due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_2 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_3 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_4 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-5=OffsetAndMetadata{offset=26,
>  

[jira] [Updated] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-04 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6990:

Description: 
We are seeing a lot of CommitFailedExceptions on one of our Kafka Streams apps, 
running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error message:
{code:java}
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
org.apache.kafka.streams.KafkaStreams - stream-client 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from REBALANCING 
to RUNNING.
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
offset commits {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_0 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_1 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
 metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
 metadata=''}, sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_2 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
offset commits {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_3 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
offset commits {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_4 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-5=OffsetAndMetadata{offset=26,
 metadata=''}, sightings_sighting_byclientmac_0-5=OffsetAndMetadata{offset=16, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_5 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_6] Failed 
offset commits 
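
These warnings surface from the consumer's commit path, and the same failure 
mode can be reproduced with a plain KafkaConsumer: commitSync() throws 
CommitFailedException once the group has rebalanced a partition away 
mid-batch. A sketch with placeholder broker, group, and topic names:
{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitFailureDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder
        props.put("group.id", "visits");               // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("sightings")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // if this loop outlives max.poll.interval.ms ...
                }
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // ... the group rebalances, the partition moves to another
                    // member, and this commit is rejected -- the same condition
                    // the StreamTask warnings above report.
                    break;
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Per-record work elided.
    }
}
{code}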

[jira] [Updated] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-04 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6990:

Description: 
We are seeing a lot of CommitFailedExceptions on one of our Kafka Streams apps, 
running Kafka Streams 0.10.2.1 and Kafka Broker 0.10.2.1. Full error message:
{code:java}
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
org.apache.kafka.streams.KafkaStreams - stream-client 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from REBALANCING 
to RUNNING.
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
offset commits {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_0 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_1 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
 metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
 metadata=''}, sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_2 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
offset commits {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_3 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
offset commits {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_4 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-5=OffsetAndMetadata{offset=26,
 metadata=''}, sightings_sighting_byclientmac_0-5=OffsetAndMetadata{offset=16, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_5 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_6] Failed 
offset commits 

[jira] [Created] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-06-04 Thread Eugen Feller (JIRA)
Eugen Feller created KAFKA-6990:
---

 Summary: CommitFailedException; this task may be no longer owned 
by the thread
 Key: KAFKA-6990
 URL: https://issues.apache.org/jira/browse/KAFKA-6990
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.1
Reporter: Eugen Feller


We are seeing a lot of CommitFailedExceptions on one of our Kafka clusters, 
running Kafka Streams 0.10.2.1 and Kafka Broker 0.10.2.1. Full error message:
{code:java}
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
org.apache.kafka.streams.KafkaStreams - stream-client 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from REBALANCING 
to RUNNING.
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
offset commits {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_0 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_1 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
 metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
 metadata=''}, sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_2 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
offset commits {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_3 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
offset commits {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, 
metadata=''}, 
mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
 metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_4 during commit state due to CommitFailedException; this task may 
be no longer owned by the thread
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
offset commits 
{mactocontactmappings_contactmapping_byclientmac_0-5=OffsetAndMetadata{offset=26,
 metadata=''}, sightings_sighting_byclientmac_0-5=OffsetAndMetadata{offset=16, 
metadata=''}} due to CommitFailedException
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
[visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
stream task 0_5 during commit state due to CommitFailedException; this task may 
be no longer owned by the 

[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
Shutting down
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
State transition from RUNNING to PENDING_SHUTDOWN.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
producer with timeoutMillis = 9223372036854775807 ms.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
Stream thread shutdown complete
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
State transition from PENDING_SHUTDOWN to DEAD.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] State transition 
from RUNNING to ERROR.
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
WARN org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] All stream 
threads have died. The Kafka Streams instance will be in an error state and 
should be closed.
6062195 
[visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
FATAL com.zenreach.data.flows.visitstatsmongoexporter.MongoVisitStatsWriter$ - 
Exiting main on uncaught exception
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at 
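
The FATAL "Exiting main on uncaught exception" line above suggests the app 
shuts itself down when a stream thread dies. KafkaStreams exposes a hook for 
exactly that; a sketch in which the topology, config values, and exit policy 
are assumptions, not the reporter's code:
{code:java}
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class FailFastStreams {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "visitstats-mongowriter"); // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");          // illustrative

        KStreamBuilder builder = new KStreamBuilder(); // 0.11-era DSL entry point
        builder.stream("visit-stats").foreach((k, v) -> { /* sink elided */ });

        KafkaStreams streams = new KafkaStreams(builder, props);
        // Without a handler, a dead StreamThread only flips the client to the
        // ERROR state; registering one lets the process fail fast and be
        // restarted by a supervisor.
        streams.setUncaughtExceptionHandler((thread, ex) -> {
            System.err.println("Stream thread " + thread.getName() + " died: " + ex);
            System.exit(1);
        });
        streams.start();
    }
}
{code}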

[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Labels: streams  (was: Streaming)

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly 
> run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> {code}
> Before hitting that exception, our Kafka Streams job keeps rebalancing over 
> and over. This is a simple job that reads data from Kafka and writes it to 
> MongoDB. Any idea what could be going wrong? 
> Thanks a lot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
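
For reference, the rough shape of such a read-from-Kafka, write-to-MongoDB job 
on the 0.11-era DSL; a hypothetical sketch, since the thread does not include 
the reporter's topology (topic name and the Mongo call are placeholders):
{code:java}
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class VisitStatsMongoWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "visitstats-mongowriter"); // from the log prefix
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");          // placeholder

        KStreamBuilder builder = new KStreamBuilder();
        KStream<byte[], byte[]> visits = builder.stream("visit-stats");            // placeholder topic
        // Side-effecting sink: offsets are committed after the write returns,
        // giving at-least-once delivery into MongoDB.
        visits.foreach(VisitStatsMongoWriter::writeToMongo);

        new KafkaStreams(builder, props).start();
    }

    private static void writeToMongo(byte[] key, byte[] value) {
        // Mongo upsert elided.
    }
}
{code}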


[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Summary:  Kafka Streams - java.lang.IllegalStateException: Unexpected error 
code 2 while fetching data  (was:  Kafka Streams - Unexpected error code 2 
while fetching data)

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly 
> run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> {code}
> Before hitting that exception, our Kafka Streams job keeps rebalancing over 
> and over. This is a simple job that reads data from Kafka and writes it to 
> MongoDB. Any idea what could be going wrong? 
> Thanks a lot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
Looks like the error is thrown here: 
https://github.com/apache/kafka/blob/0.11.0.1/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L886
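
Protocol error code 2 maps to CORRUPT_MESSAGE in the client's error table (a 
record batch failed its CRC check on fetch), a code that the Fetcher's error 
handling evidently does not handle explicitly, hence the bare 
IllegalStateException. A quick way to decode such codes with the public client 
API (sketch):
{code:java}
import org.apache.kafka.common.protocol.Errors;

public class ErrorCodeLookup {
    public static void main(String[] args) {
        // Decode a raw Kafka protocol error code.
        Errors e = Errors.forCode((short) 2);
        System.out.println(e.name());      // CORRUPT_MESSAGE
        System.out.println(e.exception()); // org.apache.kafka.common.errors.CorruptRecordException
    }
}
{code}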

Before hitting that exception, our Kafka Streams job keeps rebalancing over 
and over. This is a simple job that reads data from Kafka and writes it to 
MongoDB. Any idea what could be going wrong? 

Thanks a lot.

  was:
We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 

[jira] [Updated] (KAFKA-6977) Kafka Streams - Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
Before hitting that exception, our Kafka Streams job keeps rebalancing over 
and over. This is a simple job that reads data from Kafka and writes it to 
MongoDB. Any idea what could be going wrong? 

Thanks a lot.

  was:
We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at 

[jira] [Updated] (KAFKA-6977) Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
Before hitting that exception, our Kafka Streams job keeps rebalancing over 
and over. This is a simple job that reads data from Kafka and writes it to 
MongoDB.

  was:
We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception:

 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: 

[jira] [Created] (KAFKA-6977) Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)
Eugen Feller created KAFKA-6977:
---

 Summary:  Unexpected error code 2 while fetching data
 Key: KAFKA-6977
 URL: https://issues.apache.org/jira/browse/KAFKA-6977
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.1
Reporter: Eugen Feller


We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly run 
into the following exception:

 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
Looks like the error is thrown here: 
[https://github.com/apache/kafka/blob/0.11.0.1/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L886]
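
For anyone skimming the sources, a condensed paraphrase of that error-code 
dispatch (not verbatim, and branch details vary by version; the point is that 
CORRUPT_MESSAGE, code 2, has no branch of its own and falls into the 
catch-all throw):
{code:java}
import org.apache.kafka.common.protocol.Errors;

public class FetchErrorDispatch {
    // Paraphrase of the error handling in Fetcher.parseCompletedFetch
    // (0.11.0.1); branch bodies elided.
    static void handleFetchError(short errorCode) {
        Errors error = Errors.forCode(errorCode);
        if (error == Errors.NONE) {
            // parse the fetched record batches
        } else if (error == Errors.OFFSET_OUT_OF_RANGE) {
            // reset the offset or raise OffsetOutOfRangeException
        } else if (error == Errors.NOT_LEADER_FOR_PARTITION
                || error == Errors.UNKNOWN_TOPIC_OR_PARTITION) {
            // request a metadata update and retry on the next poll
        } else {
            // CORRUPT_MESSAGE (code 2) has no branch of its own, so it
            // lands here and produces the exception above:
            throw new IllegalStateException(
                "Unexpected error code " + error.code() + " while fetching data");
        }
    }
}
{code}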

Before throwing that exception, our Kafka Streams job keeps rebalancing over 
and over. This is a simple job that reads data from Kafka and writes it to 
MongoDB. It reads from a topic with 32 partitions and runs on AWS ECS with 32 
instances (each with one stream thread). Any idea what could be going wrong? 
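
For completeness, the per-instance Streams configuration is essentially just 
the following minimal sketch (application id, bootstrap servers, and the 
class name are placeholders, not our literal production config):
{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class MongoWriterConfig {
    static Properties streamsProps() {
        Properties props = new Properties();
        // Placeholder values for illustration only.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "visitstats-mongowriter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        // One stream thread per ECS instance; 32 instances cover the
        // 32 input partitions (one task per partition).
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
        return props;
    }
}
{code}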

Thanks a lot.

  was:
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}

[jira] [Updated] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Description: 
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}
Looks like the error is thrown here 
[https://github.com/apache/kafka/blob/0.11.0.1/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L886]

Before throwing that exception, our Kafka Streams job keeps rebalancing over 
and over. This is a simple job that reads data from Kafka and writes it to 
MongoDB. Any idea what could be going wrong? 

Thanks a lot.

  was:
We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and constantly 
run into the following exception: 
{code:java}
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
partition assignment took 40 ms.
current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 0_18, 
0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
current standby tasks: []
previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
State transition from PARTITIONS_ASSIGNED to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
INFO org.apache.kafka.streams.KafkaStreams - stream-client 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State transition 
from REBALANCING to RUNNING.
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
ERROR org.apache.kafka.streams.processor.internals.StreamThread - stream-thread 
[visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
Encountered the following error during processing:
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
java.lang.IllegalStateException: Unexpected error code 2 while fetching data
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
{code}

[jira] [Updated] (KAFKA-6977) Kafka Streams - Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Summary:  Kafka Streams - Unexpected error code 2 while fetching data  
(was:  Unexpected error code 2 while fetching data)

>  Kafka Streams - Unexpected error code 2 while fetching data
> 
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: Streaming
>
> We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly 
> run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> {code}
> Before throwing that exception, our Kafka Streams job keeps rebalancing 
> over and over. This is a simple job that reads data from Kafka and writes 
> it to MongoDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6977) Unexpected error code 2 while fetching data

2018-05-31 Thread Eugen Feller (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugen Feller updated KAFKA-6977:

Labels: Streaming  (was: )

>  Unexpected error code 2 while fetching data
> 
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: Streaming
>
> We are running Kafka Streams 0.11 with Kafka Broker 0.10.2.1 and constantly 
> run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> {code}
> Before throwing that exception, our Kafka Streams job keeps rebalancing 
> over and over. This is a simple job that reads data from Kafka and writes 
> it to MongoDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)