Hi - We are using kafka_2.11-0.9.0.1. Using the kafka-configs.sh command, we 
set the consumer_byte_rate for a specific client.id that was misbehaving.
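
For reference, the alter looked roughly like the following (the client.id, 
ZooKeeper address, and rate value here are placeholders, not our actual ones):

  bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
    --entity-type clients --entity-name misbehaving-client \
    --add-config 'consumer_byte_rate=1048576'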


The new quota was low enough that it should have led to noticeable throttling 
for that client.


What we found was that connections from that client.id started failing with the 
following error:


org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading array of size 554098, only 64 bytes available


Here is the stack trace:


org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading array of size 554098, only 64 bytes available
        at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
        at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)


Resetting the consumer_byte_rate for that client.id back to the default 
immediately stopped the errors.
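
In case anyone wants to reproduce this, the reset amounts to removing the 
override, roughly along these lines (placeholder names again):

  bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
    --entity-type clients --entity-name misbehaving-client \
    --delete-config 'consumer_byte_rate'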


I couldn't find anything in Jira that looked similar to this. Has anyone seen 
anything like this before?


Thanks,

Paul
