[ https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042480#comment-15042480 ]

Jakob Homan commented on KAFKA-1980:
------------------------------------

This code has been removed in 0.9.0 and trunk (although similar code exists in
ReplayLogProducer and is likely vulnerable to this defect as well). Right now
there's no plan to make any further releases on the 0.8.x line. I don't have a
problem committing the fix to 0.8.2, but it's unlikely to ever see a release.
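
For reference, the attached patch is not reproduced here, but a fix consistent
with the analysis quoted below would bound consumption with the stream's lazy
iterator and an explicit counter rather than slicing the Iterable (a hedged
sketch only; consumeBounded, messageStream, and handle are illustrative names,
not the actual ConsoleConsumer code):

    // Hedged sketch: cap the number of consumed messages without slicing the
    // Iterable, so memory use stays constant however large the cap is.
    def consumeBounded[T](messageStream: Iterable[T], maxMessages: Int)
                         (handle: T => Unit): Unit = {
      val iter = messageStream.iterator // lazy: pulls one message at a time
      var consumed = 0
      while (consumed < maxMessages && iter.hasNext) {
        handle(iter.next())
        consumed += 1
      }
    }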

> Console consumer throws OutOfMemoryError with large max-messages
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1980
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1980
>             Project: Kafka
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 0.8.1.1, 0.8.2.0
>            Reporter: Håkon Hitland
>            Priority: Minor
>         Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic, passing a large number to --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large --from-beginning --max-messages 99999999 | head -n 40
> Expected result:
> Messages are streamed up to --max-messages.
> Actual result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>       at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>       at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>       at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>       at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>       at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>       at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>       at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>       at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>       at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>       at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>       at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>       at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>       at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>       at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>       at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>       at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory than expected, perhaps because it is called on an Iterable and not an Iterator?
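
To illustrate the guess above (a minimal standalone sketch, not Kafka code): in
Scala 2.x, slice() on a plain strict Iterable builds the kept elements into a
new collection, while Iterator.slice stays lazy, which is exactly the
difference that would turn a large --max-messages into a heap-filling
allocation:

    object SliceDemo {
      def main(args: Array[String]): Unit = {
        // Stand-in for a consumer stream: an unbounded supply of 1 KB "messages".
        def messages: Iterator[Array[Byte]] =
          Iterator.continually(new Array[Byte](1024))

        // Lazy path: Iterator.slice pulls one element at a time, so taking 40
        // messages out of a 99999999-message cap runs in constant memory.
        messages.slice(0, 99999999).take(40).foreach(m => println(m.length))

        // Eager path: the same stream seen as a plain Iterable. slice() here
        // goes through a strict builder and would try to hold all 99999999
        // elements at once; left commented out because running it exhausts
        // the heap.
        val asIterable: Iterable[Array[Byte]] = new Iterable[Array[Byte]] {
          def iterator: Iterator[Array[Byte]] = messages
        }
        // asIterable.slice(0, 99999999).take(40) // java.lang.OutOfMemoryError
      }
    }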



