Re: exception in log cleaner

2016-02-04 Thread Chen Song
(source.log.readInto(readBuffer, position))
for (entry <- messages.shallowIterator) {
  ByteBufferMessageSet.writeMessage(writeBuffer, entry.message, entry.offset)

On Thu, Jan 21, 2016 at 3:13 PM, Chen Song <chen.song...@gmail.com> wrote:
> I am testing compact policy on a t
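For context, the Scala fragment above is from Kafka's log cleaner, which rewrites a log so that only the latest message per key survives. A minimal, self-contained Python toy model of that idea (illustrative only; this is not Kafka's actual cleaner algorithm):

```python
def compact(messages):
    """Keep only the most recent (offset, value) per key, in offset order.

    messages: list of (offset, key, value) tuples, ordered by offset.
    A toy model of log compaction, not the real implementation.
    """
    latest = {}
    for offset, key, value in messages:
        latest[key] = (offset, value)  # later offsets overwrite earlier ones
    return sorted((offset, key, value) for key, (offset, value) in latest.items())

log = [(0, "k1", "a"), (1, "k2", "b"), (2, "k1", "c"), (3, "k3", "d")]
print(compact(log))  # [(1, 'k2', 'b'), (2, 'k1', 'c'), (3, 'k3', 'd')]
```

The surviving messages keep their original offsets, which is also true of real compaction: offsets are never renumbered, only gaps appear.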

Re: questions on number of partitions

2015-11-20 Thread Chen Song
Any thoughts on this topic? Not sure if others have seen the same spike as us.

On Tue, Nov 17, 2015 at 3:51 PM, Chen Song <chen.song...@gmail.com> wrote:
> BTW, we are running Kafka 0.8.2.2.
>
> On Tue, Nov 17, 2015 at 3:48 PM, Chen Song <chen.song...@gmail.com> wrote:

questions on number of partitions

2015-11-17 Thread Chen Song
broker CPU usage. However, this seems to be a limiting factor on parallelism (mostly from the consumer's perspective), because all I tested was just 400 partitions. Not sure if this is being addressed in any way. Any thoughts on this?

-- Chen Song
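The parallelism ceiling mentioned here follows from Kafka's consumer model: within a consumer group, each partition is assigned to exactly one consumer, so consumers beyond the partition count sit idle. A toy round-robin sketch of that constraint (illustrative only; real assignment strategies such as range or round-robin assignors differ in detail):

```python
def assign(partitions, consumers):
    # Toy round-robin assignment within one consumer group: each
    # partition goes to exactly one consumer. With more consumers
    # than partitions, the surplus consumers receive nothing.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

a = assign(list(range(4)), ["c1", "c2", "c3", "c4", "c5", "c6"])
print(a)  # c5 and c6 end up with empty assignments
```

This is why partition count, not consumer count, caps consumption parallelism.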

Re: Optimal number of partitions for topic

2015-11-17 Thread Chen Song
ater
> >>> # parallelism for consumption, but this will also result in more files
> >>> # across the brokers.
> >>> num.partitions=16
> >>>
> >>> # The minimum age of a log file to be eligible for deletion
> >>> log.retention.hours=1
> >>>
> >>> # The maximum size of a log segment file. When this size is reached a
> >>> # new log segment will be created.
> >>> log.segment.bytes=536870912
> >>>
> >>> # The interval at which log segments are checked to see if they can be
> >>> # deleted according to the retention policies
> >>> log.retention.check.interval.ms=6
> >>>
> >>> # By default the log cleaner is disabled and the log retention policy
> >>> # will default to just delete segments after their retention expires.
> >>> # If log.cleaner.enable=true is set the cleaner will be enabled and
> >>> # individual logs can then be marked for log compaction.
> >>> log.cleaner.enable=false
> >>>
> >>> # Timeout in ms for connecting to zookeeper
> >>> zookeeper.connection.timeout.ms=100
> >>>
> >>> auto.leader.rebalance.enable=true
> >>> controlled.shutdown.enable=true
> >>>
> >>> Thanks in advance.
> >>>
> >>> Carles Sistare
> >>
> >> --
> >> http://khangaonkar.blogspot.com/

-- Chen Song
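The quoted comment that more partitions "result in more files across the brokers" can be made concrete with rough arithmetic. The sketch below is illustrative only; all parameter values are assumptions, not figures from the thread:

```python
def segment_file_count(topics, partitions_per_topic, replication, segments_per_partition):
    # Rough cluster-wide count of log segment files. Each segment in
    # Kafka 0.8.x is stored as a .log file plus a .index file, hence * 2.
    # All arguments here are hypothetical inputs for illustration.
    return topics * partitions_per_topic * replication * segments_per_partition * 2

print(segment_file_count(10, 16, 2, 4))   # 2560 files
print(segment_file_count(10, 400, 2, 4))  # 64000 files
```

Going from 16 to 400 partitions per topic multiplies the file count 25x under the same assumptions, which is the operational cost the comment hints at.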

Re: questions on number of partitions

2015-11-17 Thread Chen Song
BTW, we are running Kafka 0.8.2.2.

On Tue, Nov 17, 2015 at 3:48 PM, Chen Song <chen.song...@gmail.com> wrote:
> We have a cluster of 32 nodes, with a few topics. When we set 100
> partitions for each topic, the overall CPU usage is 30%, and when we
> increase to 400 part

Re: Topic partitions randomly failed on live system

2015-09-21 Thread Chen Song
oker 1 as a leader were not working. ZK
> thought that everything was ok and in sync.
>
> Restarting broker 1 fixed the broken topics for a bit, until broker 1 was
> reassigned as leader of some topics, at which point it broke again.
> Restarting broker 2 fixed it ().
>
> We're using kafka-2.10.0_0.8.2.0. Could anyone explain what happened, and
> (most importantly) how we stop it happening again in the future?
>
> Many thanks,
> SimonC

-- Chen Song

Spark Streaming action not triggered with Kafka inputs

2015-01-23 Thread Chen Song
below:

Batch Processing Statistics
No statistics have been generated yet.

Am I doing anything wrong on the parallel receiving part?

-- Chen Song

Re: Spark Streaming action not triggered with Kafka inputs

2015-01-23 Thread Chen Song
Sorry, this was meant to go to the Spark users list. Ignore this thread.

On Fri, Jan 23, 2015 at 2:25 PM, Chen Song <chen.song...@gmail.com> wrote:
> I am running into some problems with Spark Streaming when reading from
> Kafka. I used Spark 1.2.0 built on CDH5. The example is based on:
> https://github.com

Re: producer performance

2014-07-11 Thread Chen Song
, Chen Song <chen.song...@gmail.com> wrote:

While testing Kafka producer performance, I found 2 testing scripts.

1) performance testing script in the Kafka distribution:

bin/kafka-producer-perf-test.sh --broker-list localhost:9092 --messages 1000 --topic test --threads 10 --message-size 100
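At its core, a producer perf script like the one above just times a send loop and divides message count by elapsed time. A minimal Python sketch of that measurement, decoupled from any real Kafka client (the `send` callable is a hypothetical stand-in for a producer's send method):

```python
import time

def measure_throughput(send, num_messages, message_size=100):
    # Time a tight send loop and report messages per second.
    # `send` is any callable taking one payload; here it stands in
    # for a real producer's send method (an assumption, not an API).
    payload = b"x" * message_size
    start = time.perf_counter()
    for _ in range(num_messages):
        send(payload)
    elapsed = time.perf_counter() - start
    return num_messages / elapsed if elapsed > 0 else float("inf")

sent = []
rate = measure_throughput(sent.append, 1000)
print(len(sent))  # 1000 messages "sent"
```

Real perf numbers depend heavily on batching, acks, and compression settings, so a loop like this only measures the client-side call path.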