(source.log.readInto(readBuffer, position))
for (entry <- messages.shallowIterator) {
  ByteBufferMessageSet.writeMessage(writeBuffer, entry.message, entry.offset)
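For what it's worth, here is a minimal sketch (plain Java, with names of my own invention, not Kafka's API) of the invariant that read/write loop preserves under the compact policy: after cleaning, only the latest record for each key survives.

```java
import java.util.*;

// Illustrative only: model a compacted log as (key, value) records in offset
// order; the cleaner keeps the last write per key, which is what the
// readInto/writeMessage loop above copies forward.
public class CompactSketch {
    static Map<String, String> compact(List<String[]> log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] rec : log) {
            latest.remove(rec[0]); // re-insert so iteration order follows the last write
            latest.put(rec[0], rec[1]);
        }
        return latest;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
            new String[]{"a", "1"}, new String[]{"b", "2"}, new String[]{"a", "3"});
        System.out.println(compact(log)); // {b=2, a=3} -- the first "a" is cleaned out
    }
}
```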
On Thu, Jan 21, 2016 at 3:13 PM, Chen Song <chen.song...@gmail.com> wrote:
> I am testing compact policy on a t
Any thoughts on this topic? Not sure if others have seen the same spike as
us.
On Tue, Nov 17, 2015 at 3:51 PM, Chen Song <chen.song...@gmail.com> wrote:
> BTW, we are running Kafka 0.8.2.2.
>
> On Tue, Nov 17, 2015 at 3:48 PM, Chen Song <chen.song...@gmail.com> wrote:
broker CPU usage. However,
this seems to be a limiting factor for parallelism (mostly from the consumer
perspective), because all I tested was just 400 partitions.
Not sure if this is being addressed in any way. Any thoughts on this?
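To make the parallelism point concrete, here is a toy round-robin assignment (illustrative Java, not Kafka's actual assignor): with more consumers in a group than partitions, the extra consumers receive nothing, so partition count caps consumption parallelism.

```java
import java.util.*;

// Illustrative: spread P partitions over C consumers round-robin. With C > P,
// some consumers end up with an empty assignment and sit idle.
public class AssignSketch {
    static Map<Integer, List<Integer>> assign(int partitions, int consumers) {
        Map<Integer, List<Integer>> out = new HashMap<>();
        for (int c = 0; c < consumers; c++) out.put(c, new ArrayList<>());
        for (int p = 0; p < partitions; p++) out.get(p % consumers).add(p);
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> a = assign(4, 6);
        System.out.println(a.get(4)); // [] -- consumers 4 and 5 get no partitions
    }
}
```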
--
Chen Song
> >>> # parallelism for consumption, but this will also result in more files
> >>> across
> >>> # the brokers.
> >>> num.partitions=16
> >>>
> >>> # The minimum age of a log file to be eligible for deletion
> >>> log.retention.hours=1
> >>>
> >>> # The maximum size of a log segment file. When this size is reached a
> >>> # new log segment will be created.
> >>> log.segment.bytes=536870912
> >>>
> >>> # The interval at which log segments are checked to see if they can be
> >>> deleted according
> >>> # to the retention policies
> >>> log.retention.check.interval.ms=6
> >>>
> >>> # By default the log cleaner is disabled and the log retention policy will
> >>> # default to just delete segments after their retention expires.
> >>> # If log.cleaner.enable=true is set the cleaner will be enabled and
> >>> individual logs can then be marked for log compaction.
> >>> log.cleaner.enable=false
> >>>
> >>> # Timeout in ms for connecting to zookeeper
> >>> zookeeper.connection.timeout.ms=100
> >>>
> >>> auto.leader.rebalance.enable=true
> >>> controlled.shutdown.enable=true
> >>>
> >>>
> >>> Thanks in advance.
> >>>
> >>>
> >>>
> >>> Carles Sistare
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> http://khangaonkar.blogspot.com/
> >>
> >
>
>
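Reading the retention settings in the quoted config: with log.cleaner.enable=false, a segment is simply eligible for deletion once its age exceeds log.retention.hours, re-checked every log.retention.check.interval.ms. A simplified sketch of that age check (size-based retention and compaction are ignored here, and the names are mine, not the broker's):

```java
// Illustrative: time-based retention only. A segment whose last-modified time
// is older than log.retention.hours is eligible for deletion on the next
// retention check.
public class RetentionSketch {
    static boolean eligibleForDeletion(long segmentLastModifiedMs, long nowMs,
                                       int retentionHours) {
        return nowMs - segmentLastModifiedMs > retentionHours * 3_600_000L;
    }

    public static void main(String[] args) {
        long now = 2L * 3_600_000L; // two hours after the segment was written
        System.out.println(eligibleForDeletion(0L, now, 1)); // true: past 1h retention
    }
}
```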
--
Chen Song
BTW, we are running Kafka 0.8.2.2.
On Tue, Nov 17, 2015 at 3:48 PM, Chen Song <chen.song...@gmail.com> wrote:
> We have a 32-node cluster with a few topics. When we set 100
> partitions for each topic, the overall CPU usage is 30%, and when we
> increase to 400 partitions
> broker 1 as a leader were not working. ZK
> thought that everything was ok and in sync.
>
> Restarting broker 1 fixed the broken topics for a bit, until broker 1 was
> reassigned as leader of some topics, at which point it broke again.
> Restarting broker 2 fixed it.
>
> We're using kafka_2.10-0.8.2.0. Could anyone explain what happened, and
> (most importantly) how we can stop it happening again in the future?
>
> Many thanks,
> SimonC
>
>
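One thing worth checking here is preferred-leader placement: a partition's "preferred" leader is the first replica in its assignment list, and with auto.leader.rebalance.enable the controller moves leadership back to it once too many partitions are led by a non-preferred broker. A sketch of that imbalance ratio (illustrative Java, not Kafka's controller code):

```java
import java.util.*;

// Illustrative: fraction of partitions whose current leader is not the first
// replica in the assignment list (the "preferred" leader).
public class ImbalanceSketch {
    static double imbalanceRatio(Map<Integer, List<Integer>> assignments,
                                 Map<Integer, Integer> leaders) {
        long notPreferred = assignments.entrySet().stream()
            .filter(e -> !leaders.get(e.getKey()).equals(e.getValue().get(0)))
            .count();
        return (double) notPreferred / assignments.size();
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> assignments = new HashMap<>();
        assignments.put(0, Arrays.asList(1, 2)); // broker 1 preferred
        assignments.put(1, Arrays.asList(2, 1)); // broker 2 preferred
        Map<Integer, Integer> leaders = new HashMap<>();
        leaders.put(0, 1);
        leaders.put(1, 1); // broker 1 took over partition 1's leadership
        System.out.println(imbalanceRatio(assignments, leaders)); // 0.5
    }
}
```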
--
Chen Song
below:
Batch Processing Statistics
No statistics have been generated yet.
Am I doing anything wrong with the parallel receiving part?
--
Chen Song
Sorry, this was meant to go to the Spark users list. Ignore this thread.
On Fri, Jan 23, 2015 at 2:25 PM, Chen Song chen.song...@gmail.com wrote:
I am running into some problems with Spark Streaming when reading from
Kafka. I used Spark 1.2.0 built on CDH5.
The example is based on:
https://github.com
, Chen Song chen.song...@gmail.com wrote:
While testing Kafka producer performance, I found 2 testing scripts.
1) The performance testing script in the Kafka distribution:
bin/kafka-producer-perf-test.sh --broker-list localhost:9092 --messages
1000 --topic test --threads 10 --message-size 100