Hi Evan,

Since the config values need to be in memory, I don't think it is
possible to avoid a restart if you want to change the partition count.
Avoiding a restart would mean exposing some kind of limited "hup"
command in Kafka brokers to reread the configs on demand, or rereading
the configs on a schedule - I'm not sure it is worth the effort,
since restarts are quick unless the broker does not shut down cleanly.
In any case, if you are using the high-level (zookeeper-based)
producers/consumers you can avoid "outages" by doing a rolling restart
of your brokers.

Thanks for pointing out the default partitioner issue - not sure if we
need a JIRA for it. Can one of the committers review this:

diff --git a/core/src/main/scala/kafka/producer/DefaultPartitioner.scala b/core/src/main/scala/kafka/producer/DefaultPartitioner.scala
index e1fac32..3459224 100644
--- a/core/src/main/scala/kafka/producer/DefaultPartitioner.scala
+++ b/core/src/main/scala/kafka/producer/DefaultPartitioner.scala
@@ -24,6 +24,6 @@ private[kafka] class DefaultPartitioner[T] extends Partitioner[T] {
     if(key == null)
       random.nextInt(numPartitions)
     else
-      key.hashCode % numPartitions
+      math.abs(key.hashCode) % numPartitions
   }
 }
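
For anyone skimming, here is a quick standalone sketch of the failure mode the
patch addresses: on the JVM, % keeps the sign of the dividend, so a key whose
hashCode comes out negative produces a negative partition index. The hash value
and partition count below are made-up example numbers, not anything from the
broker:

```scala
// Demonstrates why math.abs() is needed in the partitioner:
// "%" on the JVM keeps the dividend's sign, so a negative hashCode
// yields a negative (out-of-range) partition index.
object PartitionDemo extends App {
  val numPartitions = 4 // example value
  val hash = -7         // stand-in for a key.hashCode that came out negative

  val buggy   = hash % numPartitions            // -3: invalid partition index
  val patched = math.abs(hash) % numPartitions  //  3: valid partition index

  println(s"buggy=$buggy patched=$patched")
}
```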

Thanks,

Joel

On Thu, Sep 15, 2011 at 11:30 AM, Evan Chan <e...@ooyala.com> wrote:
> So currently if we want to bump the # of partitions, one needs to restart
> Kafka with a new server.properties file.
> Is there any way to make this dynamic?
>
> Let's say that my load has increased 2x recently, it would be nice not to
> have to restart Kafka.
>
>
> By the way, the default partitioner has a bug in it in 0.6, the hash
> function can return a negative value and throws an exception.
>
> --
> --
> *Evan Chan*
> Senior Software Engineer |
> e...@ooyala.com | (650) 996-4600
> www.ooyala.com | blog <http://www.ooyala.com/blog> |
> @ooyala<http://www.twitter.com/ooyala>
>
