Yes -- thanks for this post.

I am new to Kafka, and I'd like clarification on one point. The
classes referenced by this post:

http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs/kafka/consumer/package.html
http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs/kafka/producer/package.html

are the canonical Scala classes for writing Producer and Consumer
clients, correct? I am comparing these docs to the example clients
(particularly the Python and C++ examples). It seems the example
clients simply hard-code values such as "Partition ID", whereas these
docs show the complete way to discover such information.
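
To be concrete, here is roughly how I read the producer docs (just a
sketch on my part; the Zookeeper address, topic name, and config keys
below are placeholders, so please correct me if I have any of this
wrong):

    import java.util.Properties
    import kafka.producer.{Producer, ProducerConfig, ProducerData}

    // Producer configured via Zookeeper, letting Kafka pick the broker
    // and partition rather than hard-coding a partition ID the way the
    // example clients do.
    val props = new Properties()
    props.put("zk.connect", "localhost:2181")  // placeholder address
    props.put("serializer.class", "kafka.serializer.StringEncoder")

    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new ProducerData[String, String]("my-topic", "hello"))
    producer.close()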

By the way, it seems that if one has to hit Zookeeper every time
before sending a message to Kafka, throughput will take a hit. If one
wants a high-performance system, clients must "use [a] local copy of
the list of brokers and their number of partitions". Is this also
correct?
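
For example, I am imagining something like the following, using a
static broker.list instead of zk.connect so no Zookeeper round trip is
needed on the send path (again just a sketch; the broker ids, hosts,
and ports are made up):

    import java.util.Properties
    import kafka.producer.{Producer, ProducerConfig, ProducerData}

    // Static broker list: the client keeps its own copy of the brokers
    // and their partitions, at the cost of not reacting to cluster
    // changes automatically. Format, as I understand it:
    // brokerId:host:port, comma-separated.
    val props = new Properties()
    props.put("broker.list", "0:kafka-host-1:9092,1:kafka-host-2:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")

    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new ProducerData[String, String]("my-topic", "hello"))
    producer.close()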

Thanks,

Philip

--
Philip O'Toole
Senior Developer
Loggly, Inc.
San Francisco, CA

On Wed, Aug 29, 2012 at 6:12 PM, Pankaj Gupta <pan...@brightroll.com> wrote:
> Hey Ming,
>
> Thanks for blogging. The Kafka documentation is really good, but it is always
> good to see it from another perspective.
>
> Pankaj
> On Aug 29, 2012, at 3:57 PM, Ming Han wrote:
>
>> I wrote a blog post about some of Kafka's internals, if anyone is interested:
>> http://hanworks.blogspot.com/2012/08/down-rabbit-hole-with-kafka.html
>>
>> Thanks,
>> Ming Han
>
