Jun:
Hi. I found why the error appears. In a highly concurrent environment, the TCP
server will drop some packets when the TCP buffer overflows. So there is a
chance that "topic" contains one or more characters that encode to
bytes that include NULL (0).
I have submitted the patch to KAFKA-411, pls c
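(A minimal sketch of the failure mode being described, not Kafka's actual code: if a receiver pre-sizes a buffer and only part of the payload arrives, the unwritten tail stays zeroed, so the decoded topic string ends up containing NUL bytes. All names here are illustrative.)

```python
# Hypothetical illustration (not the broker's code): a short read into a
# pre-sized buffer leaves the unwritten tail as zero bytes, so the decoded
# "topic" contains NUL (0x00) characters.
buf = bytearray(8)               # receiver pre-allocates 8 bytes for the topic
received = b"top"                # only 3 of the 8 bytes actually arrive
buf[:len(received)] = received   # the remaining 5 bytes stay 0x00
topic = buf.decode("ascii")
print(repr(topic))               # 'top\x00\x00\x00\x00\x00'
```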
Jian,
Thanks for the patch. It may not be the right fix, though, since it fixes
the symptom but not the cause. For each produce request, the broker does
the following: (1) read all bytes of the request into
a BoundedByteBufferReceive (SocketServer.read); (2) after all bytes of the
request are ready
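(The size-prefixed framing step (1) describes can be sketched roughly as below; this is a simplified illustration of the idea behind `BoundedByteBufferReceive`, not the actual `SocketServer.read` implementation, and `read_bounded` is a made-up name.)

```python
import io
import struct

def read_bounded(stream):
    # Read one size-prefixed request: a 4-byte big-endian length followed by
    # exactly that many payload bytes. Loosely mirrors the idea behind
    # BoundedByteBufferReceive: the request is not handed to a handler until
    # every byte of it has arrived.
    header = stream.read(4)
    if len(header) < 4:
        raise EOFError("incomplete size header")
    (size,) = struct.unpack(">i", header)
    payload = stream.read(size)
    if len(payload) < size:
        raise EOFError("incomplete request body")
    return payload

frame = struct.pack(">i", 5) + b"hello"
print(read_bounded(io.BytesIO(frame)))   # b'hello'
```

With this framing, a partially delivered request raises an error instead of being parsed at the wrong offset.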
Hello,
I am evaluating Kafka to transfer log events between two datacenters. There is
~200 ms of latency between the two clusters.
Is anyone dealing with such latencies? How do you tweak Kafka and the
system to increase bandwidth use?
I can think of sysctl, increase the number of consumers u
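(For the sysctl angle, the usual starting point on a high-latency link is the bandwidth-delay product, which bounds the socket buffer sizes you would set via `net.core.rmem_max`/`wmem_max` and the Kafka socket buffer settings. A back-of-the-envelope calculation, with an assumed 1 Gbit/s target that is not from this thread:)

```python
# Bandwidth-delay product estimate for the 200 ms round trip.
# The 1 Gbit/s target throughput is an assumed number for illustration.
rtt_s = 0.200                     # round-trip time between the datacenters
target_bps = 1_000_000_000 / 8    # 1 Gbit/s expressed in bytes per second
bdp_bytes = target_bps * rtt_s    # bytes "in flight" needed to fill the pipe
print(f"BDP = {bdp_bytes / (1024 * 1024):.1f} MiB")   # 23.8 MiB
```

Socket buffers smaller than the BDP cap throughput regardless of how many consumers you run.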
Checking out the audit code, the patch in KAFKA-260 doesn't apply for me,
there is a problem
in core/src/main/scala/kafka/consumer/ConsumerIterator.scala.
I am working with the branch 0.7.1.
The section now looks like:
val item = localCurrent.next()
consumedOffset = item.offset
new M
nevermind.
On Mon, Jul 30, 2012 at 4:25 PM, Jonathan Creasy wrote:
> Checking out the audit code, the patch in KAFKA-260 doesn't apply for me,
> there is a problem
> in core/src/main/scala/kafka/consumer/ConsumerIterator.scala.
>
> I am working with the branch 0.7.1.
>
> The section now looks li