You should be fine if you have enough memory.
-----Original Message-----
From: Mayur Mohite [mailto:mayur.moh...@applift.com]
Sent: Friday, January 22, 2016 12:50 PM
To: users@kafka.apache.org
Subject: Larger socket buffer size configuration for Kafka brokers on a
high-bandwidth, high-delay network
Hi,
We have a 1 Gbit/s network with 200 ms of latency between the producer and
the Kafka brokers. Currently, the throughput of the (synchronous) producer
is very low: 1 message every 1.3 seconds, where each message carries
100 lines.
We tried tweaking the TCP window sizes (receive and send window incre
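For reference, the bandwidth-delay product sets how many bytes must be in flight to keep such a link full, and a socket buffer smaller than that caps throughput regardless of TCP window tuning. A quick sketch of the arithmetic in plain Java, using the numbers from the message above:

```java
public class BdpCalc {
    // Bandwidth-delay product: bytes "in flight" on the link at line rate.
    // A socket buffer smaller than this bounds per-connection throughput.
    static long bdpBytes(long linkBitsPerSec, long rttMillis) {
        return linkBitsPerSec / 8L * rttMillis / 1000L;
    }

    public static void main(String[] args) {
        // 1 Gbit/s link with a 200 ms round-trip time
        long bdp = bdpBytes(1_000_000_000L, 200L);
        System.out.println("BDP = " + bdp + " bytes"); // 25,000,000 ~= 25 MB
    }
}
```

So buffers on the order of 25 MB would be needed for full line rate here. If I recall the 0.9 names correctly, the relevant knobs are socket.send.buffer.bytes / socket.receive.buffer.bytes on the broker and send.buffer.bytes / receive.buffer.bytes on the clients, subject to the OS-level TCP buffer limits.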
Providing a non-zero value (zero is the default) for
public ConsumerRecords<K, V> poll(long timeout)
works fine for me with no gaps, but as said, it is just a workaround.
The consumer is definitely picking up messages with some delay.
-Sam
> On 22-Jan-2016, at 11:54 am, Jason Gustafson wrote:
Hi Krzysztof,
This is definitely weird. I see the data in the broker's send queue, but
there's a delay of 5 seconds before it's sent to the client. Can you create
a JIRA?
Thanks,
Jason
On Thu, Jan 21, 2016 at 11:30 AM, Samya Maiti
wrote:
> +1, facing same issue.
> -Sam
> > On 22-Jan-2016, at
Hi Robert!
Jason is the expert, and I hope he'll respond soon.
Meanwhile: I think that you can do what you are trying to do by:
1. call position() to get the current position you are consuming
2. call seekToEnd() and then position(), which will give you the last
position at the point in which you
Thank you.
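If it helps, the two steps above (plus seeking back so consumption can resume where it left off) can be sketched with a toy single-partition consumer. FakeConsumer here is a hypothetical stand-in that mirrors the position()/seekToEnd() call names, not the real KafkaConsumer:

```java
public class LagCheck {
    // Hypothetical stand-in for a single-partition consumer.
    static class FakeConsumer {
        private long position;     // next offset this consumer would read
        private final long logEnd; // log-end offset of the partition

        FakeConsumer(long position, long logEnd) {
            this.position = position;
            this.logEnd = logEnd;
        }
        long position() { return position; }
        void seekToEnd() { position = logEnd; }
        void seek(long offset) { position = offset; }
    }

    // Steps 1-2 from the message above: remember where we are, jump to
    // the end to learn the last position, then seek back to resume.
    static long lag(FakeConsumer c) {
        long current = c.position(); // step 1: current consume position
        c.seekToEnd();               // step 2: ...
        long end = c.position();     // ...last position right now
        c.seek(current);             // restore so consumption continues
        return end - current;
    }

    public static void main(String[] args) {
        FakeConsumer c = new FakeConsumer(42, 100);
        System.out.println("lag = " + lag(c)); // prints "lag = 58"
    }
}
```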
On Fri, 22 Jan 2016 at 08:39 Guozhang Wang wrote:
> Done.
>
> On Thu, Jan 21, 2016 at 12:38 AM, tao xiao wrote:
>
> > Hi Guozhang,
> >
> > Thanks for that.
> >
> > Can you please grant kevinth the write access too? He is my colleague and
> > both of us work on this topic now.
> >
> >
I'm reading the new client design in version 0.9, and I have a question
about how requests move in and out of inFlightRequests.
Here is the basic flow:
When the Sender sends a ClientRequest to the NetworkClient, it is added to
inFlightRequests to mark it as an in-flight request:
```java
private void doSend(ClientRequest request, long now) {
    // ... (rest of the method truncated in the archive)
}
```
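As a toy model of that in/out flow (an illustration with a plain Deque, not the real NetworkClient or InFlightRequests classes): a request is queued when sent and removed when its response arrives, with responses completing oldest-first per connection:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class InFlightSketch {
    // Toy model only: newest request at the head, oldest at the tail.
    private final Deque<String> inFlight = new ArrayDeque<>();

    void doSend(String request) {
        inFlight.addFirst(request);  // "in": added when the request is sent
    }

    String handleResponse() {
        // "out": oldest in-flight request completes first (FIFO per node)
        return inFlight.pollLast();
    }

    int inFlightCount() { return inFlight.size(); }

    public static void main(String[] args) {
        InFlightSketch c = new InFlightSketch();
        c.doSend("metadata-1");
        c.doSend("produce-2");
        System.out.println(c.handleResponse()); // prints "metadata-1"
        System.out.println(c.inFlightCount());  // prints "1"
    }
}
```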
Done.
On Thu, Jan 21, 2016 at 12:38 AM, tao xiao wrote:
> Hi Guozhang,
>
> Thanks for that.
>
> Can you please grant kevinth the write access too? He is my colleague and
> both of us work on this topic now.
>
> On Wed, 20 Jan 2016 at 14:55 Guozhang Wang wrote:
>
> > Tao,
> >
> > I have granted
Thanks Jim. I like the fact that the offset management will not require us to
customize Kafka. I will think more on this. Maybe a time-based seek will just
work... I think the math you proposed requires that the partition setup be
exactly the same as the original's, and that the partitioner map the messa
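On the partitioner point, a sketch of why the partition count must match: key-based placement is only stable modulo the same number of partitions. This uses String.hashCode() for illustration; the real default partitioner hashes the serialized key bytes (murmur2), but the modulo argument is the same:

```java
public class PartitionMath {
    // Illustrative key partitioning: hash(key) mod numPartitions.
    // The mask keeps the hash non-negative before the modulo.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key lands on the same partition only while the
        // partition count matches the original topic's.
        System.out.println(partitionFor("user-42", 8));
        System.out.println(partitionFor("user-42", 12)); // may differ
    }
}
```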
Thanks for the answer, Jason.
Does the consumer have any way to rejoin the consumer group?
2016-01-19 18:27 GMT+01:00 Jason Gustafson:
> Hey Franco,
>
> This time I'll answer briefly ;)
>
> 1) Heartbeats also get invoked when you call another blocking operation
> such as commitSync().
> 2) If all cons
Hi,
The new consumer is single-threaded. You can layer multi-threaded
processing on top of it, but you'll definitely need to be careful about how
offset commits are handled, to ensure (a) processing of a message is actually
*complete*, not just passed off to another thread, before committing an
offset
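A minimal sketch of that caution (hypothetical structure, not the consumer API): the poll loop hands records to a worker pool, and an offset only becomes committable after the worker has actually finished processing it:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class HandOffSketch {
    // Records are handed off to workers; an offset counts as committable
    // only AFTER processing completes, not at hand-off time.
    static long processAll(long[] polledOffsets) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        AtomicLong committable = new AtomicLong(-1);
        CountDownLatch done = new CountDownLatch(polledOffsets.length);
        for (long offset : polledOffsets) {
            workers.submit(() -> {
                // ... real record processing would happen here ...
                committable.accumulateAndGet(offset, Math::max);
                done.countDown();          // processing is *complete*
            });
        }
        done.await();                      // only now is a commit safe
        workers.shutdown();
        return committable.get();          // highest fully processed offset
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("committable: " + processAll(new long[]{0, 1, 2}));
    }
}
```

Note that committing the maximum completed offset can silently skip a lower offset whose processing failed; real code would track completion per offset before committing.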
For the offset, at the start of topic (and perhaps periodically in the
topic), the script could make a note of the corresponding offset in the
previous topic. The consumer could then see the correspondence between
the current topic offsets and the previous topic offsets and do some math
to get to
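The bookkeeping described could look like this sketch (OffsetMap is a hypothetical class; it assumes messages were copied 1:1, in order, between the recorded checkpoints):

```java
import java.util.Map;
import java.util.TreeMap;

public class OffsetMap {
    // Checkpoints recorded by the script: "offset X in the current topic
    // corresponds to offset Y in the previous topic".
    private final TreeMap<Long, Long> checkpoints = new TreeMap<>();

    void record(long currentTopicOffset, long prevTopicOffset) {
        checkpoints.put(currentTopicOffset, prevTopicOffset);
    }

    // The "math": take the nearest checkpoint at or below the offset and
    // add the distance past it. Valid only if copying was 1:1 and ordered.
    long toPreviousOffset(long currentTopicOffset) {
        Map.Entry<Long, Long> cp = checkpoints.floorEntry(currentTopicOffset);
        return cp.getValue() + (currentTopicOffset - cp.getKey());
    }

    public static void main(String[] args) {
        OffsetMap m = new OffsetMap();
        m.record(0, 1000);  // current topic started from old offset 1000
        System.out.println(m.toPreviousOffset(42)); // prints "1042"
    }
}
```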
I am testing the compact policy on a topic and got the following exception in
log-cleaner.log. It seems to be related to the size of the ByteBuffer. Has
anyone seen this error before, or is there any config I can tune to increase this?
[2016-01-21 04:21:23,083] INFO Cleaner 0: Beginning cleaning of log
topic-
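For what it's worth, the log cleaner's buffers are bounded by broker configs; a hedged sketch of the settings one might try raising (names as I recall them from the 0.9 broker config; the values below are illustrative, not recommendations, and should be checked against the defaults for your version):

```properties
# Total memory used for cleaner I/O buffers, across all cleaner threads
log.cleaner.io.buffer.size=1048576
# Memory used for the cleaner's offset-map dedupe buffer
log.cleaner.dedupe.buffer.size=268435456
```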
+1, facing same issue.
-Sam
> On 22-Jan-2016, at 12:16 am, Krzysztof Ciesielski
> wrote:
>
> Hello, I'm running into an issue with the new consumer in Kafka 0.9.0.0.
> Here's a runnable gist illustrating the problem:
> https://gist.github.com/kciesielski/054bb4359a318aa17561 (requires Kafka on
>
Hello, I'm running into an issue with the new consumer in Kafka 0.9.0.0.
Here's a runnable gist illustrating the problem:
https://gist.github.com/kciesielski/054bb4359a318aa17561 (requires Kafka on
localhost:9092)
Scenario description:
First, a producer writes 50 elements into a topic
Then, a
Thanks Dave and Joel. I created a PR to add this note to the Upgrade Notes:
https://github.com/apache/kafka/pull/798
Please take a look.
Ismael
On Thu, Jan 21, 2016 at 7:43 AM, Joel Koshy wrote:
> Hi Dave,
>
> This change was introduced in
> https://issues.apache.org/jira/browse/KAFKA-1755 fo
Hi Guozhang,
Thanks for that.
Can you please grant kevinth the write access too? He is my colleague and
both of us work on this topic now.
On Wed, 20 Jan 2016 at 14:55 Guozhang Wang wrote:
> Tao,
>
> I have granted you the access.
>
> Guozhang
>
>
> On Tue, Jan 19, 2016 at 7:56 PM, Connie Yang