Maybe there are some changes in 0.9.0.0;
but you can still try increasing the producer sending rate, and see if
messages are lost without any exception;
note that, to increase the producer sending rate, you must have enough producer
'power';
in my case, I have 50 producers sending messages at the
Hi Jinxing
I don't think we can resolve this issue by increasing producers. If I
added more producers, it would lose more messages.
I just tested two producers.
Thread Producer 1 has 83064 messages on the producer side and 82273 messages
on the consumer side.
Thread Producer 2 has 89844 messages in
There is a flush API on the producer; you can call it to prevent message
loss.
Maybe it can help.
At 2015-11-12 16:43:54, "Hawin Jiang" wrote:
>Hi Jinxing
>
>I don't think we can resolve this issue by increasing producers. if I
>increased more producers, it
If you have a Kafka partition that is replicated to 3 nodes, the leader
varies (in time), thus making the colocation pointless. You can only produce
to and consume from the leader.
/svante
2015-11-12 9:00 GMT+01:00 Young, Ben :
> Hi,
>
> Any thoughts on this? Perhaps
in kafka_0.8.3.0:
kafkaProducer = new KafkaProducer<>(properties, new ByteArraySerializer(),
new ByteArraySerializer());
kafkaProducer.flush();
You can call flush() after sending every few messages.
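A minimal sketch of the pattern described above, using the new Java producer API. The broker address, topic name, and batch size here are illustrative assumptions, not from the original thread; it needs a running broker to execute:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class FlushingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "dn-01:9092"); // assumed broker address

        KafkaProducer<byte[], byte[]> producer =
            new KafkaProducer<>(props, new ByteArraySerializer(), new ByteArraySerializer());

        for (int i = 0; i < 1000; i++) {
            producer.send(new ProducerRecord<>("kafka-test", ("msg-" + i).getBytes()));
            if (i % 100 == 0) {
                // block until all buffered records are sent (or fail with an error)
                producer.flush();
            }
        }
        producer.flush();
        producer.close();
    }
}
```

Flushing in small batches trades throughput for an earlier signal that buffered sends actually completed.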
At 2015-11-12 17:36:24, "Hawin Jiang" wrote:
>Hi Prabhjot
>
>The
Hi,
Just to confirm that the number of messages produced are what you are
seeing, What does GetOffsetShell report for this topic ?
Regards,
Prabhjot
On Thu, Nov 12, 2015 at 2:13 PM, Hawin Jiang wrote:
> Hi Jinxing
>
> I don't think we can resolve this issue by
Hi Prabhjot
The messages are "Thread1_kafka_1" and "Thread2_kafka_1". Something like
that.
For GetOffsetShell report below:
[kafka@dn-01 bin]$ ./kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list dn-01:9092 --time -1 --topic kafka-test
kafka-test:0:12529261
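One way to turn those offsets into a message count is to ask for both the earliest (`--time -2`) and latest (`--time -1`) offsets and subtract; this sketch reuses the broker and topic from the command above and needs a live cluster:

```shell
# latest offset per partition
./kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list dn-01:9092 --time -1 --topic kafka-test

# earliest offset per partition (anything below this was removed by retention)
./kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list dn-01:9092 --time -2 --topic kafka-test

# latest minus earliest = messages currently on the broker for that partition
```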
@Jinxing
Can you share
Hi Jinxing
I am using kafka_2.10-0.9.0.0-SNAPSHOT. I downloaded the source code and
installed it last week.
I saw that 97446 messages were sent to Kafka successfully.
So far, I have not found any failed messages.
Best regards
Hawin
On Thu, Nov 12, 2015 at 12:07 AM, jinxing
Hi,
Any thoughts on this? Perhaps Kafka is not the best way to go for this, but the
docs do mention transaction/replication logs as a use case, and I'd have
thought locality would have been important for that?
Thanks,
Ben
-Original Message-
From: Young, Ben
Hi all,
I am new to Kafka usage. Here are some questions that I have in mind.
Kindly help me understand it better. If some questions make no sense feel free
to call it out.
1. Is it possible to prune log offsets (messages) older than a certain date in
a partition?
2. Will Kafka delete a
Yes, though it's still awaiting some updates after some renaming and API
modifications that happened in Kafka recently.
-Ewen
On Thu, Nov 12, 2015 at 9:10 AM, Venkatesh Rudraraju <
venkatengineer...@gmail.com> wrote:
> Ewen,
>
> How do I use a HDFSSinkConnector. I see the sink as part of a
Hi Everyone,
We are using Kafka 0.8.2.1 and we noticed that the Kafka/ZooKeeper client
was not able to gracefully handle a non-existent ZooKeeper instance.
This caused one of our brokers to get stuck during a shutdown, and that
seemed to impact the partitions for which the broker was the leader
Thanks for a quick response.
On 12-Nov-2015 16:28, "Gerard Klijs" wrote:
> Hi Hemanth, it was introduced on 04/04/2015 at github, so after the 0.8.2.0
> version, it will be part of the 0.9.0.0 release.
>
> On Thu, Nov 12, 2015 at 8:39 AM Hemanth Yamijala
Yes.
1) Start kafka with JMX_PORT, like this:
JMX_PORT=9997 bin/kafka-server-start.sh config/server-1.properties &
2) Create a new item in Zabbix by setting type *JMX agent*, and a key like
*jmx["kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=sshd_in","OneMinuteRate"]*
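If you want to check the same MBean outside Zabbix, a small JMX client can read it directly. This is a sketch assuming a broker started with JMX_PORT=9997 on localhost, as in step 1; it only runs against a live broker:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadKafkaRate {
    public static void main(String[] args) throws Exception {
        // JMX endpoint exposed by JMX_PORT=9997 (assumed host/port)
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:9997/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // same MBean and attribute the Zabbix JMX-agent key reads
            ObjectName name = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=sshd_in");
            Object rate = mbs.getAttribute(name, "OneMinuteRate");
            System.out.println("MessagesInPerSec (1-min rate): " + rate);
        }
    }
}
```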
See answers inline
On Thu, Nov 12, 2015 at 2:53 PM, Sandhu, Dilpreet
wrote:
> Hi all,
>I am new to Kafka usage. Here are some questions that I have in
> mind. Kindly help me understand it better. If some questions make no sense
> feel free to call it out.
> 1. Is
Thanks, Gwen, for your excellent slides.
I will test it again based on your suggestions.
Best regards
Hawin
On Thu, Nov 12, 2015 at 6:35 PM, Gwen Shapira wrote:
> Hi,
>
> First, here's a handy slide-deck on avoiding data loss in Kafka:
>
>
The new consumer (0.9.0) will not be compatible with older brokers (0.8.2).
In general you should upgrade brokers before upgrading clients. The old
clients (0.8.2) will work on the new brokers (0.9.0).
Thanks,
Grant
On Thu, Nov 12, 2015 at 7:52 AM, Han JU wrote:
>
Ok thanks for your confirmation!
2015-11-12 15:19 GMT+01:00 Grant Henke :
> The new consumer (0.9.0) will not be compatible with older brokers (0.8.2).
> In general you should upgrade brokers before upgrading clients. The old
> clients (0.8.2) will work on the new brokers
Hi,
First, here's a handy slide-deck on avoiding data loss in Kafka:
http://www.slideshare.net/gwenshap/kafka-reliability-when-it-absolutely-positively-has-to-be-there
Note configuration parameters like the number of retries.
Also, it looks like you are sending data to Kafka asynchronously, but
I have 3 brokers;
the ack configuration is -1 (all), meaning a message is sent successfully only
after getting every broker's ack.
Is this a bug?
At 2015-11-12 21:08:49, "Pradeep Gollakota" wrote:
>What is your producer configuration? Specifically, how many acks are
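For reference, the settings being discussed look roughly like this in a new-producer config file; the values are illustrative, not taken from the poster's setup:

```
# producer config fragment (illustrative values)
bootstrap.servers=dn-01:9092
# -1 / "all": the leader waits for the full set of in-sync replicas to acknowledge
acks=all
# retry transient send failures instead of silently dropping the record
retries=3
```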
Hi All:
I am trying to set up an Eclipse environment to examine Kafka source code, run
unit tests, etc. Using Eclipse Mars, Scala IDE plugin 4.2
I followed the steps outlined in
https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
Hi Pradeep
Here is my configuration
# Producer Basics #
# list of brokers used for bootstrapping knowledge about the rest of the
cluster
# format: host1:port1,host2:port2 ...
metadata.broker.list=localhost:9092
# name of the partitioner