Hi,
I can understand that reading from Kafka can be faster, since messages are
not deleted once they are read. But wouldn't writing to Kafka take the same
effort as writing to an MQ? Is my understanding correct?
-Yash
Thanks for catching this, Odin! Now that I have checked the release process
again, I realized that I should have used release.py instead of doing every
step manually, so a few steps were missed. Most notably, the
kafka_2.11-1.1.1.tgz generated for RC1 was compiled with Java 8 instead of
Java 7. Thus a new
relea
Hello Kafka users, developers and client-developers,
This is the second candidate for release of Apache Kafka 1.1.1.
Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
that release. A few of the mor
Hello Sam,
Since 1.1.0 we do take load balancing between sub-topologies into
consideration, as summarized in this PR:
https://github.com/apache/kafka/pull/4624
Note that generally we want to optimize our task assignor further to be
state-store aware (https://issues.apache.org/jira/browse/KAFKA-
Thanks, it helped. I deleted a few topics' data logs.
On Thu, Jun 28, 2018 at 4:51 PM, Zakee wrote:
> It depends.
>
> You can clean up data folder before starting the broker as long as you
> have data replicated in other healthy brokers. When you start the broker
> with clean data folder, it wi
"Finally Kafka leans heavily on the OS pagecache for data storage. Although the
question says that kafka writes to disk immediately, that is not completely
true. Actually Kafka just writes to the filesystem immediately, which is really
just writing to the kernel's memory pool which is asynchrono
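The asynchronous flush described above can be seen from user space: a write() returns once the bytes are in the kernel page cache, and only an explicit fsync forces them to durable storage. A minimal sketch in plain Python file I/O (not Kafka's own code, just the OS behavior it relies on):

```python
import os
import tempfile

# write() returns as soon as the bytes are in the kernel page cache;
# they are not yet guaranteed to be on the physical disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"kafka log segment bytes")

# Only an explicit fsync forces the page-cache contents to durable storage.
# Kafka relies on replication, not fsync-per-write, for durability.
os.fsync(fd)
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)
```

This is why Kafka's writes appear fast: the broker hands bytes to the page cache and lets the kernel schedule the actual disk I/O.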
It depends.
You can clean up data folder before starting the broker as long as you have
data replicated in other healthy brokers. When you start the broker with clean
data folder, it will start catching up with replica leaders and eventually join
in-sync replicas. The catchup traffic impact th
Hello Kafka users,
How do I recover a Kafka broker from a full disk?
I updated the log retention period from 7 days to 1 hour, but this would
take effect only when the broker is restarted.
Is there any way other than increasing the disk space?
Thanks,
Vignesh
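For per-topic retention, the override can also be applied with kafka-configs.sh; a sketch, where the topic name and ZooKeeper address are placeholders and the flags follow the 0.11/1.1-era tooling:

```shell
# Lower retention for one topic to 1 hour (placeholder topic/host names).
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=3600000

# Verify the override took effect.
bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name my-topic
```

Eligible segments are then deleted at the next log-cleanup pass rather than immediately.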
So say there is only one consumer in a consumer group (to make the
order-guarantee scenario) and it reads from only one of the partitions at a
time (say the topic it is subscribing to is split into 3 partitions); then
the only use of putting data into the other two partitions is:
1. Other co
Please correct me if I'm wrong, but I'm under the impression that the task_id
in streams metrics is formatted as <topicGroupId>_<partition>, and
topicGroupId corresponds to a particular subtopology in the streams topology. I
assume that's true for the rest of this message.
I have a streams app with multiple sub topolog
In your case, you need to restart B2 with unclean.leader.election.enable=true.
This will enable B2 to become the leader with 90 messages.
On Thu, Jun 28, 2018 at 11:51 PM Jordan Pilat wrote:
> If I restart the broker, won't that cause all 100 messages to be lost?
>
> On 2018/06/28 02:59:15, Manikumar wrot
If I restart the broker, won't that cause all 100 messages to be lost?
On 2018/06/28 02:59:15, Manikumar wrote:
> You can temporarily enable unclean leader election for a specific topic by
> setting unclean.leader.election.enable=true with the kafka-topics.sh command.
> This requires a broker restart to take effect.
>
> http://kafka.apache.org
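The per-topic override mentioned above could look like this; the topic name and ZooKeeper address are placeholders, and per the thread a broker restart was still needed on this version for the change to take effect:

```shell
# Allow an out-of-sync replica to become leader for this topic only.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic \
  --config unclean.leader.election.enable=true

# Remove the override once the partition is healthy again.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic \
  --delete-config unclean.leader.election.enable
```

Reverting promptly matters: leaving unclean election enabled trades durability for availability on every future failure.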
Yes, that's how Kafka works - all partitions are read in parallel, but only one
consumer from the same consumer group reads a partition at a time (a consumer
may consume multiple partitions, but no two consumers from the same group
consume the same partition).
Virgil.
On 6/28/18, 7:45 PM, "Malik
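The invariant described above (each partition owned by exactly one consumer in a group, while one consumer may own several) can be sketched with a toy round-robin assignment. This is illustrative only, not Kafka's actual range/round-robin/sticky assignors:

```python
def assign_partitions(partitions, consumers):
    """Toy round-robin assignment: every partition goes to exactly one
    consumer in the group; a consumer may end up owning several."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 3 partitions, 2 consumers in the same group.
result = assign_partitions([0, 1, 2], ["c1", "c2"])
print(result)  # {'c1': [0, 2], 'c2': [1]}
```

With more consumers than partitions, some consumers would sit idle - which is why partition count caps a group's parallelism.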
You mean that if we use multiple partitions for a topic, say the topic is for
the event "Customer Account Registration", then we can have multiple consumers
read from different partitions at a time (only one consumer per partition) and
perform registration of different customers in parallel?
Hi all,
Kafka is not sending messages to a consumer even though the consumer is still
active and subscribed to the partition. Please help us understand why this is
happening.
Infrastructure:
Kafka is running in cluster mode with 3 brokers and 3 zookeeper instances.
Kafka broker is runn
Does the container system used in your Rancher environment have persistence
configured for the brokers? Or are you using ephemeral storage?
On Thu, Jun 28, 2018 at 7:39 AM Karthick Kumar wrote:
> Hi,
>
> I'm using Kafka cluster on three different servers, Recently my servers
> went down when I
Hi,
I'm using a Kafka cluster on three different servers. Recently my servers
went down; when I started the server and then started the services, one of
the Zookeeper instances did not connect to the Kafka cluster, and it stayed
that way for two days...
So I killed the stack in Rancher and then started the new one all
I'm running a Kafka cluster on 3 EC2 instances. Each instance runs kafka
(0.11.0.1) and zookeeper (3.4). My topics are configured so that each has
20 partitions and ReplicationFactor of 3.
Today I noticed that some partitions refuse to sync to all three nodes.
Here's an example:
bin/kafka-topics.
Yes, it looks like the Maven artifacts are missing from the staging repo:
https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/
On Thu, Jun 28, 2018 at 4:18 PM Odin wrote:
> There are no 1.1.1-rc1 artifacts in the staging repo listed. Where can
> they be found?
>
> Sincerely
> Odin
There are no 1.1.1-rc1 artifacts in the staging repo listed. Where can they be
found?
Sincerely
Odin Standal
‐‐‐ Original Message ‐‐‐
On June 22, 2018 7:09 PM, Dong Lin wrote:
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for rel
Messages within the same partition are ordered. You don't need to use only one
partition (unless you need global ordering) - you just need to use keys. E.g.
if your key is the account number, then all operations done on the same account
are ordered; if your key is the customer ID, all operations
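The key-to-partition mapping behind this ordering guarantee can be sketched as follows. Note the hash here is illustrative: Kafka's default partitioner actually uses murmur2 over the key bytes, but any deterministic hash shows the idea.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Illustrative hash only: Kafka's default partitioner uses murmur2,
    # not MD5; what matters is that the mapping is deterministic.
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return h % num_partitions

# Every event keyed by the same account number lands in the same
# partition, so those events stay ordered relative to each other.
assert partition_for(b"account-42", 3) == partition_for(b"account-42", 3)
```

Different keys may share a partition (hash collisions are fine); the guarantee is only that one key never spreads across partitions.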