e that isn't already suited to the other thread. My 2c, since I follow
> this list for Kafka information.
>
> On Thu, Dec 7, 2017 at 6:41 PM, Khurrum Nasim <khurrumnas...@gmail.com>
> wrote:
>
> > Hi Kafka Community,
> >
> > Has anyone taken a look at this blog post, comparing Pulsar and Kafka
> > from an architectural view?
Hi Kafka Community,
Has anyone taken a look at this blog post, comparing Pulsar and Kafka from
an architectural view? I am wondering how you guys think about
segment-centric vs partition-centric.
https://streaml.io/blog/pulsar-segment-based-architecture/
- KN
competing
> projects solving all the same use cases. We don't need to try to cram
> Pulsar features into Kafka if it's not a good fit and vice versa. At the
> same time, where capabilities do overlap, we can try to learn from their
> experience and they can learn from ours. The example of message retention
> seemed like one of these instances since there are legitimate use cases and
> Pulsar's approach has some benefits.
>
Sure, makes sense.
> > track the progress of the group through its committed offsets. It
> > might be possible to extend it to automatically delete records in a topic
> > after offsets are committed if the topic is known to be exclusively owned
> > by the consumer group. We already have the DeleteRecords API, so it may
> just be a matter of some additional topic metadata. I'd be
> interested to hear whether this kind of use case is common among our users.
>
> -Jason
>
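The scheme Jason sketches above — truncating a topic once the consumer group(s) reading it have committed past an offset — can be illustrated as pure logic. This is a hypothetical sketch, not an existing Kafka feature: the function name and inputs are made up, and a real implementation would fetch committed offsets from the broker and issue the actual DeleteRecords call.

```python
# Hypothetical sketch (not a Kafka API): compute, per partition, the offset
# a topic could safely be truncated to once the groups reading it have
# committed past that point. With a single, exclusively-owning group
# (Jason's case) this is just that group's committed offsets; with several
# groups it is the minimum across them.

def safe_truncation_points(group_offsets):
    """group_offsets: {group_id: {partition: committed_offset}}.

    Returns {partition: offset} suitable for a DeleteRecords-style call.
    A committed offset of N means records below N are fully processed.
    """
    partitions = set()
    for offsets in group_offsets.values():
        partitions.update(offsets)
    return {
        # A group with no commit on a partition pins it at 0 (keep everything).
        p: min(offsets.get(p, 0) for offsets in group_offsets.values())
        for p in partitions
    }

# Single exclusive owner: truncate straight to its committed offsets.
print(safe_truncation_points({"group-a": {0: 1500, 1: 980}}))  # {0: 1500, 1: 980}
```

The "exclusively owned" check Jason mentions matters because, with more than one reader, only the slowest group's committed offset is a safe truncation point.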
> On Sun, Dec 3, 2017 at 10:29 PM, Khurrum Nasim <khurrumnas...@gmail.com>
> wrote:
>
> > Dear Kafka Community,
> >
>
Dear Kafka Community,
I happened to read this blog post comparing the messaging model between
Apache Pulsar and Apache Kafka. It sounds interesting. Apache Pulsar claims
to unify streaming (Kafka) and queuing (RabbitMQ) in one API.
Pulsar also seems to support the Kafka API. Has anyone taken a look at it?
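On the queuing-vs-streaming point: Kafka already gives queue-like semantics within a consumer group (each record goes to exactly one member) and pub/sub semantics across groups (every group sees every record). A toy simulation of that delivery rule, with made-up names and a deliberately simplified partition assignment:

```python
from collections import defaultdict

def dispatch(records, groups):
    """Toy model of Kafka delivery semantics (not real client code).

    records: list of (partition, value) pairs.
    groups:  {group_id: [member_id, ...]} -- members of each consumer group.

    Every group sees every record (pub/sub across groups); within a group,
    a record goes to exactly one member -- the one assigned its partition
    (queue-like sharing within a group).
    """
    delivered = defaultdict(list)
    for group, members in groups.items():
        for partition, value in records:
            # Simplified static assignment: partition modulo member count.
            owner = members[partition % len(members)]
            delivered[(group, owner)].append(value)
    return dict(delivered)

records = [(0, "a"), (1, "b"), (0, "c")]
out = dispatch(records, {"stream": ["s1"], "queue": ["q1", "q2"]})
# "stream"'s single member sees a, b, c; "queue"'s two members split them.
```

Whether one API covering both models is cleaner than Kafka's group-based approach is exactly the kind of design question the blog post raises.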
On Tue, Aug 23, 2016 at 9:00 AM, Bryan Baugher wrote:
>
> > Hi everyone,
> >
> > Yesterday we had lots of network failures running our Kafka cluster
> > (0.9.0.1, ~40 nodes). We run everything using the higher durability
> > settings in order to avoid any data loss, producers
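For readers wondering what "higher durability settings" typically means on a 0.9.x-era cluster, a common combination looks like the following. These values are illustrative, not Bryan's actual configuration:

```properties
# Broker / topic (topics created with replication.factor=3)
min.insync.replicas=2
unclean.leader.election.enable=false

# Producer
acks=all
retries=2147483647
```

With `acks=all` and `min.insync.replicas=2`, a write is only acknowledged once it is on at least two replicas, which is what makes network partitions surface as producer errors rather than silent data loss.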