Great! I'd love to see this move forward, especially if the design allows
for per-key conditionals sometime in the future – doesn't have to be in the
first iteration.
On Tue, Jul 14, 2015 at 5:26 AM Ben Kirwin b...@kirw.in wrote:
Ah, just saw this. I actually just submitted a patch this evening --
just for the partitionwide version at the moment, since it turns out
to be pretty simple to implement. Still very interested in moving
forward with this stuff, though I don't always have as much time for it
as I would like...
On Thu, Jul 9,
Ben, are you still interested in working on this?
On Mon, Jun 15, 2015 at 9:49 AM Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
I like to refer to it as conditional write or conditional request,
semantically similar to HTTP's If-Match header.
Ben: I'm adding a comment about per-key checking to your JIRA.
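The If-Match analogy is easy to make concrete: a client says "apply this
write only if the resource's version (ETag) still matches what I last read,"
and the server rejects it otherwise. A minimal sketch against a hypothetical
in-memory store (illustration of the semantics only, not Kafka code):

```python
class ConditionalStore:
    """Toy in-memory store mimicking HTTP's If-Match precondition."""

    def __init__(self):
        self.value = None
        self.version = 0  # plays the role of an ETag

    def put(self, value, if_match):
        # Reject the write if the caller's expected version is stale
        # (the HTTP analogue would be 412 Precondition Failed).
        if if_match != self.version:
            return False
        self.value = value
        self.version += 1
        return True


store = ConditionalStore()
first = store.put("a", if_match=0)   # versions match: accepted
stale = store.put("b", if_match=0)   # version has moved on: rejected
retry = store.put("b", if_match=1)   # accepted after re-reading the version
```

A rejected write tells the producer its view is stale, so it can re-read
and re-validate before retrying, which is the event-sourcing case Daniel
raises below.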
On Mon, Jun 15, 2015 at 4:06 AM Ben Kirwin b...@kirw.in wrote:
Yeah, it's definitely not a standard CAS, but it feels like the right
fit for the commit log abstraction -- CAS on a 'current value' does
seem a bit too key-value-store-ish for Kafka to support natively.
I tried to avoid referring to the check-offset-before-publish
functionality as a CAS in the
@Jay:
Regarding your first proposal: wouldn't that mean that a producer wouldn't
know whether a write succeeded? In the case of event sourcing, a failed CAS
may require re-validating the input with the new state. Simply discarding
the write would be wrong.
As for the second idea: how would a
Ewen: would single-key CAS necessitate random reads? My idea was to have
the broker maintain an in-memory table that could be rebuilt from the log
or a snapshot.
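The table Daniel describes can indeed be rebuilt with one sequential scan
of the log, keeping only each key's latest offset, so recovery needs no
random reads. A minimal sketch, assuming records are simple (key, value)
pairs (hypothetical format, not a real broker structure):

```python
def build_key_offset_table(log):
    """One sequential pass over (key, value) records, keeping each key's
    latest offset. This is the whole recovery procedure after a restart."""
    table = {}
    for offset, (key, _value) in enumerate(log):
        table[key] = offset  # later records shadow earlier ones
    return table


log = [("a", 1), ("b", 2), ("a", 3)]
table = build_key_offset_table(log)  # {"a": 2, "b": 1}
```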
On Sat, Jun 13, 2015 at 20:26 Ewen Cheslack-Postava e...@confluent.io
wrote:
Daniel: By random read, I meant not reading the data sequentially as is the
norm in Kafka, not necessarily a random disk seek. That in-memory data
structure is what enables the random read. You're either going to need the
disk seek if the data isn't in the fs cache or you're trading memory to
But wouldn't the key-offset table be enough to accept or reject a write?
I'm not familiar with the exact implementation of Kafka, so I may be wrong.
On Sat, Jun 13, 2015 at 21:05 Ewen Cheslack-Postava e...@confluent.io
wrote:
If you do CAS where you compare the offset of the current record for the
key, then yes. This might work fine for applications that track key, value,
and offset. It is not quite the same as doing a normal CAS.
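The compare-the-offset variant Ewen describes can be sketched like this:
the producer sends the offset it last saw for the key, and the append is
rejected if the key has been written since. An illustrative model only
(hypothetical names, not a proposed broker API):

```python
def conditional_append(log, table, key, value, expected_offset):
    """Append (key, value) only if the key's current latest offset equals
    what the producer last saw. `table` maps key -> latest offset
    (an absent key reads as None). Returns the new offset, or None if
    the check failed."""
    if table.get(key) != expected_offset:
        return None  # someone wrote this key since the producer read it
    log.append((key, value))
    table[key] = len(log) - 1
    return table[key]


log, table = [], {}
ok = conditional_append(log, table, "k", "v1", expected_offset=None)    # 0
lost = conditional_append(log, table, "k", "v2", expected_offset=None)  # None
won = conditional_append(log, table, "k", "v3", expected_offset=0)      # 1
```

As Ewen notes, this is a CAS on the record's offset rather than on its
value, so applications must track key, value, and offset together.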
On Sat, Jun 13, 2015 at 12:07 PM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
Jay - I think you need broker support if you want CAS to work with
compacted topics. With the approach you described you can't turn on
compaction since that would make it last-writer-wins, and using any
non-infinite retention policy would require some external process to
monitor keys that might
I do not think that Kafka fits here. You would be better off using another
storage system for your events, and using Kafka to propagate the events to
your views/queries.
On Sat, Jun 13, 2015 at 21:36, Ewen Cheslack-Postava e...@confluent.io
wrote:
On Sat, Jun 13, 2015 at 10:47 PM, Yann Simon yann.simon...@gmail.com
wrote:
I do not think that Kafka fits here. You would be better off using another
storage system for your events, and using Kafka to propagate the events to
your views/queries.
This is also how I understood a use case of Martin Kleppmann for
Ben: your solutions seems to focus on partition-wide CAS. Have you
considered per-key CAS? That would make the feature more useful in my
opinion, as you'd greatly reduce the contention.
On Fri, Jun 12, 2015 at 6:54 AM Gwen Shapira gshap...@cloudera.com wrote:
Hi Ben,
Thanks for creating the
I have been thinking a little about this. I don't think CAS actually
requires any particular broker support. Rather the two writers just write
messages with some deterministic check-and-set criteria and all the
replicas read from the log and check these criteria before applying the
write. This
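Jay's broker-free variant can be sketched as a deterministic replay: each
record embeds the offset its writer believed was the key's latest, and
every reader applies the same accept/skip rule, so all replicas converge
without any server-side check. A rough illustration (hypothetical record
format, not a real Kafka API):

```python
def materialize(log):
    """Replay conditional writes of the form (key, value, expected_offset).
    The check is a pure function of the log prefix, so every reader
    accepts and skips exactly the same records."""
    latest = {}  # key -> offset of the last *accepted* record for that key
    state = {}
    for offset, (key, value, expected) in enumerate(log):
        if latest.get(key) == expected:
            state[key] = value
            latest[key] = offset
        # else: a failed check-and-set, skipped identically by all readers
    return state


log = [
    ("k", "v1", None),  # first writer: expects no prior record
    ("k", "v2", None),  # raced and lost: offset 0 already holds "k"
    ("k", "v3", 0),     # expects offset 0: accepted
]
state = materialize(log)  # {"k": "v3"}
```

The trade-off Daniel raises applies here too: the failed write still lands
in the log, and the producer only learns it lost by reading back.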
Gwen: Right now I'm just looking for feedback -- but yes, if folks are
interested, I do plan to do that implementation work.
Daniel: Yes, that's exactly right. I haven't thought much about
per-key... it does sound useful, but the implementation seems a bit
more involved. Want to add it to the
Hi Ben,
Thanks for creating the ticket. Having check-and-set capability will be sweet :)
Are you planning to implement this yourself? Or is it just an idea for
the community?
Gwen
On Thu, Jun 11, 2015 at 8:01 PM, Ben Kirwin b...@kirw.in wrote:
As it happens, I submitted a ticket for this feature a couple days ago:
https://issues.apache.org/jira/browse/KAFKA-2260
Couldn't find any existing proposals for similar things, but it's
certainly possible they're out there...
On the other hand, I think you can solve your particular issue by
I've been working on an application which uses Event Sourcing, and I'd like
to use Kafka as opposed to, say, a SQL database to store events. This would
allow me to easily integrate other systems by having them read off the
Kafka topics.
I do have one concern, though: the consistency of the data