This would be of value to me, as well. I'm currently not sure how to avoid
having users of ruby-kafka produce messages that exceed that limit when
using an async producer loop – I'd prefer to not allow such a message into
the buffers at all rather than having to deal with it only when there's an
offset for a message that has expired.
Do you have any experience in building a system such as this?
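To illustrate what I mean by rejecting up front, a minimal sketch in plain
Ruby (the limit, error class, and buffer shape are all invented for
illustration – this is not ruby-kafka's actual API):

```ruby
# Illustrative sketch: validate message size before it ever enters the
# async producer's buffer. The limit mirrors an assumed broker-side
# message.max.bytes setting.
MAX_MESSAGE_BYTES = 1_000_000

class MessageTooLargeError < StandardError; end

def enqueue(buffer, value)
  if value.bytesize > MAX_MESSAGE_BYTES
    raise MessageTooLargeError,
          "message is #{value.bytesize} bytes, limit is #{MAX_MESSAGE_BYTES}"
  end
  buffer << value
end

buffer = []
enqueue(buffer, "small payload")   # accepted into the buffer
begin
  enqueue(buffer, "x" * 2_000_000) # rejected before buffering
rescue MessageTooLargeError => e
  puts e.message
end
```

The caller then gets an immediate exception at produce time instead of a
delayed delivery failure deep inside the async loop.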
Daniel Schierbeck
Senior Staff Engineer @ Zendesk
Have you looked into using a relational database as the primary store, with
something like Maxwell or Bottled Water as a broadcast mechanism?
On Mon, 28 Mar 2016 at 17:28 Daniel Schierbeck <da...@zendesk.com> wrote:
I ended up abandoning the use of Kafka as a primary event store, for
several reasons. One is the partition granularity issue; another is the
lack of a way to guarantee exclusive write access, i.e. ensure that only a
single process can commit an event for an aggregate at any one time.
Since Kafka itself has replication, I'm not sure what HDFS backups would
bring – how would you recover from e.g. all Kafka nodes blowing up if you
only have an HDFS backup? Why not use MirrorMaker to replicate the cluster
to a remote DC, with a process for reversing the direction in case you
need to fail over?
> So in theory one
> could grow a single partition to terabytes-scale. But don’t take my word
> for it, as I have not tried it.
>
> Cheers, Giidox
>
> > On 09 Mar 2016, at 15:10, Daniel Schierbeck <da...@zendesk.com> wrote:
I'd also
like to know what sort of problems Kafka would pose for long-term storage –
would I need special storage nodes, or would replication be sufficient to
ensure durability?
Daniel Schierbeck
Senior Staff Engineer, Zendesk
I'm also very interested in using Kafka as a persistent, distributed commit
log – essentially the write side of a distributed database, with the read
side being an array of various query stores (Elasticsearch, Redis,
whatever) and stream processing systems.
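To make the read-side idea concrete, a minimal sketch in plain Ruby (the
Projection class and the log's shape are invented for illustration – a
real read side would consume a Kafka topic and track committed offsets):

```ruby
# Each query store keeps its own offset into the shared commit log and
# replays only the messages it hasn't seen yet to build its local view.
class Projection
  def initialize
    @offset = 0   # next log position this store will read
    @state  = {}  # materialized view: key => latest value
  end

  # Apply every unseen message, remember where we stopped, return the view.
  def catch_up(log)
    log[@offset..-1].each { |key, value| @state[key] = value }
    @offset = log.size
    @state
  end
end

log = [["user:1", "Alice"], ["user:2", "Bob"]]
search_index = Projection.new
search_index.catch_up(log)  # builds the initial view
log << ["user:1", "Alicia"]
search_index.catch_up(log)  # applies only the new message
```

Each store can fall arbitrarily far behind and catch up independently,
which is exactly what makes the retained log useful as the write side.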
The benefit of retaining data in Kafka
…using the Java clients, but many more are also using
> non-Java clients and it's great to see more clients supported across many
> languages. Even more compelling to see libraries deployed in production!
>
> -Ewen
>
>
> On Wed, Feb 3, 2016 at 7:11 AM, Daniel Schierbeck
> <da
at Zendesk, handling about
1,000 messages/s across our data centers.
Best regards,
Daniel Schierbeck
Does anyone have experience with this, or do you just let the Kafka topic
delete old messages? I'd much prefer keeping the data in Kafka forever, as
it's ideally suited for bootstrapping new systems, e.g. search indexes,
analytics, etc.
Best regards,
Daniel Schierbeck
--
…just for the partition-wide version at the moment, since it turns out
to be pretty simple to implement. Still very interested in moving
forward with this stuff, though I don't always have as much time as I
would like...
On Thu, Jul 9, 2015 at 9:39 AM, Daniel Schierbeck
daniel.schierb...@gmail.com
Would it be possible to document how to configure Kafka to never delete
messages in a topic? It took a good while to figure this out, and I see it
as an important use case for Kafka.
On Sun, Jul 12, 2015 at 3:02 PM Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
On 10. jul. 2015
Daniel Schierbeck
On 13. jul. 2015, at 15.41, Scott Thibault scott.thiba...@multiscalehn.com
wrote:
We've tried to use Kafka not as a persistent store, but as a long-term
archival store. An outstanding issue we've had with that is that the
broker holds on to an open file handle on every
On 10. jul. 2015, at 23.03, Jay Kreps j...@confluent.io wrote:
If I recall correctly, setting log.retention.ms and log.retention.bytes to
-1 disables both.
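For the archives, that would look like this in the broker configuration
(worth verifying against your broker version; note the topic-level
overrides are named retention.ms and retention.bytes):

```properties
# server.properties – per the note above, -1 disables both
# time-based and size-based retention, so messages are kept forever
log.retention.ms=-1
log.retention.bytes=-1
```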
Thanks!
On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
On 10. jul. 2015, at 15.16:
http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka-part-2/
On Fri, Jul 10, 2015 at 3:46 AM, Daniel Schierbeck
daniel.schierb...@gmail.com (mailto:daniel.schierb...@gmail.com)
wrote:
I'd like to use Kafka as a persistent store – sort of as an alternative to
HDFS. The idea is that I'd load the data into various other systems in
order to solve specific needs such as full-text search, analytics, indexing
by various attributes, etc. I'd like to keep a single source of truth,
Ben, are you still interested in working on this?
On Mon, Jun 15, 2015 at 9:49 AM Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
I like to refer to it as conditional write or conditional request,
semantically similar to HTTP's If-Match header.
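A toy illustration of those semantics in plain Ruby (everything here is
invented for illustration – Kafka has no such API today): the producer
sends the offset it last observed for the key, and the write is rejected
if the key has moved on, just like an If-Match precondition.

```ruby
# Sketch of a conditional write: an append carries the offset the writer
# last observed for the key; the log rejects the write if that key has
# been written to since (the moral equivalent of HTTP's If-Match).
class ConditionalLog
  def initialize
    @log    = []  # append-only list of [key, value] messages
    @latest = {}  # key => offset of that key's latest message
  end

  # Returns the new offset on success, or nil if the precondition fails.
  def append(key, value, expected_offset)
    return nil unless @latest[key] == expected_offset
    @log << [key, value]
    @latest[key] = @log.size - 1
  end
end

log = ConditionalLog.new
first = log.append("booking-42", "reserved", nil) # no prior message => ok
log.append("booking-42", "confirmed", first)      # offset matches => ok
log.append("booking-42", "cancelled", first)      # stale offset => nil
```

The losing writer learns immediately that its view of the key was stale
and can re-read and retry – which is the booking scenario below.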
Ben: I'm adding a comment about per-key
It is not quite the same as doing a normal CAS.
On Sat, Jun 13, 2015 at 12:07 PM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
But wouldn't the key-offset table be enough to accept or reject a
write?
I'm not familiar with the exact implementation of Kafka, so I may be
wrong.
…be responsible for finalizing one booking, and notifying the other
client that their request had failed. (In-browser or by email.)
On Wed, Jun 10, 2015 at 5:04 AM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
I've been working on an application which uses Event Sourcing
-Ewen
On Sat, Jun 13, 2015 at 9:59 AM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
@Jay:
Regarding your first proposal: wouldn't that mean that a producer
wouldn't know whether a write succeeded? In the case of event sourcing,
a failed CAS may require re-validating the input
harder for the OS, etc.).
On Sat, Jun 13, 2015 at 11:33 AM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
Ewen: would single-key CAS necessitate random reads? My idea was to have
the broker maintain an in-memory table that could be rebuilt from the log
or a snapshot.
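A sketch of that rebuild in plain Ruby (shapes invented for
illustration): the table is just key => offset of that key's latest
message, so one sequential scan of the log – or a snapshot plus the tail
– reproduces it without any random reads.

```ruby
# Rebuild the per-key offset table by scanning the log once. A snapshot
# would just be a serialized copy of this hash plus the log position it
# was taken at, so recovery replays only the tail.
def rebuild_table(log)
  table = {}
  log.each_with_index { |(key, _value), offset| table[key] = offset }
  table
end

log = [["a", "v1"], ["b", "v1"], ["a", "v2"]]
rebuild_table(log)  # {"a" => 2, "b" => 1}
```

With the table in memory, accepting or rejecting a conditional write is
a single hash lookup on the broker.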
I've been working on an application which uses Event Sourcing, and I'd like
to use Kafka as opposed to, say, a SQL database to store events. This would
allow me to easily integrate other systems by having them read off the
Kafka topics.
I do have one concern, though: the consistency of the data