Could one use ZFS or BTRFS snapshot functionality for this?
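(For context, a sketch of what that could look like. A ZFS snapshot is atomic and point-in-time, so a restored copy would appear to Kafka like a crash, and the broker would run its normal log recovery on startup. Pool/dataset names and paths here are hypothetical, not from this thread.)

```shell
# Assumed layout: the broker's log.dirs lives on ZFS dataset tank/kafka-logs.

# Take a timestamped, atomic snapshot of the log directory
zfs snapshot tank/kafka-logs@backup-$(date +%Y%m%d-%H%M%S)

# Ship a snapshot to another host for safekeeping
# (incremental sends with -i are also possible)
zfs send tank/kafka-logs@backup-20150120-0200 | \
  ssh backuphost zfs receive backup/kafka-logs

# List the snapshots kept locally
zfs list -t snapshot -r tank/kafka-logs
```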

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Tue, Jan 20, 2015 at 1:39 AM, Gwen Shapira <gshap...@cloudera.com> wrote:

> Hi,
>
> As a former DBA, I hear you on backups :)
>
> Technically, you could copy all log.dir files somewhere safe
> occasionally. I'm pretty sure we don't guarantee the consistency or
> safety of this copy. You could find yourself with a corrupt "backup"
> by copying files that are either in the middle of getting written or
> are inconsistent in time with other files. Kafka doesn't have a good
> way to stop writing to files for long enough to allow copying them
> safely.
>
> Unlike traditional backups, there's no transaction log that can be
> rolled to move a disk copy forward in time (or that can be used when
> data files are locked for backups). In Kafka, the files *are* the
> transaction log and you roll back in time by deciding which offsets to
> read.
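(A sketch of that "roll back by choosing offsets" idea, using flags from newer Kafka console tools; topic name, partition, and offset are placeholders:)

```shell
# Replay partition 0 of topic "events", starting from offset 12345.
# Because the log segments *are* the history, "restoring" is just
# re-reading from an earlier offset.
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic events \
  --partition 0 \
  --offset 12345
```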
>
> DR is possible using MirrorMaker though, since the only thing better
> than replication is... more replication!
> So you could create a non-corrupt file copy by stopping a MirrorMaker
> replica occasionally and copying all files somewhere safe.
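(A sketch of that "quiesce a mirror, then copy" approach, using the classic MirrorMaker CLI; config file names and paths are placeholders:)

```shell
# 1. Keep a second cluster in sync with MirrorMaker
kafka-mirror-maker.sh \
  --consumer.config source-cluster.properties \
  --producer.config mirror-cluster.properties \
  --whitelist '.*'

# 2. Periodically, on a mirror broker: stop it cleanly, copy its
#    data dirs while nothing is writing, then restart it
kafka-server-stop.sh
rsync -a /var/kafka-logs/ /backups/kafka-$(date +%F)/
kafka-server-start.sh -daemon config/server.properties
```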
>
> If it helps you sleep better at night :)
> Typically having Kafka nodes on multiple racks and a DR site in another
> data center is considered pretty safe.
>
> Gwen
>
> On Wed, Jan 14, 2015 at 9:22 AM, Gene Robichaux
> <gene.robich...@match.com> wrote:
> > Does anyone have any thoughts on Kafka broker backups?
> >
> > All of our topics have a replication factor of 3. However, I just want to
> > know if anyone does anything about traditional backups. My background is
> > Ops DBA, so I have a special place in my heart for backups.
> >
> >
> > Gene Robichaux
> > Manager, Database Operations
> > Match.com
> > 8300 Douglas Avenue | Suite 800 | Dallas, TX  75225
> >
>