On 21/12/2018 03.02, Gregory Farnum wrote:
> RBD snapshots are indeed crash-consistent. :)
> -Greg
Thanks for the confirmation! May I suggest putting this little nugget in
the docs somewhere? This might help clarify things for others :)
--
Hector Martin (hec...@marcansoft.com)
Public Key:
On Tue, Dec 18, 2018 at 1:11 AM Hector Martin wrote:
> Hi list,
>
> I'm running libvirt qemu guests on RBD, and currently taking backups by
> issuing a domfsfreeze, taking a snapshot, and then issuing a domfsthaw.
> This seems to be a common approach.
>
> This is safe, but it's impactful: the
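The freeze/snapshot/thaw cycle described above can be sketched in shell,
assuming the qemu guest agent is running inside the guest. The domain and
image names (myvm, rbd/myvm-disk) are hypothetical examples, and the
VIRSH/RBD overrides are only there to make the sketch easy to dry-run:

```shell
# Crash-consistent-plus backup: quiesce guest filesystems, snapshot
# the RBD image, then thaw. Names below are hypothetical examples.
VIRSH=${VIRSH:-virsh}
RBD=${RBD:-rbd}

backup_vm() {
  domain=$1
  image=$2
  snap="backup-$(date +%Y%m%d)"
  # Quiesce guest filesystems via the qemu guest agent.
  "$VIRSH" domfsfreeze "$domain" || return 1
  "$RBD" snap create "$image@$snap"
  rc=$?
  # Thaw even if the snapshot failed, so the guest is not left frozen.
  "$VIRSH" domfsthaw "$domain"
  return $rc
}
```

The important detail is that the thaw runs regardless of whether the
snapshot succeeded, so a failed `rbd snap create` never leaves the guest
with frozen filesystems.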
On 18/12/2018 20:29, Oliver Freyermuth wrote:
Potentially, if the guest agent is granted arbitrary command execution, you
could check (there might be a better interface than parsing meminfo...):
grep -i dirty /proc/meminfo
Dirty: 19476 kB
You could guess from that
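That dirty-page check can be wrapped in a small helper (Linux-only, since it
parses /proc/meminfo; the MEMINFO override is an addition for testing, not
part of the original suggestion):

```shell
# Print the guest's dirty (not yet written back) page cache size in kB,
# as reported by /proc/meminfo. Linux-only; MEMINFO is overridable
# purely so the helper can be tested against a canned file.
dirty_kb() {
  awk '/^Dirty:/ {print $2}' "${MEMINFO:-/proc/meminfo}"
}
```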
On 18.12.2018 at 11:48, Hector Martin wrote:
On 18/12/2018 18:28, Oliver Freyermuth wrote:
We have yet to observe these hangs, we are running this with ~5 VMs with ~10 disks for
about half a year now with daily snapshots. But all of these VMs have very
"low" I/O,
since we put anything I/O intensive on bare metal (but with automated
For what it's worth, we have been using snapshots on a daily basis for a
couple of thousand RBD volumes for quite some time now.
So far, so good: we have not caught any issues.
On 12/18/2018 10:28 AM, Oliver Freyermuth wrote:
Dear Hector,
we are using the very same approach on CentOS 7 (freeze + thaw), but preceded
by an fstrim. With virtio-scsi, using fstrim propagates the discards from
within the VM to Ceph RBD (if qemu is configured accordingly),
and a lot of space is saved.
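For reference, the discard propagation described above corresponds to a
libvirt disk definition along these lines (a sketch only; the pool/image
name is a hypothetical example, and the qemu version must support discard
on virtio-scsi):

```xml
<!-- Hypothetical example: an RBD-backed disk on a virtio-scsi bus with
     discard='unmap', so fstrim inside the guest reaches the RBD image. -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='rbd/myvm-disk'/>
  <target dev='sda' bus='scsi'/>
</disk>
```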
We have yet to observe these hangs,