Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Sage Weil
On Wed, 4 Feb 2015, Cristian Falcas wrote:
 Hi,
 
 We have an openstack installation that uses ceph as the storage backend.
 
 We use mainly snapshot and boot from snapshot from an original
 instance with a 200gb disk. Something like this:
 1. import original image
 2. make volume from image (those 2 steps were done only once, when we
 installed openstack)
 3. boot main instance from volume, update the db inside
 4. snapshot the instance
 5. make volumes from previous snapshot
 6. boot test instances from those volumes (the last 3 steps take less then 
 30s)
 
 
 Currently the fs is btrfs and we are in love with the solution: the
 snapshots are instant and boot from snapshot is also instant. It cut
 our tests time (compared with the vmware solution + netap storage)
 from 12h to 2h. With vmware we were spending 10h with what now is done
 in a few seconds.

That's great to hear!

 I was wondering if the fs matters in this case, because we are a
 little worry about using btrfs and reading all the horror story here
 and on btrfs mailing list.
 
 Is the snapshoting performed by ceph or by the fs? Can we switch to
 xfs and have the same capabilities: instant snapshot + instant boot
 from snapshot?

The feature set and capabilities are identical.  The difference is that on 
btrfs we are letting btrfs do the efficient copy-on-write cloning when we 
touch a snapshotted object while with XFS we literally copy the object 
file (usually 4MB) on the first write.  You will likely see some penalty 
in the boot-from-clone scenario, although I have no idea how significant 
it will be.  On the other hand, we've also seen that btrfs fragmentation 
over time can lead to poor performance relative to XFS.

So, no clear answer, really.  Sorry!

If you do stick with btrfs, please report back here and share what you see 
as far as stability (along with the kernel version(s) you are using; most 
of the XFS over btrfs usage is based on FUD (in the literal sense) and I 
don't think we have seen much in the way of real user reports here in a 
while.

Thanks!
sage

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
Thank you for the clarifications.

We will try to report back, but I'm not sure our use case is relevant.
We are trying to use every dirty trick to speed up the VMs.

We have only 1 replica, and 2 pools.

One pool with journal on disk, where the original instance exists (we
want to keep this one safe).

The second pool is for the tests machines and has the journal in ram,
so this part is very volatile. We don't really care, because if the
worst happens and we have a power loss we just redo the pool and start
new instances. Journal in ram did wonders for us in terms of
read/write speed.



On Wed, Feb 4, 2015 at 11:22 PM, Sage Weil s...@newdream.net wrote:
 On Wed, 4 Feb 2015, Cristian Falcas wrote:
 Hi,

 We have an openstack installation that uses ceph as the storage backend.

 We use mainly snapshot and boot from snapshot from an original
 instance with a 200gb disk. Something like this:
 1. import original image
 2. make volume from image (those 2 steps were done only once, when we
 installed openstack)
 3. boot main instance from volume, update the db inside
 4. snapshot the instance
 5. make volumes from previous snapshot
 6. boot test instances from those volumes (the last 3 steps take less then 
 30s)


 Currently the fs is btrfs and we are in love with the solution: the
 snapshots are instant and boot from snapshot is also instant. It cut
 our tests time (compared with the vmware solution + netap storage)
 from 12h to 2h. With vmware we were spending 10h with what now is done
 in a few seconds.

 That's great to hear!

 I was wondering if the fs matters in this case, because we are a
 little worry about using btrfs and reading all the horror story here
 and on btrfs mailing list.

 Is the snapshoting performed by ceph or by the fs? Can we switch to
 xfs and have the same capabilities: instant snapshot + instant boot
 from snapshot?

 The feature set and capabilities are identical.  The difference is that on
 btrfs we are letting btrfs do the efficient copy-on-write cloning when we
 touch a snapshotted object while with XFS we literally copy the object
 file (usually 4MB) on the first write.  You will likely see some penalty
 in the boot-from-clone scenario, although I have no idea how significant
 it will be.  On the other hand, we've also seen that btrfs fragmentation
 over time can lead to poor performance relative to XFS.

 So, no clear answer, really.  Sorry!

 If you do stick with btrfs, please report back here and share what you see
 as far as stability (along with the kernel version(s) you are using; most
 of the XFS over btrfs usage is based on FUD (in the literal sense) and I
 don't think we have seen much in the way of real user reports here in a
 while.

 Thanks!
 sage

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
Hi,

We have an openstack installation that uses ceph as the storage backend.

We use mainly snapshot and boot from snapshot from an original
instance with a 200gb disk. Something like this:
1. import original image
2. make volume from image (those 2 steps were done only once, when we
installed openstack)
3. boot main instance from volume, update the db inside
4. snapshot the instance
5. make volumes from previous snapshot
6. boot test instances from those volumes (the last 3 steps take less then 30s)


Currently the fs is btrfs and we are in love with the solution: the
snapshots are instant and boot from snapshot is also instant. It cut
our tests time (compared with the vmware solution + netap storage)
from 12h to 2h. With vmware we were spending 10h with what now is done
in a few seconds.

I was wondering if the fs matters in this case, because we are a
little worry about using btrfs and reading all the horror story here
and on btrfs mailing list.

Is the snapshoting performed by ceph or by the fs? Can we switch to
xfs and have the same capabilities: instant snapshot + instant boot
from snapshot?

Best regards,
Cristian Falcas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Daniel Schwager
Hi Cristian,


 We will try to report back, but I'm not sure our use case is relevant.
 We are trying to use every dirty trick to speed up the VMs.

we have the same use-case.

 The second pool is for the tests machines and has the journal in ram,
 so this part is very volatile. We don't really care, because if the
 worst happens and we have a power loss we just redo the pool and start
 new instances. Journal in ram did wonders for us in terms of
 read/write speed.

How do you handle a reboot of a node managing your pool having the journals in 
RAM?
All the mon's knows about the volatile pool - do you have remove  recreate the
pool automatically after rebooting this node?

Did you tried to enable rdb-caching? Is there a write-performance benefit using
journal @RAM instead of enable rbd-caching on client (openstack) side ?
I thought with rbd-caching the write performance should be fast enough.

regards
Danny


smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
We want to use this script as a service for start/stop (but it wasn't
tested yet):

#!/bin/bash
# chkconfig: - 50 90
# description: make a journal for osd.0 in ram
start () {
 -f /dev/shm/osd.0.journal || ceph-osd -i 0 --mkjournal
}
stop ()  {
 service ceph stop osd.0  ceph-osd -i osd.0 --flush-journal  rm -f
/dev/shm/osd.0.journal
}
case \$1 in
  start) start;;
  stop)  stop;;
esac

Also, we didn't see any noticeable  improvements with rbd-caching, but
we didn't performed any tests to measure it, just how we feel it.



On Thu, Feb 5, 2015 at 12:09 AM, Daniel Schwager
daniel.schwa...@dtnet.de wrote:
 Hi Cristian,


 We will try to report back, but I'm not sure our use case is relevant.
 We are trying to use every dirty trick to speed up the VMs.

 we have the same use-case.

 The second pool is for the tests machines and has the journal in ram,
 so this part is very volatile. We don't really care, because if the
 worst happens and we have a power loss we just redo the pool and start
 new instances. Journal in ram did wonders for us in terms of
 read/write speed.

 How do you handle a reboot of a node managing your pool having the journals 
 in RAM?
 All the mon's knows about the volatile pool - do you have remove  recreate 
 the
 pool automatically after rebooting this node?

 Did you tried to enable rdb-caching? Is there a write-performance benefit 
 using
 journal @RAM instead of enable rbd-caching on client (openstack) side ?
 I thought with rbd-caching the write performance should be fast enough.

 regards
 Danny
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Lindsay Mathieson
On 5 February 2015 at 07:22, Sage Weil s...@newdream.net wrote:

  Is the snapshoting performed by ceph or by the fs? Can we switch to
  xfs and have the same capabilities: instant snapshot + instant boot
  from snapshot?

 The feature set and capabilities are identical.  The difference is that on
 btrfs we are letting btrfs do the efficient copy-on-write cloning when we
 touch a snapshotted object while with XFS we literally copy the object
 file (usually 4MB) on the first write.



Are ceph snapshots really that much faster when using btrfs underneath? one
of the problem we have with ceph is that snapshot take/restore is insanely
slow, tens of minutes - but we are using xfs.


-- 
Lindsay
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com