[pve-devel] SSD only Ceph cluster

2014-08-30 Thread Martin Maurer
Hi, I am looking for the most suitable SSD drives for a Proxmox VE Ceph cluster in our test lab. The current plan is to use 4 x 512 GB SSDs per server for OSDs (12 x 512 GB drives, total net capacity of 2 TB, with a replication of 3). Based on features (power loss protection) and price, the
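The capacity figure quoted above can be verified with quick arithmetic (a sketch; drive count and replica count are taken from the message, the helper names are mine):

```python
# Net capacity of a replicated Ceph pool: raw capacity divided by replica count.
drives = 12       # 3 servers x 4 OSDs, as planned in the message above
drive_gb = 512
replicas = 3

raw_gb = drives * drive_gb        # total raw capacity across all OSDs
net_gb = raw_gb // replicas       # usable capacity with 3 replicas
print(raw_gb, net_gb)             # 6144 raw GB, 2048 GB (~2 TB) net
```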

Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 + Martin Maurer mar...@proxmox.com wrote: Hi, I am looking for the most suitable SSD drives for a Proxmox VE Ceph cluster in our test lab. The current plan is to use 4 x 512 GB SSDs per server for OSDs (12 x 512 GB drives, total net capacity of 2 TB, with

Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 + Martin Maurer mar...@proxmox.com wrote: I am looking for the most suitable SSD drives for a Proxmox VE Ceph cluster in our test lab. The current plan is to use 4 x 512 GB SSDs per server for OSDs (12 x 512 GB drives, total net capacity of 2 TB, with a

Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
When comparing specs for SSDs (or HDDs, for that matter) to be used in remote storage, what do you consider most important: max read/write MB/s, or read/write IOPS? Personally, I think that when we are looking at SSDs, max read/write MB/s is irrelevant, since the network will always
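The point about sequential throughput being network-bound can be illustrated numerically (a sketch; the 10 GbE link speed and the per-SSD sequential rate are assumed figures, not taken from the thread):

```python
# Why sequential MB/s stops mattering behind a network: a single 10 GbE
# link carries about 1250 MB/s, so a handful of SSDs already saturates it.
link_mb_s = 10_000 / 8      # 10 Gbit/s expressed as MB/s (decimal)
ssd_seq_mb_s = 500          # assumed sequential write rate of one SATA SSD

ssds_to_saturate = link_mb_s / ssd_seq_mb_s
print(ssds_to_saturate)     # 2.5 -- three SSDs per node already exceed the link
```

Under these assumptions, random IOPS (and latency) become the figures worth comparing, which is the position taken in the message above.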

Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Alexandre DERUMIER
Hi, currently with Firefly you can expect around 3000-5000 IOPS per OSD, so any good SSD should be OK. A recent discussion on the ceph mailing list said that they have removed a lot of locks, and the bottleneck is gone in the current master git branch (with a 5x performance improvement).
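Scaling the per-OSD figure up to the planned cluster gives a rough aggregate estimate (a sketch under stated assumptions: 12 OSDs as planned earlier in the thread, and each replicated client write costing roughly replica-count back-end writes):

```python
# Rough cluster-wide IOPS estimate from the 3000-5000 per-OSD figure above.
osds = 12
iops_per_osd = (3000, 5000)   # low/high range quoted for Firefly
replicas = 3                  # each client write becomes ~3 back-end writes

read_iops = (osds * iops_per_osd[0], osds * iops_per_osd[1])
write_iops = (read_iops[0] // replicas, read_iops[1] // replicas)
print(read_iops)   # (36000, 60000) aggregate reads
print(write_iops)  # (12000, 20000) aggregate client writes
```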

Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
The Crucial MX100 provides 90k/85k IOPS. Those numbers are from the specs, so I am not sure if you can get that in reality? No, I think you can reach 90k maybe for a few seconds while the drives are empty ;) Check the graph here: http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3

Re: [pve-devel] [PATCH] add discard option to qemu drive

2014-08-30 Thread Kamil TrzciƄski
For me, discard doesn't work with virtio disks. It did work with scsi disks on the virtio controller. Alexandre, how does it work for you? On Wed, Aug 20, 2014 at 12:25 PM, Dietmar Maurer diet...@proxmox.com wrote: applied, thanks!

Re: [pve-devel] [PATCH] add discard option to qemu drive

2014-08-30 Thread Alexandre DERUMIER
It's working with virtio-scsi, but you also need discard support in your guest filesystem, and also for your storage (it should work with ceph, iscsi block storage, and local raw/qcow2 on top of a host filesystem with discard). It doesn't work with nfs, or lvm (local or on top of iscsi).
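The compatibility notes in this reply can be collected into a small lookup table (a sketch; the storage-type labels and the helper function are mine, the True/False values follow the message above):

```python
# Discard/TRIM pass-through per storage type, per the reply above.
DISCARD_SUPPORT = {
    "ceph-rbd": True,
    "iscsi-block": True,
    "local-raw": True,     # raw image on a host filesystem with discard support
    "local-qcow2": True,   # same condition as raw
    "nfs": False,
    "lvm": False,          # local LVM, or LVM on top of iscsi
}

def supports_discard(storage: str) -> bool:
    """True if the storage type passes discard through; False if unsupported or unknown."""
    return DISCARD_SUPPORT.get(storage, False)

print(supports_discard("ceph-rbd"), supports_discard("nfs"))
```

Note that even on a supported storage type, the guest filesystem must also issue discards (e.g. be mounted with discard or trimmed periodically) for the option to have any effect, as the reply points out.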