Hi,
I am looking for the most suitable SSD drives for a Proxmox VE Ceph cluster in
our test-lab. The current plan is to use 4 x 512 GB SSDs per server for OSDs. (12
* 512 GB drives, total net capacity of 2 TB, with a replication of 3.)
Based on features (power loss protection) and price, the
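A quick back-of-envelope check of that capacity figure, in plain Python (only the
drive count, drive size and replication factor quoted above):

    # back-of-envelope capacity check, figures from the plan above
    servers = 3
    ssds_per_server = 4
    drive_gb = 512
    replication = 3

    raw_gb = servers * ssds_per_server * drive_gb   # 12 drives -> 6144 GB raw
    net_gb = raw_gb / replication                    # ~2048 GB usable with size=3
    print(raw_gb, net_gb)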
When comparing specs for SSDs (or HDDs, for that matter) to be used in remote
storage, what do you consider most important? Max read/write MB/s, or read/write
IOPS?
Personally, I should think that when we are looking at SSDs, max read/write MB/s is
irrelevant, since the network will always be the bottleneck.
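To put rough numbers on that argument, a small sketch; the 10 GbE link speed and
~500 MB/s per SSD are assumptions for illustration, not figures from this thread:

    # why sequential MB/s rarely matters for remote storage (assumed figures)
    net_mb_s = 10 * 1000 / 8      # ~1250 MB/s on an assumed 10 GbE link
    ssd_mb_s = 500                # assumed sequential write speed of one SATA SSD
    ssds_per_node = 4

    print(ssds_per_node * ssd_mb_s)   # ~2000 MB/s of local SSD bandwidth per node
    print(net_mb_s)                   # ~1250 MB/s network ceiling per node
    # the SSDs already outrun the link, so small-block IOPS and latency
    # are the numbers worth comparing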
Hi,
Currently, with Firefly, you can expect around 3000-5000 IOPS per OSD,
so any good SSD should be OK.
A recent discussion on the Ceph mailing list said that they have removed a lot of
locks and bottlenecks in the current master git (with around 5x the performance).
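Very rough math with that per-OSD figure (a crude model; it ignores journal writes,
CPU and network limits):

    iops_per_osd = 4000       # midpoint of the 3000-5000 range above
    osds = 12
    replication = 3

    read_iops = iops_per_osd * osds                   # ~48k reads cluster-wide
    write_iops = iops_per_osd * osds // replication   # ~16k writes, each hits 3 OSDs
    print(read_iops, write_iops)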
The Crucial MX100 provides 90k/85k IOPS. Those numbers are from the spec sheet,
so I am not sure if you can get that in reality?
No, I think you can maybe reach 90K for a few seconds while they are empty ;)
Check the graphs here:
http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3
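If you want to see the drop from fresh to steady state yourself, one way is a long
4k random-write run, e.g. wrapped in Python around fio (the device path is a
placeholder, and the run will overwrite whatever is on it):

    import subprocess

    # sustained 4k random writes; expect spec-sheet IOPS only for the first seconds
    subprocess.run([
        "fio", "--name=steady-state", "--filename=/dev/sdX",   # placeholder device!
        "--direct=1", "--ioengine=libaio", "--rw=randwrite", "--bs=4k",
        "--iodepth=32", "--numjobs=1", "--time_based", "--runtime=600",
        "--group_reporting",
    ], check=True)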
For me, discard doesn't work with virtio disks. I got it working with SCSI disks on
the virtio controller. Alexandre, how does it work for you?
It's working with virtio-scsi,
but you also need discard support in your guest filesystem,
and also in your storage
(it should work with Ceph, iSCSI block storage, and local raw/qcow2 on top of a host
filesystem that supports discard).
It doesn't work with NFS or LVM (local, or on top of iSCSI).
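For reference, a minimal sketch of how this is usually wired up on the Proxmox side,
assuming a reasonably current qemu-server; the storage name and vmid are placeholders,
not from this thread:

    # /etc/pve/qemu-server/<vmid>.conf (illustrative)
    scsihw: virtio-scsi-pci
    scsi0: <storage>:vm-<vmid>-disk-1,discard=on

Inside the guest the filesystem still has to issue the TRIMs, e.g. mount ext4/xfs with
the discard option or run fstrim from time to time.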