[pve-devel] SSD only Ceph cluster

2014-08-30 Thread Martin Maurer
Hi,

I am looking for the most suitable SSD drives for a Proxmox VE Ceph cluster in 
our test-lab. The current plan is to use 4 x 512 GB SSDs per server for OSDs (12 
x 512 GB drives, total net capacity of 2 TB, with a replication of 3).
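
For reference, the usable-capacity figure follows directly from the replication
factor; a minimal sketch of the arithmetic (assuming 3 servers with 4 OSDs each,
as implied by the 12-drive total):

    # Rough capacity check for the proposed layout (assumed: 3 servers x 4 OSDs).
    servers = 3
    ssds_per_server = 4
    ssd_size_gb = 512
    replication = 3

    raw_gb = servers * ssds_per_server * ssd_size_gb   # 6144 GB raw
    usable_gb = raw_gb / replication                    # ~2048 GB usable
    print(f"raw: {raw_gb} GB, usable with {replication}x replication: ~{usable_gb:.0f} GB")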

Based on features (power loss protection) and price, the Crucial MX100 looks 
like a good candidate for this setup.

Crucial MX100
- http://www.thessdreview.com/our-reviews/crucial-mx100-ssd-review-256-512-gb/
- http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review

I will connect the SSDs to LSI SAS 9207-8i controllers, using servers and a 
network as described here:
http://pve.proxmox.com/wiki/Ceph_Server#Recommended_hardware

Any other recommendations for SSDs, or hints on how to get the best out of this setup?

Thanks,

Martin

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 +
Martin Maurer mar...@proxmox.com wrote:

 Hi,
 
 I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster 
 in our test-lab. The current plan is to use 4 x 512 GB SSD per server for 
 OSD. (12 * 512 GB drives, total net capacity of 2 TB , with a replication of 
 3.)
 
 Based on features (power loss protection)  and price, the Crucial MX100 looks 
 like a good candidate for this setup.
 
I have very good experience with Intel and Corsair SSDs, so you could
also consider these for your setup; they are in the same price range:
- Intel SSD 530 480 GB
(http://www.guru3d.com/articles-pages/intel-530-ssd-benchmark-review-test,1.html)
- Corsair Force LX SSD 512 GB
(http://hexus.net/tech/reviews/storage/71957-corsair-force-lx-ssd-512gb/)

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
This supersedes all previous notices.




Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 +
Martin Maurer mar...@proxmox.com wrote:

 I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster 
 in our test-lab. The current plan is to use 4 x 512 GB SSD per server for 
 OSD. (12 * 512 GB drives, total net capacity of 2 TB , with a replication of 
 3.)
 
When comparing specs for SSDs (or HDDs, for that matter) to be used in
remote storage, what do you consider most important: max read/write MB/s,
or read/write IOPS?

Personally, I would think that when we are looking at SSDs, max read/write
MB/s is irrelevant, since the network will always be the bottleneck (AFAIK
no network is capable of providing throughput above 400 MB/s), so I would
compare read/write IOPS instead.
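
To put rough numbers on that argument: taking the ~400 MB/s figure above as the
effective network ceiling, the wire could still carry on the order of 100k small
(4 KiB) I/Os per second, which is more than a single consumer SSD is rated for,
so the drive's random IOPS (and latency) is the number worth comparing. A minimal
sketch, where the 4 KiB request size is an assumption:

    # How many 4 KiB I/Os a ~400 MB/s network ceiling could carry in theory,
    # ignoring protocol overhead and latency.
    net_mb_per_s = 400          # effective network ceiling quoted above
    request_bytes = 4 * 1024    # assumed small-I/O request size

    wire_iops_ceiling = net_mb_per_s * 1_000_000 / request_bytes
    print(f"sequential ceiling: {net_mb_per_s} MB/s")
    print(f"4 KiB IOPS ceiling: ~{wire_iops_ceiling:,.0f} ops/s")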

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Sin has many tools, but a lie is the handle which fits them all.




Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
 When comparing specs for SSDs (or HDDs, for that matter) to be used in
 remote storage, what do you consider most important: max read/write MB/s,
 or read/write IOPS?
 
 Personally, I would think that when we are looking at SSDs, max read/write
 MB/s is irrelevant, since the network will always be the bottleneck (AFAIK
 no network is capable of providing throughput above 400 MB/s), so I would
 compare read/write IOPS instead.

The Crucial MX100 is rated at 90k/85k read/write IOPS. Those numbers are 
from the specs, so I am not sure if you can get that in reality?
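
One way to sanity-check the spec numbers is a sustained small-block random-write
test directly on the drive; fio (rw=randwrite, bs=4k, direct=1) is the usual tool,
but a crude sketch of the idea looks like this (assumptions: Linux, a throw-away
test file on the SSD under test, O_DIRECT to bypass the page cache, queue depth 1):

    # Crude steady-state 4 KiB random-write probe; fio does this far more rigorously.
    import mmap, os, random, time

    path = "/mnt/testssd/iops_probe.bin"   # hypothetical path on the SSD under test
    file_size = 1 * 1024**3                # 1 GiB test area
    block = 4096
    duration = 30                          # seconds; longer runs expose steady-state behaviour

    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_DIRECT, 0o600)
    os.posix_fallocate(fd, 0, file_size)   # preallocate so O_DIRECT writes hit real blocks

    buf = mmap.mmap(-1, block)             # page-aligned buffer, as O_DIRECT requires
    buf.write(os.urandom(block))

    ops, deadline = 0, time.time() + duration
    while time.time() < deadline:
        offset = random.randrange(file_size // block) * block
        os.pwrite(fd, buf, offset)         # aligned 4 KiB write at an aligned offset
        ops += 1

    os.close(fd)
    print(f"~{ops / duration:.0f} random 4 KiB write IOPS at queue depth 1")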




Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Alexandre DERUMIER
Hi,

Currently with Firefly you can expect around 3000-5000 IOPS per OSD,

so any good SSD should be OK.

A recent discussion on the ceph mailing list says that they have removed a lot of
locks and bottlenecks in the current master git (with around 5x the performance):
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html

Check out that discussion; there are a lot of config samples for getting the most 
performance out of it.


I'll build a full SSD cluster next year; I haven't chosen an SSD model yet
(maybe Intel S3500 800 GB, or newer models, with replication x2).
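
Combining the per-OSD figure above with the drive count from the original plan
gives a rough ceiling for the whole cluster; a back-of-the-envelope sketch (the
3000-5000 range is taken from above, the rest are assumptions about the planned
layout, and journal overhead and network latency are ignored):

    # Very rough aggregate IOPS estimate for the planned 12-OSD, 3x-replicated setup.
    osds = 12
    per_osd_iops = (3000, 5000)   # Firefly-era per-OSD range quoted above
    replication = 3               # every client write is stored 3 times

    read_iops = tuple(osds * v for v in per_osd_iops)
    write_iops = tuple(osds * v // replication for v in per_osd_iops)
    print(f"aggregate read IOPS : ~{read_iops[0]:,} - {read_iops[1]:,}")
    print(f"client write IOPS   : ~{write_iops[0]:,} - {write_iops[1]:,} (after {replication}x replication)")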



- Original message - 

From: Martin Maurer mar...@proxmox.com 
To: pve-devel@pve.proxmox.com 
Sent: Saturday, 30 August 2014 12:59:17 
Subject: [pve-devel] SSD only Ceph cluster 

Hi, 

I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster in 
our test-lab. The current plan is to use 4 x 512 GB SSD per server for OSD. (12 
* 512 GB drives, total net capacity of 2 TB , with a replication of 3.) 

Based on features (power loss protection) and price, the Crucial MX100 looks 
like a good candidate for this setup. 

Crucial MX100 
- http://www.thessdreview.com/our-reviews/crucial-mx100-ssd-review-256-512-gb/ 
- http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review 

I will connect the SSDs on LSI SAS 9207-8i controllers, using servers and 
network like described here: 
http://pve.proxmox.com/wiki/Ceph_Server#Recommended_hardware 

Any other recommendations for SSDs or hints to get the best out for this? 

Thanks, 

Martin 



Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
 The Crucial MX100 provides 90k/85k IOPS. Those numbers are from specs,
 so I am not sure if you can get that in reality?
 
 No, I think you can reach 90K maybe for some seconds when they are empty ;)
 
 check graph here:
 http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3
 
 It's more around 7000iops

So this is a perfect fit, considering the current ceph limitations?


Re: [pve-devel] [PATCH] add discard option to qemu drive

2014-08-30 Thread Kamil Trzciński
For me, discard doesn't work with virtio disks. I did get it working with
SCSI disks and the virtio controller. Alexandre, how does it work for you?


On Wed, Aug 20, 2014 at 12:25 PM, Dietmar Maurer diet...@proxmox.com wrote:
 applied, thanks!





-- 
Kamil Trzciński

ayu...@ayufan.eu
www.ayufan.eu


Re: [pve-devel] [PATCH] add discard option to qemu drive

2014-08-30 Thread Alexandre DERUMIER
It's working with virtio-scsi,

but you also need discard support in your guest filesystem,

and also in your storage

(it should work with ceph, iscsi block storage, and local raw/qcow2 on top of a 
host filesystem with discard support).
It doesn't work with nfs, or with lvm (local or on top of iscsi).
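
A quick way to see whether the whole chain (storage -> qemu drive -> guest disk)
actually exposes discard is to check what the guest block device advertises; a
small sketch to run inside the guest (the device name is only an example, and the
filesystem still has to issue discards via mount -o discard or periodic fstrim):

    # Inside the guest: does the virtual disk advertise discard/TRIM support?
    dev = "sda"   # example device name; adjust to the attached disk

    base = f"/sys/block/{dev}/queue"
    with open(f"{base}/discard_granularity") as f:
        granularity = int(f.read())
    with open(f"{base}/discard_max_bytes") as f:
        max_bytes = int(f.read())

    if max_bytes > 0:
        print(f"{dev}: discard supported (granularity {granularity} B, max {max_bytes} B per request)")
    else:
        print(f"{dev}: discard NOT supported on this path")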




- Original message - 

From: Kamil Trzciński ayu...@ayufan.eu 
To: Dietmar Maurer diet...@proxmox.com 
Cc: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Saturday, 30 August 2014 17:35:54 
Subject: Re: [pve-devel] [PATCH] add discard option to qemu drive 

For me discard doesn't work with virtio disks. I did with scsi and 
virtio controller. Alexandre how it works for you? 


On Wed, Aug 20, 2014 at 12:25 PM, Dietmar Maurer diet...@proxmox.com wrote: 
 applied, thanks! 
 
 



-- 
Kamil Trzciński 

ayu...@ayufan.eu 
www.ayufan.eu 