Re: [pve-devel] SSD only Ceph cluster

2014-09-01 Thread Alexandre DERUMIER
> Yes but with hacks like turn off the crush update on start etc..
> http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement
>
> looks like they will improve it.

Oh, great :) thanks for the link !


- Original Message - 

From: VELARTIS Philipp Dürhammer p.duerham...@velartis.at 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Sent: Monday, September 1, 2014 16:47:59 
Subject: AW: AW: [pve-devel] SSD only Ceph cluster 

Yes but with hacks like turn off the crush update on start etc.. 
http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement
 
looks like they will improve it. 
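
For reference, that hack is basically one ceph.conf option plus placing the
OSDs in the crush map by hand; a minimal sketch (the option name is the real
one, osd ids and bucket names are just examples):

  # /etc/ceph/ceph.conf -- stop OSDs from re-registering themselves
  # under their default host bucket on every restart
  [osd]
  osd crush update on start = false

  # then each OSD has to be placed in the crush hierarchy manually, e.g.:
  ceph osd crush set osd.12 1.0 root=ssd host=node1-ssd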

-----Original Message----- 
From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
Sent: Monday, September 1, 2014 16:43 
To: VELARTIS Philipp Dürhammer 
Cc: pve-devel@pve.proxmox.com; Dietmar Maurer 
Subject: Re: AW: [pve-devel] SSD only Ceph cluster 

> Yes :-) and the next release will fully support having different roots
> on hosts.
> For example one for ssd and one for spinners (which is possible right
> now but not very usable). For me it is a lot better to have a big
> pool with spinners and a separate pool with fast ssds... without the
> need to have at least 6 or more osd servers

I think it's already possible, editing the crushmap manually. 

see: 

http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
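
The manual procedure from that post boils down to decompiling, editing and
re-injecting the crush map; a rough sketch only (bucket, rule and pool names
are illustrative, and the node1-ssd/node2-ssd host buckets are assumed to be
defined as well):

  # dump and decompile the current crush map
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt

  # in crush.txt, add a second root holding the ssd hosts, e.g.:
  #
  #   root ssd {
  #       id -10                      # bucket ids are negative
  #       alg straw
  #       hash 0                      # rjenkins1
  #       item node1-ssd weight 1.000
  #       item node2-ssd weight 1.000
  #   }
  #
  #   rule ssd {
  #       ruleset 1
  #       type replicated
  #       min_size 1
  #       max_size 10
  #       step take ssd
  #       step chooseleaf firstn 0 type host
  #       step emit
  #   }

  # recompile, inject, and point a pool at the new rule
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new
  ceph osd pool set ssd-pool crush_ruleset 1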
 


- Original Message - 

From: VELARTIS Philipp Dürhammer p.duerham...@velartis.at 
To: Alexandre DERUMIER aderum...@odiso.com, Dietmar Maurer 
diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Monday, September 1, 2014 16:33:17 
Subject: AW: [pve-devel] SSD only Ceph cluster 

Yes :-) and the next release will fully support having different roots 
on hosts. 
For example one for ssd and one for spinners (which is possible right now but 
not very usable). For me it is a lot better to have a big pool with spinners and 
a separate pool with fast ssds... without the need to have at least 6 or more 
osd servers 

-----Original Message----- 
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of 
Alexandre DERUMIER 
Sent: Saturday, August 30, 2014 17:58 
To: Dietmar Maurer 
Cc: pve-devel@pve.proxmox.com 
Subject: Re: [pve-devel] SSD only Ceph cluster 

> So this is a perfect fit, considering the current ceph limitations?

Yes, sure! 


I also know that firefly has a limitation in the read memory cache, around 
25000 iops per node. 
Seems that ceph master git has resolved that too :) 

Can't wait for Giant release :) 



- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com 
Sent: Saturday, August 30, 2014 17:06:23 
Subject: RE: [pve-devel] SSD only Ceph cluster 

>> The Crucial MX100 provides 90k/85k IOPS. Those numbers are from
>> specs, so I am not sure if you can get that in reality?
>
> No, I think you can reach 90K maybe for some seconds when they are
> empty ;)
>
> check graph here:
> http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3
>
> It's more around 7000 iops

So this is a perfect fit, considering the current ceph limitations? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 +
Martin Maurer mar...@proxmox.com wrote:

> Hi,
>
> I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster
> in our test-lab. The current plan is to use 4 x 512 GB SSD per server for
> OSD. (12 * 512 GB drives, total net capacity of 2 TB, with a replication of
> 3.)
>
> Based on features (power loss protection) and price, the Crucial MX100 looks
> like a good candidate for this setup.
>
I have very good experience with Intel and Corsair SSDs, so you could
also consider these for your setup; they are in the same price range:
- Intel SSD 530 480 GB
(http://www.guru3d.com/articles-pages/intel-530-ssd-benchmark-review-test,1.html)
- Corsair Force LX SSD 512 GB
(http://hexus.net/tech/reviews/storage/71957-corsair-force-lx-ssd-512gb/)

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
This supersedes all previous notices.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Michael Rasmussen
On Sat, 30 Aug 2014 10:59:17 +
Martin Maurer mar...@proxmox.com wrote:

> I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster
> in our test-lab. The current plan is to use 4 x 512 GB SSD per server for
> OSD. (12 * 512 GB drives, total net capacity of 2 TB, with a replication of
> 3.)
>
When comparing specs for SSDs (or HDDs, for that matter) to be used in
remote storage, what do you consider most important? Max read/write
MB/s or read/write IOPS?

Personally I think that when we are looking at SSDs, max read/write
MB/s is irrelevant since the network will always be the bottleneck
(AFAIK no network is capable of providing throughput > 400 MB/s), so I
would compare read/write IOPS instead.
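
A rough back-of-envelope conversion makes the same point (assuming 4 KB
blocks; 85k write IOPS is the MX100 spec figure mentioned elsewhere in
this thread):

  # 85,000 IOPS x 4 KB = ~340 MB/s from a single SSD at spec,
  # already close to the ~400 MB/s network ceiling above --
  # so IOPS, not sequential MB/s, decides how many drives
  # a given link can actually feed.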

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Sin has many tools, but a lie is the handle which fits them all.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
> When comparing specs for SSDs (or HDDs, for that matter) to be used in
> remote storage, what do you consider most important? Max read/write MB/s
> or read/write IOPS?
>
> Personally I think that when we are looking at SSDs, max read/write MB/s is
> irrelevant since the network will always be the bottleneck (AFAIK no network
> is capable of providing throughput > 400 MB/s), so I would compare
> read/write IOPS instead.

The Crucial MX100 provides 90k/85k IOPS. Those numbers are 
from specs, so I am not sure if you can get that in reality?


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Alexandre DERUMIER
Hi,

currently with firefly, you can expect around 3000-5000 iops per osd,

so any good ssd should be ok.

A recent discussion on the ceph mailing list said that they have removed a lot 
of locks and bottlenecks in current master git. (with performance x5)
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html

Check this discussion; there are a lot of config samples to get the most 
performance.


I'll build a full ssd cluster next year; I haven't chosen an ssd model yet.
(maybe intel s3500 800GB or newer models with replication x2)
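
A quick way to sanity-check those per-osd numbers is rados bench against a
test pool; pool name and parameters below are only an example:

  # 4 KB writes, 16 concurrent ops, 60 seconds; keep the objects around
  rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
  # then random reads against the objects just written
  rados bench -p testpool 60 rand -t 16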



- Original Message - 

From: Martin Maurer mar...@proxmox.com 
To: pve-devel@pve.proxmox.com 
Sent: Saturday, August 30, 2014 12:59:17 
Subject: [pve-devel] SSD only Ceph cluster 

Hi, 

I am looking for the best suitable SSD drives for a Proxmox VE Ceph cluster in 
our test-lab. The current plan is to use 4 x 512 GB SSD per server for OSD. (12 
* 512 GB drives, total net capacity of 2 TB, with a replication of 3.) 
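The net figure is just the raw capacity divided by the replica count:

  12 drives x 512 GB = ~6 TB raw
  6 TB raw / 3 replicas = ~2 TB net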

Based on features (power loss protection) and price, the Crucial MX100 looks 
like a good candidate for this setup. 

Crucial MX100 
- http://www.thessdreview.com/our-reviews/crucial-mx100-ssd-review-256-512-gb/ 
- http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review 

I will connect the SSDs on LSI SAS 9207-8i controllers, using servers and 
network like described here: 
http://pve.proxmox.com/wiki/Ceph_Server#Recommended_hardware 

Any other recommendations for SSDs or hints to get the best out for this? 

Thanks, 

Martin 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] SSD only Ceph cluster

2014-08-30 Thread Dietmar Maurer
>> The Crucial MX100 provides 90k/85k IOPS. Those numbers are from specs,
>> so I am not sure if you can get that in reality?
>
> No, I think you can reach 90K maybe for some seconds when they are empty ;)
>
> check graph here:
> http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3
>
> It's more around 7000 iops

So this is a perfect fit, considering the current ceph limitations?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel