We are using the following combination of SSD/HDD for our DRBD cluster and so 
far it works pretty well.

primary: 8x Intel SSD 320, 300GB  (in RAID6)
secondary: 8x WD 7.2k NL-SAS, 1TB   (in RAID6)

So all READ requests will be handled by the SSDs, while WRITEs of course 
have to be done by both.

This cluster is used solely as a mail-server backend for ~30k mailboxes, so 
in general we see far more READ IOPs than WRITE IOPs. (This was also the 
deciding factor in even considering such a performance-asymmetric disk setup.)
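
If you want to verify that ratio on a running system, watching the backing 
array with iostat is a simple way to do it (the device name is just an 
example):

  # compare reads/sec (r/s) with writes/sec (w/s), sampled every 10s
  iostat -dx sda 10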

In our production environment this works out as expected, and running the 
cluster this way around (SSD as primary) delivers around 5-10x the 
performance of the reverse setup. We tested this by loading Maildir 
mailboxes with 20k+ messages in a webmail interface that fetches ALL the 
messages from the server and then sorts them in the frontend afterwards.
The loading time of such boxes while the SSD node is primary is faster by 
roughly an order of magnitude, i.e. 3-4 seconds vs. 20-30 (measured, of 
course, while the system was in production with thousands of users online).
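
For those wondering how we tested the reverse case: we simply swapped the 
DRBD roles for the test window, roughly like this (resource name and mount 
point are made up, and the mail services were stopped first):

  node-a# umount /srv/mail
  node-a# drbdadm secondary mailstore
  node-b# drbdadm primary mailstore
  node-b# mount /dev/drbd0 /srv/mail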

Still, unless the expected workload is massively dominated by READ IOPs, I 
also have to agree that SSD vs. 7.2k SATA/SAS may not be the best setup to 
use overall.

regards
Christoph 



Thursday, December 20, 2012, 1:43:39 PM, you wrote:

> Thanks, James, you seem to keep forgetting the list CC though ;)

> -------- Original Message --------
> Subject: Re: [DRBD-user] Secondary Performance
> Date: Thu, 20 Dec 2012 11:40:27 +0000
> From: Prater, James K. <[email protected]>
> To: '[email protected]' <[email protected]>

> What Felix suggested (switching roles) should work.  Personally,  I
> would not use SSDs for that type of deploy.  It is best placed to speed
> up certain processes (swap, scratch pad database location used for
> indexes or anything ...


> [email protected]
> http://lists.linbit.com/mailman/listinfo/drbd-user

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
