An fio benchmark against the raw device gives you device performance while bypassing the filesystem.

So the problem may be in XFS or the Linux VFS layer.

I think you need to benchmark through the filesystem to compare performance.
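For example, a minimal fio job that exercises the filesystem (rather than the raw device) might look like the sketch below. The mountpoint, size, and block size are illustrative placeholders, not values from this thread; for a raw-device baseline you would use "filename=/dev/sdX" instead of "directory".

```ini
; fio job sketch: random 4k writes through the filesystem under test.
; All paths and sizes here are assumptions for illustration.
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[fs-randwrite]
directory=/mnt/xfs-test   ; point at the XFS mount being tested
rw=randwrite
bs=4k
iodepth=32
size=1g
```

Running the same job once with "directory" (through the filesystem) and once with "filename" pointing at the raw device should make it clear whether the slowdown is in the filesystem/VFS layer or below it.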


----- Original Message ----- 

From: "Stefan Priebe - Profihost AG" <[email protected]> 
To: "Alexandre DERUMIER" <[email protected]> 
Cc: [email protected], "Mark Nelson" <[email protected]> 
Sent: Tuesday, 29 May 2012 10:22:34 
Subject: Re: poor OSD performance using kernel 3.4 

On 29.05.2012 05:54, Alexandre DERUMIER wrote: 
>>> This happens with ext4 or btrfs too. 
> 
> maybe this is related to the io scheduler ? 
> have you compared the cfq, deadline, and noop schedulers ? 

This is something I'm considering for performance tuning later on, once 
everything is running smoothly. Right now I'm using CFQ with the tuned IBM 
settings (which Proxmox uses too). 
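For reference, the active scheduler can be inspected and switched per block device at runtime via sysfs; a quick comparison sketch (sda is a placeholder device name) might look like this:

```shell
# Show the available schedulers for a device; the bracketed
# entry is the one currently active, e.g. noop deadline [cfq]
cat /sys/block/sda/queue/scheduler

# Switch to deadline (or noop) at runtime for a quick A/B test.
# The change applies immediately but does not persist across reboots.
echo deadline > /sys/block/sda/queue/scheduler
```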


Here are some outputs of basic fio tests running on 3.4 and 3.0. 

3.4: http://pastebin.com/raw.php?i=6GEKsCYH 
3.0: http://pastebin.com/raw.php?i=FU4AtUck 

Strangely, 3.4 is faster, but that matches the fact that normal 
disk I/O works fine with 3.4. It's just Ceph that isn't performing well. 

> also, what is your SAS/SATA controller ? 
An Intel onboard SATA controller in this test setup. 

Stefan 



-- 
Alexandre Derumier 
Systems Engineer 
Phone: 03 20 68 88 90 
Fax: 03 20 68 90 81 
45 Bvd du Général Leclerc 59100 Roubaix - France 
12 rue Marivaux 75002 Paris - France 