RE: NFS Config vs Ceph / GlusterFS

2022-04-15 Thread Marc
> > That is why I am using mdbox files of 4MB. I hope that should give me hardly
> > any write amplification. I am also separating between ssd and hdd pools by
> > auto-archiving email to the hdd pools.
> > I am using rbd. After Luminous I had some issues with CephFS and do not
> > want to store operational stuff on it yet.
> 
> I am very interested in that setup, since I am currently planning to
> reshape my cluster in a similar way (currently from sole distribution
> via director to distribution + HA).

Currently I am running just one instance, hyper-converged on Ceph. I once had an 
LSI card fail in a Ceph node, and strangely enough it took the second LSI card in 
that node down with it. So all drives in that Ceph node were down and I could not 
even log in (the OS drive was down as well). Yet all VMs on that Ceph node kept 
running without any issues. Really, really nice.

> Could you post a short overview

I am trying to stick to the defaults as much as possible and only look for tuning 
when required. For now I am able to run with such 'defaults'.

> (scheme) and some important configurations of your setup? Did you do any
> performance testing?

Yes, I tried a bit of testing but did not have too much time to do it properly. 
Do look, though, at how CephFS performs (an example imaptest invocation is 
sketched after the tables below).

test        storage/description                          mailbox type  msg size  test type  msgs processed  ms/cmd avg
imaptest1   mail04 vdb                                   mbox          64kb      append     2197            800.7
imaptest3   mail04 lvm vdb                               mbox          64kb      append     2899            833.3
imaptest4   mail04 lvm vdb, added 2nd cpu                mbox          64kb      append     2921            826.9
imaptest5   mail04 cephfs (mtu 1500)                     mbox          64kb      append     4123            585.0
imaptest14  mail04 cephfs nocrc (mtu 8900)               mbox          64kb      append     4329            555.6
imaptest10  mail04 cephfs (mtu 8900)                     mdbox         64kb      append     2754            875.2
imaptest16  mail04 cephfs (mtu 8900) 2 mounts idx store  mdbox         64kb      append     2767            -
imaptest6   mail04 lvm vdb (4M)                          mdbox         64kb      append     1978            1,244.6
imaptest7   mail04 lvm vdb (16M)                         mdbox         64kb      append     2021            1,193.7
imaptest8   mail04 lvm vdb (4M)                          mdbox zlib    64kb      append     1145            2,240.4
imaptest9   mail04 lvm vdb (4M)                          mdbox zlib    1kb       append     345             7,545.8
imaptest11  mail04 lvm sda                               mbox          64kb      append     4117            586.8
imaptest12  mail04 vda                                   mbox          64kb      append     3365            716.2
imaptest13  mail04 lvm sda dual test                     mbox          64kb      append     2175 (4350)     580.8 (1,161.7)
imaptest15  mail03 lvm sdd                               mbox          64kb      append     20850           119.2
imaptest17  mail04 ceph hdd 3x                           rbox          64kb      append     2519            1003
imaptest18  mail04 ceph ssd 3x                           mbox          64kb      append     31474           -
imaptest19  mail04 ceph ssd 3x                           mdbox         64kb      append     15706           -

test        storage/description                          mailbox type  msg size  test type  msgs processed  ms/cmd avg
imaptest1   mail04 vdb                                   mbox          64kb      append     2197            800.7
imaptest3   mail04 lvm vdb                               mbox          64kb      append     2899            833.3
imaptest4   mail04 lvm vdb, added 2nd cpu                mbox          64kb      append     2921            826.9
imaptest5   mail04 cephfs (mtu 1500)                     mbox          64kb      append     4123            585.0
imaptest14  mail04 cephfs nocrc (mtu 8900)               mbox          64kb      append     4329            555.6
imaptest10  mail04 cephfs (mtu 8900)                     mdbox         64kb      append     2754            875.2
imaptest16  mail04 cephfs (mtu 8900) 2 mounts idx store  mdbox         64kb      append     2767            -
imaptest6   mail04 lvm vdb (4M)                          mdbox         64kb      append     1978            1,244.6
imaptest7   mail04 lvm vdb (16M)                         mdbox         64kb      append     2021            1,193.7
imaptest8   mail04 lvm vdb (4M)                          mdbox zlib    64kb      append     1145            2,240.4
imaptest9   mail04 lvm vdb (4M)                          mdbox zlib    1kb       append     345             7,545.8
imaptest11  mail04 lvm sda                               mbox          64kb      append     4117            586.8
imaptest12  mail04 vda                                   mbox          64kb      append     3365            716.2
imaptest13  mail04 lvm sda dual test                     mbox          64kb      append     2175 (4350)     580.8 (1,161.7)
imaptest15  mail03 lvm sdd                               mbox          64kb      append     20850           119.2
imaptest16  mail06                                       rbox          64kb      append     2755            917
imaptest17  mail06 (index on rbd ssd)                    rbox          64kb      append     4862            492
imaptest18  mail06 (index on rbd ssd)                    rbox          64kb      append     4803            -
imaptest19  mail06 (index on rbd ssd)                    rbox          64kb      append     9055            272
imaptest20  mail06 (index on rbd ssd)                    rbox          64kb      append     8731            276
imaptest20  mail06 (index on rbd ssd)                    rbox          62kb-txt  append     11315           212
imaptest21  mail06 (index on rbd ssd, compression)       rbox          64kb      append     8298            290
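
For reference, these append runs were done with imaptest; an invocation along 
these lines (host, credentials and the 64kb source mbox are placeholders, and 
not necessarily my exact options) produces this kind of numbers:

  # append-only benchmark against one test account, ~64kb source messages
  imaptest host=127.0.0.1 user=testuser pass=secret mbox=testmsg-64k.mbox \
      clients=10 secs=300 - append=100,0 logout=0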

NFS Config vs Ceph / GlusterFS

2022-04-06 Thread Malte Schmidt



> That is why I am using mdbox files of 4MB. I hope that should give me hardly
> any write amplification. I am also separating between ssd and hdd pools by
> auto-archiving email to the hdd pools.
> I am using rbd. After Luminous I had some issues with CephFS and do not
> want to store operational stuff on it yet.


I am very interested in that setup, since I am currently planning to 
reshape my cluster in a similar way (currently from sole distribution 
via director to distribution + HA). Could you post a short overview 
(scheme) and some important configurations of your setup? Did you do any 
performance testing? Also, when you say rbd in a clustered context, is 
that one block device per node while the director still spreads the 
accounts over the nodes?


Thanks in advance,

M. Schmidt





RE: NFS Config vs Ceph / GlusterFS

2022-04-06 Thread Marc


> 
> I have about 100TB of mailboxes in Maildir format on NFS (NetApp FAS)
> and it works very well, both for performance and stability.

Hmmm, I would like to read something else, e.g. that the design / elementary 
properties of distributed storage mean that all such systems perform roughly the 
same.
Maybe there should be more focus on Ceph performance development instead of on 
cephadm?

> The main problem of using Ceph or GlusterFS to store Maildir is the high
> use of metadata that Dovecot requires to check for new messages and other
> activity. On my storage/NFS the main part of the traffic and I/O is
> metadata traffic on small files (a high-file-count workload).

That is why I am using mdbox files of 4MB. I hope that should give me hardly 
any write amplification. I am also separating between ssd and hdd pools by 
auto-archiving email to the hdd pools.
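
In dovecot.conf that roughly looks like this (the paths and the age cutoff here 
are just examples, not my actual values):

  # mdbox with ~4MB files; ALT points at the hdd-backed storage
  mail_location = mdbox:~/mdbox:ALT=/srv/mail-hdd/%u/mdbox
  mdbox_rotate_size = 4M

  # older mail is then moved to the ALT (hdd) storage with something like:
  #   doveadm altmove -A savedbefore 180d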

> 
> And Ceph or GlusterFS are very inefficient with this kind of workload
> (many metadata GETATTR/ACCESS/LOOKUP and a high number of small files).

I am using rbd. After Luminous I had some issues with CephFS and do not 
want to store operational stuff on it yet.



Re: NFS Config vs Ceph / GlusterFS

2022-04-06 Thread Alessio Cecchi

Hi,

I have about 100TB of mailboxes in Maildir format on NFS (NetApp FAS), 
and it works very well, both for performance and stability.


The main problem of using Ceph or GlusterFS to store Maildir is the high 
use of metadata that Dovecot requires to check for new messages and other 
activity. On my storage/NFS the main part of the traffic and I/O is 
metadata traffic on small files (a high-file-count workload).
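
A quick way to see that op mix is the per-operation counters on the NFS client, 
for example on Linux:

  # client-side NFS statistics; getattr/lookup/access dominate here
  nfsstat -c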


And Ceph or GlusterFS are very inefficient with this kind of workload 
(many metadata GETATTR/ACCESS/LOOKUP and a high number of small files).


Ciao

On 05/04/22 01:40, dove...@ptld.com wrote:

> Do all of the configuration considerations pertaining to using NFS on
>
> https://doc.dovecot.org/configuration_manual/nfs/
>
> equally apply to using something like Ceph / GlusterFS?
>
> And if people wouldn't mind chiming in with which of NFS, Ceph, and GlusterFS
> they feel is better for maildir mail storage on dedicated non-container servers?
> Which is better for robustness / stability?
> Which is better for speed / performance?
>
> Thank you.


NFS Config vs Ceph / GlusterFS

2022-04-04 Thread dovecot
Do all of the configuration considerations pertaining to using NFS on

https://doc.dovecot.org/configuration_manual/nfs/

equally apply to using something like Ceph / GlusterFS?
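
By those considerations I mean settings along the lines of the following from 
that page (quoting from memory, so please check the page itself):

  # NFS-related settings discussed on the page above (from memory; verify there)
  mmap_disable = yes
  mail_fsync = always
  # the page also covers mail_nfs_index / mail_nfs_storage and recommends
  # keeping each user on a single backend (e.g. via director)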


And if people wouldn't mind chiming in with which of NFS, Ceph, and GlusterFS 
they feel is better for maildir mail storage on dedicated non-container servers?
Which is better for robustness / stability?
Which is better for speed / performance?


Thank you.