Re: [Dovecot] GlusterFS + Dovecot

2012-06-21 Thread Stan Hoeppner
On 6/20/2012 10:50 AM, Romer Ventura wrote:

 Has anyone used GlusterFS as the storage file system for Dovecot or any other
 email system?

I have not, but can tell you from experience and education that
distributed filesystems don't work well with transactional workloads
such as IMAP and SMTP.  The two reasons are high latency and problems
with file locking, as Timo mentioned.

Instead of asking if anyone here has tried to use GlusterFS, why not
describe your situation and ask for advice on a solution?  That usually
works much better, and you gain valuable insight.

-- 
Stan


Re: [Dovecot] GlusterFS + Dovecot

2012-06-21 Thread Robert Schetterer
On 20.06.2012 17:50, Romer Ventura wrote:
 Hello,
 
  
 
 Has anyone used GlusterFS as the storage file system for Dovecot or any other
 email system?
 
  
 
 It says it can be presented as NFS, CIFS, or GlusterFS using the native
 client. Technically, using the client would allow the machine to read and
 write to it, so I think Dovecot would not care about it. Correct?
 
  
 
 Has anyone out there used this setup?
 
  
 
 Thanks.
 
 

Reading the FAQs, I wouldn't recommend it yet, but as Timo said,
try performance tests first.

-- 
Best Regards
MfG Robert Schetterer




Re: [Dovecot] GlusterFS + Dovecot

2012-06-21 Thread Костырев Александр Алексеевич
We considered using Gluster for our mail storage a month ago.
I saw:
 - index corruption, even when mail was delivered by LMTP sequentially
 - some split-brains with no clear reason
 - with more than 2000 mails in a mailbox, a 40-second wait to open it
   through Roundcube
so we decided to go for dsync replication instead, with a common MySQL
database for user storage and an IMAP/POP3/LMTP proxy.
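For readers weighing the same trade-off, dsync-based replication in Dovecot 2.x
boils down to configuration along these lines. This is only a minimal sketch;
the replica hostname, port and doveadm password are placeholders, not the
poster's actual setup, and the replicator/aggregator service definitions are
omitted:

# Enable the notify and replication plugins (Dovecot 2.x)
mail_plugins = $mail_plugins notify replication

# Tell each node where its replica lives (placeholder hostname)
plugin {
  mail_replica = tcp:mail-replica.example.com
}

# doveadm service that dsync uses to talk between the two nodes
service doveadm {
  inet_listener {
    port = 12345
  }
}
doveadm_port = 12345
doveadm_password = changeme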




-Original Message-
From: dovecot-boun...@dovecot.org [mailto:dovecot-boun...@dovecot.org] On 
Behalf Of Romer Ventura
Sent: Thursday, June 21, 2012 2:51 AM
To: dovecot@dovecot.org
Subject: [Dovecot] GlusterFS + Dovecot

Hello,

 

Has anyone used GlusterFS as the storage file system for Dovecot or any other
email system?

 

It says it can be presented as NFS, CIFS, or GlusterFS using the native
client. Technically, using the client would allow the machine to read and
write to it, so I think Dovecot would not care about it. Correct?

 

Has anyone out there used this setup?

 

Thanks.



[Dovecot] GlusterFS + Dovecot

2012-06-20 Thread Romer Ventura
Hello,

 

Has anyone used GlusterFS as the storage file system for Dovecot or any other
email system?

 

It says it can be presented as NFS, CIFS, or GlusterFS using the native
client. Technically, using the client would allow the machine to read and
write to it, so I think Dovecot would not care about it. Correct?

 

Has anyone out there used this setup?

 

Thanks.
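For reference, the access methods mentioned above look roughly like this on a
client. The volume and host names are made up, the NFS variant assumes the
Gluster NFS server is enabled, and CIFS access would normally be a Samba
re-export of an already-mounted volume:

# GlusterFS native (FUSE) client
mount -t glusterfs gluster1:/mailvol /var/vmail

# The same volume over NFSv3
mount -t nfs -o vers=3 gluster1:/mailvol /var/vmail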



Re: [Dovecot] GlusterFS + Dovecot

2012-06-20 Thread Timo Sirainen
On 20.6.2012, at 18.50, Romer Ventura wrote:

 Has anyone used GlusterFS as the storage file system for Dovecot or any other
 email system?

I've heard of Dovecot complaining about index corruption once in a while with
GlusterFS, even when not in multi-master mode. I wouldn't use it without some
heavy stress testing first (with the imaptest tool).
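For anyone who wants to run that kind of stress test, an imaptest invocation
looks roughly like the following; the host, credentials and counts are
placeholders:

# Hammer one backend with 50 concurrent IMAP clients for 5 minutes,
# appending test messages from a sample mbox file
imaptest host=10.0.0.11 port=143 user=testuser pass=testpass \
    mbox=dovecot-crlf clients=50 secs=300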



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-03-18 Thread Jan-Frode Myklebust
On 2010-02-17, Ed W li...@wildgooses.com wrote:

 Anyone had success using some other clustered/HA filestore with dovecot 
 who can share their experience? (OCFS/GFS over DRBD, etc?)

We've been using IBM's GPFS filesystem on (currently) seven xSeries
servers running RHEL4 and RHEL5, all SAN-attached and all serving the same
filesystem, for probably 4 years now. This system serves POP/IMAP/webmail
to ~700,000 mail accounts. Webmail is sticky, while POP/IMAP is distributed
over all the servers by HAProxy.

It's been working very well. There have been some minor issues with Dovecot's
locking that forced us to be less parallel in our deliveries than we wanted,
but that's probably our own fault for being quite far behind on Dovecot
versions.
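For illustration, the HAProxy distribution described above can be as simple as
one TCP-mode listener per protocol; the addresses, names and balance policy
here are assumptions, not the actual production config:

listen imap
    bind :143
    mode tcp
    balance leastconn
    server mail1 10.0.0.11:143 check
    server mail2 10.0.0.12:143 check

listen pop3
    bind :110
    mode tcp
    balance leastconn
    server mail1 10.0.0.11:110 check
    server mail2 10.0.0.12:110 check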

The biggest pain is doing file backups of the maildirs... 


  -jf



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-22 Thread Ed W



I use GlusterFS with Dovecot and it works without issues. The GlusterFS team 
has made huge progress since 2.0 and with the new 3.0 version they have again 
proved that GlusterFS can get better.
   


You have kindly shared some details of your config before - care to
update us on what you are using now: how much storage, how many
deliveries/hour, IOPS, etc.? Lots of it was quite hard work for you
back with GlusterFS v2; what did you need to work around with v3?
(I can't believe it worked out of the box!) Any notes for users with
small-office-sized setups (i.e. around 2 servers)?


I presume you use Gentoo on your Gluster machines? Do you run Gluster
only on the storage machines, or do you virtualise and use the spare CPU
to run other services? (Given the price of electricity, it seems a shame
not to load servers up these days...)


Thanks

Ed W


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-19 Thread John Lyons
  Sure .. but you can break the index files in exactly the same way as
  with NFS. :)
  
 That is right :)

For us, all the front-end Exim servers pass their mail to a single final
delivery server. It was done so that we didn't have all the front-end
servers needing to mount the storage. It also means that if we need to
stop local delivery for any reason, we're only stopping one Exim server.

The NFS issue is resolved (I think/hope) by having the front-end load
balancer use persistent connections to the Dovecot servers.
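A minimal sketch of that kind of persistence, in HAProxy syntax (whether they
actually use HAProxy is not stated; names and addresses are made up). Hashing
on the client source IP keeps each client on the same Dovecot backend, so a
given mailbox's index files are not updated from several servers at once:

listen dovecot-imap
    bind :143
    mode tcp
    # same client IP -> same backend server
    balance source
    server dove1 10.0.0.21:143 check
    server dove2 10.0.0.22:143 check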

All I can say is we've used dovecot since it was a little nipper and
have never had any issues with indexes.

Regards

John
www.netserve.co.uk




Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Wed, 17 Feb 2010 21:25:46 -0600
 From: Eric Rostetter rostet...@mail.utexas.edu
 To: dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 Quoting Ed W li...@wildgooses.com:
 
  Anyone had success using some other clustered/HA filestore with  
  dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
 
 GFS2 over DRBD in an active-active setup works fine IMHO.  Not perfect,
 but it was cheap and works well...  Lets me reboot machines with
 no downtime, which was one of my main goals when implementing it...
 
  My interest is more in bootstrapping a more highly available system  
  from lower quality (commodity) components than very high end use
 
 GFS+DRBD should fit the bill...  You need several nics and cables,
 but they are dirt cheap...  Just 2 machines with the same disk setup,
 and a handful of nics and cables, and you are off and running...
 
Can you easily scale GFS2+DRBD to more than just 2 nodes? Is it possible to
aggregate throughput when using many nodes? Can all the nodes be active at the
same time, or is one node always the master and the other a hot spare that
kicks in when the master is down?


  Thanks
 
  Ed W
 
 -- 
 Eric Rostetter
 The Department of Physics
 The University of Texas at Austin
 
 Go Longhorns!



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Brandon Lamb
On Wed, Feb 17, 2010 at 11:55 AM, Steve stev...@gmx.net wrote:

  Original Message 
 Date: Wed, 17 Feb 2010 20:15:30 +0100
 From: alex handle alex.han...@gmail.com
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 
  Anyone had success using some other clustered/HA filestore with dovecot
 who
  can share their experience? (OCFS/GFS over DRBD, etc?)
 
  My interest is more in bootstrapping a more highly available system from
  lower quality (commodity) components than very high end use

 we use drbd with ext3 in a active/passive setup for more than 1
 mailboxes.
 works like a charm!

 I'm not really trusting cluster filesystems and most cluster
 filesystems are not made for small
 files.

 I use GlusterFS with Dovecot and it works without issues. The GlusterFS team 
 has made huge progress since 2.0 and with the new 3.0 version they have again 
 proved that GlusterFS can get better.


 Alex

 Steve

Hi Steve,

I was wondering if perhaps I might snag a copy of your glusterfs
server/client configs to see what you are doing? I am interested in
using it in our mail setup, but last I tried a little over a month ago
I got a bunch of corrupted mails, so far I am only using for a web
cluster and that seems to be working but different use case I guess.

Thanks!

Brandon


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Thu, 18 Feb 2010 08:36:36 -0800
 From: Brandon Lamb brandonl...@gmail.com
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 On Wed, Feb 17, 2010 at 11:55 AM, Steve stev...@gmx.net wrote:
 
   Original Message 
  Date: Wed, 17 Feb 2010 20:15:30 +0100
  From: alex handle alex.han...@gmail.com
  To: Dovecot Mailing List dovecot@dovecot.org
  Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
 
  
   Anyone had success using some other clustered/HA filestore with
 dovecot
  who
   can share their experience? (OCFS/GFS over DRBD, etc?)
  
   My interest is more in bootstrapping a more highly available system
 from
   lower quality (commodity) components than very high end use
 
  we use drbd with ext3 in a active/passive setup for more than 1
  mailboxes.
  works like a charm!
 
  I'm not really trusting cluster filesystems and most cluster
  filesystems are not made for small
  files.
 
  I use GlusterFS with Dovecot and it works without issues. The GlusterFS
 team has made huge progress since 2.0 and with the new 3.0 version they
 have again proved that GlusterFS can get better.
 
 
  Alex
 
  Steve
 
 Hi Steve,
 
 I was wondering if perhaps I might snag a copy of your glusterfs
 server/client configs to see what you are doing? I am interested in
 using it in our mail setup, but last I tried a little over a month ago
 I got a bunch of corrupted mails, so far I am only using for a web
 cluster and that seems to be working but different use case I guess.
 
Server part:

volume gfs-srv-ds
  type storage/posix
  option directory /mnt/glusterfs/mailstore01
end-volume

volume gfs-srv-ds-locks
  type features/locks
  option mandatory-locks off
  subvolumes gfs-srv-ds
end-volume

volume gfs-srv-ds-remote
  type protocol/client
  option transport-type tcp
  # option username
  # option password
  option remote-host 192.168.0.142
  option remote-port 6998
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-locks
end-volume

volume gfs-srv-ds-replicate
  type cluster/replicate
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
  # option read-subvolume gfs-srv-ds-locks
  # option favorite-child
  option data-change-log on
  option metadata-change-log on
  option entry-change-log on
  option data-lock-server-count 1
  option metadata-lock-server-count 1
  option entry-lock-server-count 1
  subvolumes gfs-srv-ds-locks gfs-srv-ds-remote
end-volume

volume gfs-srv-ds-io-threads
  type performance/io-threads
  option thread-count 16
  subvolumes gfs-srv-ds-replicate
end-volume

volume gfs-srv-ds-write-back
  type performance/write-behind
  option cache-size 64MB
  option flush-behind on
  # option disable-for-first-nbytes 1
  # option enable-O_SYNC false
  subvolumes gfs-srv-ds-io-threads
end-volume

volume gfs-srv-ds-io-cache
  type performance/io-cache
  option cache-size 32MB
  option priority *:0
  option cache-timeout 2
  subvolumes gfs-srv-ds-write-back
end-volume

volume gfs-srv-ds-server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 6998
  option auth.addr.gfs-srv-ds-locks.allow 192.168.0.*,127.0.0.1
  option auth.addr.gfs-srv-ds-io-threads.allow 192.168.0.*,127.0.0.1
  option auth.addr.gfs-srv-ds-io-cache.allow 192.168.0.*,127.0.0.1
  subvolumes gfs-srv-ds-io-cache
end-volume



Client part:

volume gfs-cli-ds-client
  type protocol/client
  option transport-type tcp
  # option remote-host gfs-vu-mailstore-c01.vunet.local
  option remote-host 127.0.0.1
  option remote-port 6998
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-io-cache
end-volume

#volume gfs-cli-ds-write-back
#  type performance/write-behind
#  option cache-size 64MB
#  option flush-behind on
#  # option disable-for-first-nbytes 1
#  # option enable-O_SYNC false
#  subvolumes gfs-cli-ds-client
#end-volume

#volume gfs-cli-ds-io-cache
#  type performance/io-cache
#  option cache-size 32MB
#  option priority *:0
#  option cache-timeout 1
#  subvolumes gfs-cli-ds-write-back
#end-volume



 Thanks!
 
 Brandon



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Eric Rostetter

Quoting Steve stev...@gmx.net:


 My interest is more in bootstrapping a more highly available system
 from lower quality (commodity) components than very high end use

GFS+DRBD should fit the bill...  You need several nics and cables,
but they are dirt cheap...  Just 2 machines with the same disk setup,
and a handful of nics and cables, and you are off and running...


Can you easily scale GFS2+DRBD to more than just 2 nodes? Is


Not really, no.  You can have those two nodes distribute it out via
gnbd though...  Red Hat claims it scales well, but I've not yet tested
it...


Can all the nodes be active at the same time, or is one node always the
master and the other a hot spare that kicks in when the master is down?


The free version of DRBD only supports a maximum of 2 nodes.  They can be
active-active or active-passive.

The non-free version is supposed to support 3 nodes, but I've heard
conflicting reports on what the 3rd node can do...  You'd have to investigate
that yourself...  I'm not interested in it, since I don't want to pay for
it...  (Though I am willing to donate to the project.)

My proposed solution to the more-than-two-nodes problem is GNBD...

If that doesn't meet your needs, then DRBD probably isn't the proper choice.
You didn't mention anything about the number of nodes in your original post,
IIRC.


 Thanks

 Ed W


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

Go Longhorns!


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Thu, 18 Feb 2010 13:51:33 -0600
 From: Eric Rostetter rostet...@mail.utexas.edu
 To: dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 Quoting Steve stev...@gmx.net:
 
   My interest is more in bootstrapping a more highly available system
   from lower quality (commodity) components than very high end use
 
  GFS+DRBD should fit the bill...  You need several nics and cables,
  but they are dirt cheap...  Just 2 machines with the same disk setup,
  and a handful of nics and cables, and you are off and running...
 
  Can you easy scale that GFS2+DRBD to have more then just 2 nodes? Is
 
 Not really, no.  You can have those two nodes distribute it out via
 gnbd though...  Red Hat claims it scales well, but I've not yet tested
 it...
 
I have already installed GFS on a cluster in the past, but never on DRBD.


  Can all the
  nodes at the same time be active or is one node always the master  
  and the other a hot spare that kicks in when the master is down?
 
 The free version of DRBD only supports max 2 nodes.  They can be
 active-active
 or active-passive.
 
 The non-free version is supposed to support 3 nodes, but I've heard  
 conflicting
 reports on what the 3rd node can do...  You'd have to investigate that
 yourself...  I'm not interested in it, since I don't want to pay for it...
 (Though I am willing to donate to the project)
 
Hmm... when I started with GlusterFS I thought that using more than two nodes
was something I would never need. But now that I have GlusterFS up and running
with more than two nodes, I really see the benefit of being able to do so. For
me this is a big advantage of GlusterFS compared to DRBD.


 My proposed solution to the more-than-two-nodes is gnbd...
 
I'd never heard of it before. I don't like the fact that I need to patch the
kernel in order to get it working.


 If that doesn't meet your needs, then DRBD probably isn't the proper
 choice.
 You didn't mention anything about number of nodes in your original post,
 IIRC.
 
I did not post the original post. I just responded to the original post saying 
that GlusterFS works for me.


   Thanks
  
   Ed W
 
 -- 
 Eric Rostetter
 The Department of Physics
 The University of Texas at Austin
 
 Go Longhorns!



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Eric Rostetter

Quoting Steve stev...@gmx.net:


I have already installed GFS on a cluster in the past, but never on DRBD.


Me too (I did it on a real physical SAN before).

Hmm... when I started with GlusterFS I thought that using more than
two nodes was something I would never need.


GlusterFS is really designed to allow such things...  So is GFS.  But
these are filesystems...

DRBD isn't really designed to scale this way.  A SAN or NAS is.

But now that I have GlusterFS up and running with more than two nodes,
I really see the benefit of being able to do so. For me this is a big
advantage of GlusterFS compared to DRBD.


You are comparing filesystems to storage/mirroring systems.  Not a
valid comparison...


My proposed solution to the more-than-two-nodes is gnbd...

Never heard of it before. Don't like the fact that I need to patch  
the Kernel in order to get it working.


GNBD is a standard part of GFS.  No more patching than GFS or DRBD in
any case...  Red Hat and clones all come with support for GFS and
GNBD built in.  DRBD is another issue...

GNBD should be known to anyone using GFS, since it is part of the standard
reading (manual, etc.) for GFS.


If that doesn't meet your needs, then DRBD probably isn't the proper
choice.
You didn't mention anything about number of nodes in your original post,
IIRC.

I did not post the original post. I just responded to the original  
post saying that GlusterFS works for me.


I didn't mean to single you out in my reply...  Assume the "you"
is a generic you, not specifically aimed at any one individual...

Sorry if I misattributed anything to you...  I'm very busy, and trying
to reply to these emails as fast as I can when I get a minute or two
of time, so I may make some mistakes as to who said what...

I'm not trying to convert or convince any one...  I'm just replying and
expressing my experiences and thoughts...  If glusterfs works for you,
then great.  If not, there are alternatives...  I happen to champion
some, others champion others...

Personally, I like SAN storage, but the price has always kept me from
using it (except once, when I was setting it up on someone else's SAN).

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

Go Longhorns!


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread John Lyons

Dare I ask... (as it's not exactly clear from the Gluster docs):

If I take 5 storage servers to house my /mail, can my cluster of 5 front-end
Dovecot servers all mount/read/write /mail?

The reason I ask is that the docs seem to suggest I should have 5 servers with
5 partitions, one for each mail server?

Any clues?

Regards

John





Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Thu, 18 Feb 2010 21:32:46 +
 From: John Lyons j...@support.nsnoc.com
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 
 Dare I ask...(as it's not exactly clear from the Gluster docs)
 
 If I take 5 storage servers to house my /mail, can my cluster of 5 front-end
 Dovecot servers all mount/read/write /mail?
 
Yes. That's the beauty of GlusterFS.


 The reason I ask is that the docs seem to suggest I should have 5 servers
 with 5 partitions, one for each mail server?
 
You can do that. But with GlusterFS and Dovecot you don't need to. You can
mount the same GlusterFS share read/write on all the mail servers. Dovecot will
usually add the hostname of the delivering system to the maildir file name.
As long as delivery is collision-free in terms of file names, you can scale up
to as many read/write nodes as you like.
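To illustrate that naming scheme: deliveries made in the same second on two
different hosts cannot collide, because the delivering host's name is part of
the maildir filename. The names below are made up, but follow Dovecot's usual
pattern (timestamp, microseconds, PID, hostname, size fields, flags):

1266531099.M734925P3711.mx1.example.com,S=4523,W=4611:2,
1266531099.M734925P3711.mx2.example.com,S=4523,W=4611:2,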


 Any clues?
 
 Regards
 
 John
 
Steve 



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Timo Sirainen
On 19.2.2010, at 0.37, Steve wrote:

 You can do that. But with GlusterFS and Dovecot you don't need to. You can 
 mount read/write the same GlusterFS share on all the mail servers. Dovecot 
 will usually add the hostname of the delivering system into the maildir file 
 name. As long as the delivery is collision free in terms of file names then 
 you can scale up as many read/write nodes you like.

This has the same problems as NFS (unless the servers are only delivering
mails, without ever updating index files). http://wiki.dovecot.org/NFS
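The wiki page linked above boils down to settings along these lines (Dovecot
2.x option names; this is a sketch, not a complete recipe), plus the strong
recommendation to keep each user on a single server at a time:

mmap_disable = yes        # don't mmap index files on shared storage
mail_fsync = always       # flush writes so other nodes see them
mail_nfs_storage = yes    # flush attribute caches for mail files
mail_nfs_index = yes      # ...and for index files (still not 100% reliable)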



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Fri, 19 Feb 2010 03:02:48 +0200
 From: Timo Sirainen t...@iki.fi
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 On 19.2.2010, at 0.37, Steve wrote:
 
  You can do that. But with GlusterFS and Dovecot you don't need to. You
 can mount read/write the same GlusterFS share on all the mail servers.
 Dovecot will usually add the hostname of the delivering system into the 
 maildir
 file name. As long as the delivery is collision free in terms of file names
 then you can scale up as many read/write nodes you like.
 
 This has the same problems as with NFS (assuming the servers aren't only
 delivering mails, without updating index files). http://wiki.dovecot.org/NFS
 
Except that NFS is not as flexible as GlusterFS. With GlusterFS I can
replicate, stripe, aggregate, etc. - all things that I can't do with NFS.


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Timo Sirainen
On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
  This has the same problems as with NFS (assuming the servers aren't only
  delivering mails, without updating index files). http://wiki.dovecot.org/NFS
  
 Except that NFS is not so flexible as GlusterFS. In GlusterFS I can 
 replicate, stripe, aggregate, etc... All things that I can't do with NFS.

Sure .. but you can break the index files in exactly the same way as
with NFS. :)





Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-18 Thread Steve

 Original Message 
 Date: Fri, 19 Feb 2010 04:37:04 +0200
 From: Timo Sirainen t...@iki.fi
 To: dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
   This has the same problems as with NFS (assuming the servers aren't
 only
   delivering mails, without updating index files).
 http://wiki.dovecot.org/NFS
   
  Except that NFS is not so flexible as GlusterFS. In GlusterFS I can
 replicate, stripe, aggregate, etc... All things that I can't do with NFS.
 
 Sure .. but you can break the index files in exactly the same way as
 with NFS. :)
 
That is right :)


[Dovecot] GlusterFs - Any new progress reports?

2010-02-17 Thread Ed W
GlusterFS always strikes me as being the solution (one day...).  It's
had a lot of growing pains, but a few people on the list have had
success with it already.


Given that some time has gone by since I last asked: has anyone got more
recent experience with it, and how has it worked out, with particular
emphasis on Dovecot maildir storage? How has version 3 worked out for you?


Anyone had success using some other clustered/HA filestore with dovecot 
who can share their experience? (OCFS/GFS over DRBD, etc?)


My interest is more in bootstrapping a more highly available system from 
lower quality (commodity) components than very high end use


Thanks

Ed W


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-17 Thread alex handle

 Anyone had success using some other clustered/HA filestore with dovecot who
 can share their experience? (OCFS/GFS over DRBD, etc?)

 My interest is more in bootstrapping a more highly available system from
 lower quality (commodity) components than very high end use

We use DRBD with ext3 in an active/passive setup for more than 1 mailboxes.
Works like a charm!

I don't really trust cluster filesystems, and most cluster filesystems are
not made for small files.


Alex


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-17 Thread Steve

 Original Message 
 Date: Wed, 17 Feb 2010 20:15:30 +0100
 From: alex handle alex.han...@gmail.com
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] GlusterFs - Any new progress reports?

 
  Anyone had success using some other clustered/HA filestore with dovecot
 who
  can share their experience? (OCFS/GFS over DRBD, etc?)
 
  My interest is more in bootstrapping a more highly available system from
  lower quality (commodity) components than very high end use
 
 we use drbd with ext3 in a active/passive setup for more than 1
 mailboxes.
 works like a charm!
 
 I'm not really trusting cluster filesystems and most cluster
 filesystems are not made for small
 files.
 
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team 
has made huge progress since 2.0 and with the new 3.0 version they have again 
proved that GlusterFS can get better.


 Alex

Steve



Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-17 Thread Eric Rostetter

Quoting Ed W li...@wildgooses.com:

Anyone had success using some other clustered/HA filestore with  
dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)


GFS2 over DRBD in an active-active setup works fine IMHO.  Not perfect,
but it was cheap and works well...  Lets me reboot machines with
no downtime, which was one of my main goals when implementing it...

My interest is more in bootstrapping a more highly available system  
from lower quality (commodity) components than very high end use


GFS+DRBD should fit the bill...  You need several nics and cables,
but they are dirt cheap...  Just 2 machines with the same disk setup,
and a handful of nics and cables, and you are off and running...
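A rough sketch of the dual-primary DRBD resource that sits under such a GFS2
setup (DRBD 8.x syntax; the node names, devices and addresses are placeholders,
not anyone's actual config):

resource r0 {
  protocol C;
  net {
    allow-two-primaries;                 # both nodes may be primary at once
    after-sb-0pri discard-zero-changes;  # basic split-brain recovery policy
    after-sb-1pri discard-secondary;
  }
  startup { become-primary-on both; }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

The device is then formatted with a cluster-aware filesystem, along the lines
of: mkfs.gfs2 -p lock_dlm -t mycluster:mail -j 2 /dev/drbd0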


Thanks

Ed W


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

Go Longhorns!


Re: [Dovecot] GlusterFS

2008-08-12 Thread Jeroen Koekkoek
I was afraid somebody was going to say that. Thanks for your reply, I'll
try that sometime later this week. I'll report back how it all went.

Kind regards,
Jeroen Koekkoek

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Aria Stewart
Sent: Monday, August 11, 2008 6:32 PM
To: Dovecot Mailing List
Subject: Re: [Dovecot] GlusterFS


On Aug 11, 2008, at 10:22 AM, Timo Sirainen wrote:

 On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:

 I receive the following error message.

 Aug  7 09:38:51 mta2 dovecot: POP3([EMAIL PROTECTED]):
 nfs_flush_fcntl:
 fcntl(/var/vmail/domain.tld/somebody/Maildir/dovecot.index, F_RDLCK)
 failed: Function not implemented

 Dovecot tries to flush kernel's data cache.

You might need

volume plocks
type features/posix-locks
subvolumes posix
end-volume

Or equivalent in your glusterfs configuration




 I think that I can disable mail_nfs_index to fix these messages. Has
 anybody had the same problem, if so, how did you solve it?

 You could disable mail_nfs_index, but then if the same mailbox is
 accessed concurrently from multiple servers, that will probably cause
 index corruption.


Aria Stewart
[EMAIL PROTECTED]





Re: [Dovecot] GlusterFS

2008-08-11 Thread Pawel Panek

From: Ed W [EMAIL PROTECTED]
Sent: Sunday, August 10, 2008 11:09 AM


I'm also interested to hear how it works out. It appears that the
straight-line speed is high for Gluster, but its per-file performance has
enough overhead that it's a significant problem for maildir-type
applications, which manipulate lots of small files. Possibly it works very
well if you go with mbox, though?


The FUSE kernel driver from 2.6.24 was unusable: the FUSE client saw changed
file modes, e.g. from 640 to 666. With the FUSE driver delivered with
GlusterFS, file modes were the same as on the exporting server. It performed
very well when clients were moving some large files, but when it came to mail
traffic, wait time and system load on the client nodes started increasing.
Eventually glusterfs stopped working due to a segfault in io-cache.so or
libglusterfs.so. The mail nodes were using glusterfs-1.3.7 and fuse-2.7.2glfs8,
and mail was delivered into maildirs.


Pawel. 



Re: [Dovecot] GlusterFS

2008-08-11 Thread Timo Sirainen

On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:


I receive the following error message.

Aug  7 09:38:51 mta2 dovecot: POP3([EMAIL PROTECTED]):
nfs_flush_fcntl:
fcntl(/var/vmail/domain.tld/somebody/Maildir/dovecot.index, F_RDLCK)
failed: Function not implemented


Dovecot tries to flush kernel's data cache.


I think that I can disable mail_nfs_index to fix these messages. Has
anybody had the same problem, if so, how did you solve it?


You could disable mail_nfs_index, but then if the same mailbox is
accessed concurrently from multiple servers, that will probably cause
index corruption.
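For anyone hitting the same fcntl error, the Dovecot-side workaround mentioned
here (disabling mail_nfs_index), plus one option not raised in the thread
(switching index locking away from fcntl), would look like this in
dovecot.conf; which, if either, is appropriate depends on whether a mailbox is
ever accessed from more than one server:

# Stop flushing attribute caches for index files
# (only safe if a mailbox is never accessed from several servers at once)
mail_nfs_index = no

# Alternatively, avoid fcntl() for index locking altogether
lock_method = dotlock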






Re: [Dovecot] GlusterFS

2008-08-11 Thread Aria Stewart


On Aug 11, 2008, at 10:22 AM, Timo Sirainen wrote:


On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:


I receive the following error message.

Aug  7 09:38:51 mta2 dovecot: POP3([EMAIL PROTECTED]):
nfs_flush_fcntl:
fcntl(/var/vmail/domain.tld/somebody/Maildir/dovecot.index, F_RDLCK)
failed: Function not implemented


Dovecot tries to flush kernel's data cache.


You might need

volume plocks
type features/posix-locks
subvolumes posix
end-volume

Or equivalent in your glusterfs configuration






I think that I can disable mail_nfs_index to fix these messages. Has
anybody had the same problem, if so, how did you solve it?


You could disable mail_nfs_index, but then if the same mailbox is
accessed concurrently from multiple servers, that will probably cause
index corruption.




Aria Stewart
[EMAIL PROTECTED]





Re: [Dovecot] GlusterFS

2008-08-10 Thread Ed W

Pawel Panek wrote:

We use a Dovecot setup with GlusterFS. Dovecot 1.1.2 and GlusterFS


OT: besides the fcntl problem, how is GlusterFS doing for you?
I have had some miserable experiences using GlusterFS and FUSE. What about
your experience?




I'm also interested to hear how it works out. It appears that the
straight-line speed is high for Gluster, but its per-file performance
has enough overhead that it's a significant problem for maildir-type
applications, which manipulate lots of small files. Possibly it works
very well if you go with mbox, though?


Ed W


Re: [Dovecot] GlusterFS

2008-08-07 Thread Pawel Panek

We use a Dovecot setup with GlusterFS. Dovecot 1.1.2 and GlusterFS


OT: besides the fcntl problem, how is GlusterFS doing for you?
I have had some miserable experiences using GlusterFS and FUSE. What about
your experience?


Pawel