Re: Cyrus failover steps

2007-05-13 Thread Tom Samplonius

- Matthew Seaman [EMAIL PROTECTED] wrote:
 On FreeBSD, CARP would be the natural choice, seeing as it's standard
 with the system.  All you need is to build a kernel with 'device carp'
 added to the config file and to have read the carp(4) manual page.
 Then just add something like:
 
   cloned_interfaces="carp0"
   ifconfig_carp0="vhid 100 pass CENSORED 192.168.23.45/24"
 
 to /etc/rc.conf on both machines.  Reboot, et voila: the 192.168.23.45
 IP will float between the two servers.

  But magical floating could be a problem for Cyrus.  Think of a case where the 
master fails, the IP floats to the replica, some mail is delivered, the master 
comes back up, and the IP moves back.  Oops... the mail delivered to the 
replica is now invisible and effectively lost.

  CARP kind of loses out for application failover, as there is no way to fire a 
script before the IP moves away, or before it moves back.  It will probably 
grow some scriptability at some point.  But for the time being, CARP is better 
suited to firewalls and routers than to services.  There can also be split-brain 
issues, where both nodes think they are the master, or cases where a node 
holds onto an IP even though it is actually dead.

  But Heartbeat can do this.
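With Heartbeat, the floating IP and the Cyrus service are declared together as resources, so scripts run on every takeover and release.  A minimal sketch (the node name, IP, and "cyrus" init script name are hypothetical):

```
# /etc/ha.d/haresources -- identical copies on both nodes.  On
# takeover, Heartbeat brings up the IP and then starts Cyrus; on
# release it stops them in reverse order, so mail never lands on a
# node that no longer owns the address.
node1 IPaddr::192.168.23.45/24 cyrus
```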

Tom

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Hardware opinion sought - Sun X4500

2007-05-02 Thread Tom Samplonius

- Qin Li [EMAIL PROTECTED] wrote:
 We are thinking of two of Sun's Thumper X4500 servers for a forklift
 upgrade to our IMAP servers.  The X4500 is relatively new, but a few
 reports on the Internet gave it a thumbs up, particularly on the use of
 ZFS and the IOPS of RAIDZ.
 
 Mail is one of the applications cited by Sun for such a box.  We have
 about 30k users with various sizes of mailboxes.  Any experience or
 opinion regarding the use of such hardware for an IMAP server would be
 helpful to us.


  I have not used the X4500 myself for mail, but I would just be careful 
about IO performance.  You'll have to get more storage than you need, because 
you will need the spindles.  Don't expect to be able to fill the disks with 
mail, because you'll hit the physical limit of IOPS per disk first.

  So order the X4500 with the smallest available disks, and get as many of them 
as your budget allows.  ZFS does a pretty good job of spreading IO to a 
logical disk over all available physical disks, so I don't think you need to 
worry about creating lots of partitions for different aspects of Cyrus (e.g. 
metadata, spool, etc).
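That spreading falls out of how the pool is built: one pool of several raidz vdevs stripes every logical write across all of them.  A sketch (device names and vdev widths are made up for illustration, not the X4500's actual controller layout):

```
# One pool, two raidz vdevs; ZFS stripes across both, so every
# filesystem created in the pool sees all ten spindles.
zpool create mailpool \
    raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
zfs create mailpool/spool
```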


Tom





Re: status=deferred (connect to 172.16.11.20[172.16.11.20]: Connection refused)

2007-02-05 Thread Tom Samplonius

  It seems that lmtpd is not listening on a TCP/IP socket.  You should 
check the cyrus.conf config file, and check the listen parameter on the lmtp 
service.  You might have to define a new lmtp service for TCP/IP access.
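The relevant piece of cyrus.conf looks something like this (service names and the socket path are examples; many installations ship only the Unix-socket entry, which a remote MTA cannot reach):

```
SERVICES {
  # Unix-domain socket, for an MTA running on the same host
  lmtpunix  cmd="lmtpd" listen="/var/imap/socket/lmtp" prefork=0
  # TCP socket, so a remote MTA can connect to lmtpd on port 24
  lmtp      cmd="lmtpd" listen="lmtp" prefork=0
}
```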


Tom



Re: How do I increase default DBD cache size

2007-01-31 Thread Tom Samplonius
- Ramprasad [EMAIL PROTECTED] wrote:
  No need to patch the code for that - it's an option!  Just put:
  
  berkeley_cachesize: 4096
  
  in your /etc/imapd.conf
  
  We use skiplist for our mailboxes.db and we haven't seen any issues that
  were caused by leaving that at its default value.
  
 
 I need to compile cyrus with skiplist right ? 
 What are the pros and cons of skiplist ? 


   There is some material on the Cyrus website about the recommended database 
types for the various databases that Cyrus uses.  Also, read the 
install-perf document in the Cyrus distribution.

  But I really doubt that berkeley_cachesize affects skiplist in any way.



  That said:
  
  % cat $confdir/db/DB_CONFIG
  set_cachesize 0 2097152 1
  set_lg_regionmax 2097152
  set_lg_bsize 2097152
  set_lg_max 16777216
  set_tx_max 200
  set_tas_spins 1
  % 
  
 
 
 I don't have this file in my db directory ( /var/lib/imap/db ). 
 Do I have to create this file ?

  I think it is specific to BerkeleyDB, and created by BerkeleyDB when the 
environment is initialized.

  You generally can't configure BerkeleyDB.  The application that creates the 
database sets it up.

..
  NOTE: we're using reiserfs for our drives, and that machine has 42
  partitions split over 8 different raidsets via two separate SCSI
  controllers, and a split meta configuration that puts the meta data
  on RAID1 for speed.  Still, it's possible to get an order of magnitude
  more connections out of that sort of spec.  That's 8Gb of memory in
  that machine, and dual hyperthreading Xeons:
  
 
 Why do you have so many partitions ? And do you run the MTA on the
 same
 machine ? 

  I think at FastMail, they are concerned about filesystem corruption and long 
fsck times on big filesystems.

  I dislike ReiserFS.  If you run it in parallel with any other logging file 
system, you'll have plenty of statistical data to show you are fsck'ing 
ReiserFS more than the other filesystems.


 Thanks
 Ram


Tom



Re: Running a shell command on update/creation of an item

2007-01-30 Thread Tom Samplonius
- Jochem Meyers [EMAIL PROTECTED] wrote:
 Hi all,
 
 Currently, I'm trying to build an application involving synchronizing some
 data stored in a Cyrus IMAP store (as messages with attachments) to an SQL
 database.  To accomplish this, I would like to have Cyrus run a shell script
 when an item in its store is updated or created.  The items don't arrive
 through the MTA, but rather through a direct connection with the IMAP
 server, so using something like procmail is out of the question.  Updates
 also need to be caught and processed.
 
 If this isn't a possibility, I'll have to write an application that checks
 the mailbox status periodically and synchronizes if it finds updates, but
 for an improved user experience, I'd rather use the above method.
 
 Thanks in advance for any advice,

  It might be better to use a persistent IMAP connection.  With IDLE support 
you can be informed of new deliveries as they occur; see idled.  You have to 
look at the code, though, and you probably wouldn't see status changes, only 
new arrivals.
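A rough sketch of the IDLE approach in Python.  imaplib has no built-in IDLE support, so the command is sent by hand; the host, credentials, and mailbox are hypothetical, and the `_new_tag()` call reaches into imaplib internals:

```python
import imaplib

def is_new_mail(line):
    """True for an untagged '* N EXISTS' response, which the server
    pushes during IDLE when a message is added to the mailbox."""
    line = line.strip()
    return line.startswith(b"*") and line.endswith(b"EXISTS")

def wait_for_delivery(host, user, password, mailbox="INBOX"):
    """Block until the selected mailbox grows, then return the line."""
    conn = imaplib.IMAP4(host)
    conn.login(user, password)
    conn.select(mailbox)
    tag = conn._new_tag()              # imaplib internal tag generator
    conn.send(tag + b" IDLE\r\n")      # server answers "+ idling"
    try:
        while True:
            line = conn.readline()     # untagged updates arrive here
            if is_new_mail(line):
                return line
    finally:
        conn.send(b"DONE\r\n")         # leave IDLE before other commands
        conn.logout()
```

Other untagged responses (EXPUNGE, and FETCH for flag changes) may also arrive during IDLE and would need handling if updates, not just arrivals, matter.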

  Sounds like you are working on a ticketing system?

Tom



Re: Use of NFS via TCP

2007-01-29 Thread Tom Samplonius

- Forrest Aldrich [EMAIL PROTECTED] wrote:
 I know it's recommended not to use Cyrus over NFS.  But what about
 NFSv3 
 or 4 via TCP, instead of UDP?
 
 On a small network, I would imagine this would work alright.  I wonder
 
 if the documentation (caution) is referring to UDP.

  The issue really isn't the protocol used, but the fact that NFS, to be fast, 
compromises local filesystem semantics a bit.

  The BerkeleyDB documents have a good overview of this problem:

http://www.oracle.com/technology/documentation/berkeley-db/db/ref/env/remote.html



Tom



Re: Clustering and replication

2007-01-29 Thread Tom Samplonius

- Bron Gondwana [EMAIL PROTECTED] wrote:
 On Fri, Jan 26, 2007 at 12:20:15PM -0800, Tom Samplonius wrote:
  - Wesley Craig [EMAIL PROTECTED] wrote:
   Close.  imapd, pop3d, lmtpd, and other processes write to the log.
   
   The log is read by sync_client.  This merely tells sync_client what  
   (probably) has changed.  sync_client rolls up certain log items, e.g., 
   it may decide to compare a whole user's state rather than just  
   looking at multiple mailboxes.  Once it decides what to compare, it  
   retrieves IMAP-like state information from sync_server (running on
  
   the replica) and pushes those changes that are necessary.
  
And this exposes the big weakness with Cyrus syncing:  there is
 only a single sync_client, and it is very easy for it to get behind.
 
 Which is why we have the following:
 
 * a logwatcher on imapd.log which emails us on bailing out or other
   unrecognised log lines

  sync_client prints errors from time to time, but most seem harmless.  It 
certainly does not print anything like "Exiting..." when it decides to quit.  
I don't really know which log lines are bad or not.  What do you consider a 
recognized log line?

  In my case, sync_client quits three to four times a day.

 * the system monitoring scripts do a 'du -s' on the sync directory every
   2 minutes and store the value in a database so our status commands can
   see if any store is behind (the trigger for noticing is 10kb, that's a
   couple of minutes worth of log during the U.S. day).  This also emails
   us if it gets above 100kb (approx 20 mins behind)

  And what do you do if it gets behind?  I have three Cyrus groups right now 
that are never going to catch up.  They log about 20KB in 20 minutes, so the 
update rate is not that high.  The machines are dedicated, and the replicas 
aren't doing anything.  tcpdump confirms that there is traffic to the replica, 
but sync_client is so opaque that it is hard to see what it is doing.  So 
sync_client can't keep up at all, and since it also quits from time to time, it 
gets even worse.

  I'm planning to hack on sync_client and add some logging, particularly to 
find the number of records per second it is able to process.  And then maybe 
some way to find out why it quits all the time.

  Either that, or my only alternative is to switch to using DRBD to sync the 
filesystem to a standby server.

 * a monitorsync process that runs from cron every 10 minutes and reads
   the contents of the sync directory, comparing any log-(\d+) file's PID
   with running processes to ensure it's actually being run and tries to
   sync_client -r -f the file if there isn't one.  It also checks that
   there is a running sync_client -r process (no -f) for the store.

  Wow, that is a lot of machinery to protect against sync_client just exiting.  
sync_client isn't very big, so it shouldn't be that hard to find the different 
places where it exits, and fix them?

 * a weekly checkreplication script which logs in as each user to both
   the master and replica via IMAP and does a bunch of lists, examines,
   even fetches and compares the output to ensure they are identical.
 
 Between all that, I'm pretty comfortable that replication is correct and
 we'll be told if it isn't.  It's certainly helped us find our share
 of issues with the replication system!

  Well, I know our replicas are out of sync, so we just don't use them.  I just 
hope the masters don't fail.  Each pair has about 30,000 accounts, and about 
300GB of online mail.  

  And it seems like the multiple exit points in sync_client mean that there are 
significant bugs in sync_client still.  And since re-starting sync_client on 
the same sync log appears to work, it means that there is something rather 
wrong there.

Tom



Re: Clustering and replication

2007-01-29 Thread Tom Samplonius

- Simon Matter [EMAIL PROTECTED] wrote:

 Believe it or not, it works, and it has been confirmed by several people on
 the list using different shared filesystems (VeritasFS and Tru64 come to
 mind).  In one thing you are right: it doesn't work with BerkeleyDB.
 Just switch all your BDBs to skiplist and it works.  This has really been
 discussed on the list again and again, and isn't it nice to know that
 clustering Cyrus that way works?


  Yes, useful.  But the original poster wanted to combine Cyrus application 
replication with a cluster filesystem (GFS specifically).  It seems pretty 
unusual to combine both.  GFS has a lot of locking overhead on writes, and 
e-mail storage is pretty write intensive.  And Cyrus replication can have its 
own performance issues (slow replication that never catches up).  Why do both 
at the same time?

  And GFS 6.1 (current version) has some issues with large directories:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=214239

I guess this might not be an issue if the GFS cluster only has two nodes (fewer 
nodes to lock against when creating files).  Cyrus is usually IO bound anyway, 
so you probably wouldn't get any additional performance from having more than 
two nodes.


Tom



Re: Clustering and replication

2007-01-26 Thread Tom Samplonius

- Janne Peltonen [EMAIL PROTECTED] wrote:
 Hi!
 
 As a part of our clustering Cyrus system, we are considering using
 replication to prevent a catastrophe in case the volume used by the
 cluster gets corrupted. (We'll have n nodes each accessing the same GFS,
 and yes, it can be done, see previous threads on the subject.)

  I really doubt this.  Even if GFS works the way it says it does, Cyrus does 
not expect to see other instances modifying the same message, and does not lock 
against itself.

  And if you don't have multiple Cyrus instances accessing the same 
message store, why use GFS?  GFS adds a lot of inter-node locking overhead.

 Now the workings of the replication code aren't completely clear to me.
 It can do things like collapse multiple mailbox changes into one and so
 on. But is it in some way dependent on there being just one cyrus-master
 controlled group of imapd processes to ensure that all changes to the
 spool (and meta) get replicated? Or does the replication code infer the
 synchronization commands from changes it sees on the spool, independent
 of the ongoing imap connections? That is, do I have to have n replica
 nodes, one for each cluster node? Or don't I?

  The Cyrus master builds a replication log as changes are made by imapd, 
pop3d, and lmtpd.  The log contents are pushed to the replica.  The master and 
replica both have copies of all data, within independent message stores.

  So one replica per master.
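In Cyrus 2.3, that pairing is configured on the master's imapd.conf, roughly like this (the host name and credentials are hypothetical):

```
# Master's imapd.conf: enable the replication log and point
# sync_client at the replica, where sync_server is running.
sync_log: 1
sync_host: replica.example.com
sync_authname: repluser
sync_password: secret
```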

 Thanks for any answers


Tom



Re: Clustering and replication

2007-01-26 Thread Tom Samplonius

- Wesley Craig [EMAIL PROTECTED] wrote:
 On Jan 26, 2007, at 3:07 AM, Tom Samplonius wrote:
  - Janne Peltonen [EMAIL PROTECTED] wrote:
  As a part of our clustering Cyrus system, we are considering using
  replication to prevent a catastrophe in case the volume used by the
  cluster gets corrupted. (We'll have n nodes each accessing the  
  same GFS,
  and yes, it can be done, see previous threads on the subject.)
 
I really doubt this.  Even if GFS works the way it says it does, 
 
  Cyrus does not expect to see other instances modifying the same  
  message, and does not lock against itself.
 
 Yes it does.  How else do you suppose two users reading the same  
 shared mailbox might work?  They aren't all running through one
 imapd.

  Within a single server this works, but Cyrus is not aware of other imapd's 
that are running on other servers.

  It isn't just imapd, but the sharing of the database files too.  Can you 
have writers to a BerkeleyDB database on different servers?  I don't think that 
works on NFS, let alone GFS.  Getting mmap() to work over GFS is a hard 
problem, and it would have to be very slow if it maintained the same semantics 
as a file on a local disk.

  Now the workings of the replication code aren't completely clear  
  to me.
  It can do things like collapse multiple mailbox changes into one  
  and so
  on. But is it in some way dependent on there being just one cyrus-
 
  master
  controlled group of imapd processes to ensure that all changes to the
  spool (and meta) get replicated? Or does the replication code  
  infer the
  synchronization commands from changes it sees on the spool,  
  independent
  of the ongoing imap connections? That is, do I have to have n replica
  nodes, one for each cluster node? Or don't I?
 
The Cyrus master builds a replication log as changes are made by 
  imapd, pop3d, and lmtpd.  The log contents are pushed to the  
  replica.  The master and replica both have copies of all data,  
  within independent message stores.
 
 Close.  imapd, pop3d, lmtpd, and other processes write to the log.   
 The log is read by sync_client.  This merely tells sync_client what  
 (probably) has changed.  sync_client rolls up certain log items, e.g., 
 it may decide to compare a whole user's state rather than just  
 looking at multiple mailboxes.  Once it decides what to compare, it  
 retrieves IMAP-like state information from sync_server (running on  
 the replica) and pushes those changes that are necessary.

  And this exposes the big weakness with Cyrus syncing:  there is only a single 
sync_client, and it is very easy for it to get behind.

 For your situation, Janne, you might want to explore sharing the sync 
 directory.  sync_client and sync_server have interlock code, tho I  
 haven't reviewed it for this specific scenario.

  Since the sync directory is specific to the master server, why would you 
share it?

  Unless you want to have multiple Cyrus servers all pretend to be the master, 
and log all of their changes to the same sync log.  You would probably hit the 
sync_client bottleneck pretty fast this way.

  Plus, there would be a lot of contention on the sync logs if multiple servers 
are appending records to the same file.  GFS is not fast.

 :wes

Tom



Re: High availability email server...

2006-07-29 Thread Tom Samplonius

Pascal Gienger wrote:

David Korpiewski [EMAIL PROTECTED] wrote:

I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try
to create a redundant cyrus mail cluster.  First of all, don't try it, it
is a waste of time.  Apple states that mail on an XSAN is not supported.
The reason is that it simply won't run.  The Xsan can't handle the large
amount of small files and will do things like disconnect or corrupt
the file system.


STOP!
The capability to handle small files efficiently is related to the 
filesystem carrying the files and NOT to the physical and logical 
storage media (block device) under it.

 Apple is the one confusing people.  Xsan is the name of the Apple 
cluster file system.  So you configure a couple of hosts on your SAN, 
with a shared volume, and then run Xsan.  Xsan is more similar to Linux 
GFS (Global File System).


 So I believe the original poster is right:  Xsan is crap for lots of 
small files.  That is not surprising.  It is really hard to come up with 
a shared file system that doesn't suck.  The nodes have to lock for 
meta-data updates, so shared file systems can be pretty slow too.


Tom



Re: Cyrus Imap refuses to work with vpopmail maildir

2003-06-07 Thread Tom Samplonius

On Sat, 7 Jun 2003, System wrote:

...
  Here I want to mark a point that these mail accounts are created with
  vpopmail and are in Maildir format.  How do I enable the working of my
  mail accounts with Cyrus ?

  You don't.  Cyrus only works with Cyrus mailboxes.  That is the whole
point of Cyrus.  If you want to use MailDir, you should not use Cyrus.


Tom



Re: Cyrus process model...

2003-02-26 Thread Tom Samplonius

  It is always a big pain to make code thread-safe when it was never written 
to be threaded.  Apache2 has problems with just about every
third-party module supported under Apache 1.3.  I imagine that Cyrus would
have all sorts of thread issues.  There is no magic solution for that.

  Besides, if anyone really wants to take Cyrus to the next generation,
create a new NG branch in CVS (on your own CVS server if necessary), and
start refactoring away.  (Of course, refactoring has to be the most
overused term in software development at the moment, and is touted as a
solution for everything from bad design, to poor management).

Tom

On Tue, 25 Feb 2003, David Lang wrote:

 as someone attempting to get apache 2 running (reliably) in a high volume
 environment I can say the idea is interesting, but I definitely wouldn't
 rush into using it. if you have some time and want to get a start on
 something that may (or may not) be worth doing in the long run you can
 start on it, but don't stop maintaining the current version, the apache
 core code may not be the right thing in the long run.
 
 David Lang
 
 
  On Wed, 26 Feb 2003, Rob Mueller wrote:
 
  Date: Wed, 26 Feb 2003 16:45:00 +1100
  From: Rob Mueller [EMAIL PROTECTED]
  To: Lawrence Greenfield [EMAIL PROTECTED],
   Rob Siemborski [EMAIL PROTECTED]
  Cc: Ken Murchison [EMAIL PROTECTED],
   info-cyrus [EMAIL PROTECTED]
  Subject: Cyrus process model...
 
  [ Continued from an off mailing list conversation about killing cyrus lmtpd
  processes when they go haywire, and cyrus process accounting ]
 
Surely this is a relatively well solved problem? Just about every unix
system uses this master/forked child approach? How does apache do it?
Net::Server::PreFork? I can't imagine that there aren't cookbook
  solutions
to this issue since it's what unix has been doing for 30 years? Or is
  there
something I'm missing here?
 
   There are many different possibilities. Most other systems limit the
   number of clients instead of forking new processes on demand without a
   set limit. Apache also doesn't have differentiated children or
   substantial shared state. (All children are members of the same
   service or you don't particularly care how many additional unused children
   you have...)
 
  I was under the impression that Apache 2 was planning on making its
  forking/threading model much more generic, and supporting a general
  'services' model, including a library to abstract the underlying OS? Hmmm,
  looking into that, it appears that it's mostly done already.
 
  http://apr.apache.org/
  http://apr.apache.org/apr2_0intro/apr2_0intro.htm
 
  And more:
 
  Contains following functionality
  -Reading and writing of files
  -Character set conversion
  -Network communications using sockets
  -Time management used for Internet type conversions
  -String management like C++ including natural order management
  -UNIX Password management routines
  -Table management routines
  -UUID Internet generation
  -Filename canonicalization
  -Random data generation
  -Global lock management
  -Threads and process management
  -Dynamic library loading routines
  -Memory mapped and shared memory
 
  -
 
  http://www.arctic.org/~dean/apache/2.0/process-model.html
 
  I think the above is general enough to implement the interesting process
  models, and to implement optimizations that are available only in some of
  the multi-threaded models. Note that nothing above is specific to HTTP, and
  I believe that we should strive to keep the abstraction so that the same
  libraries can be used to implement other types of servers (i.e. FTP,
  streaming video/audio, corba).
 
  -
 
  Would the cyrus team think it worthwhile to consider refactoring to use the
  new Apache 2 APR modules? I know off hand that it would be a lot of work,
  but it could be a gradual re-factoring process, and the idea of actually
  reusing code between projects would be *really* nice.
 
  Joel Spolsky is a big proponent of refactoring over time to improve software
  and you can read some of his thoughts here.
 
  http://www.joelonsoftware.com/articles/fog69.html
  http://www.joelonsoftware.com/news/fog000328.html
 
  Ooops, I'm feeling a rant come along...
 
  *** RANT MODE ***
 
  I know this is a little off topic, but the source for cyrus is really
  showing its age a bit. I know that happens with all software, you start
  with certain assumptions, and the more you go on, the more the original
  assumptions get blown away, so you hack this in here, and there, and then
  every now and then, you go on a big cleanup spree! The problem I feel is
  that the cleanup hasn't been big enough or often enough.
 
  Also, over time programming habits change. Many old C idioms are pretty much
  dead. Most of the C string handling methods are now annoying, or downright
  dangerous. There are several dozen replacement libraries, including the APR
  one above, and good 

Re: {Resend} Failed opening 'D.B.php' (fwd)

2003-01-05 Thread Tom Samplonius

On Sat, 4 Jan 2003, Yann Debonne wrote:

 Hi,
 
 Upon logging into web-cyradm I get the following error:
 
 --
 
 Warning: Failed opening 'DB.php' for inclusion
 (include_path='.:/usr/local/lib/php/PEAR') in
 /home/www/web-cyradm/auth.inc.php on line 12
 
 Fatal error: Undefined class name 'db' in /home/www/web-cyradm/auth.inc.php
 on line 16
...

  Is DB.php readable by your web server, and in a directory accessible by
your web server?

Tom




Re: Digest and Subject marker

2002-05-02 Thread Tom Samplonius


On Wed, 1 May 2002, Mathias Koerber wrote:

...
 This will allow easy identification *and* filtering on the subject.
 When I asked the list-owner on this, it was suggested that I'd run this
...

  There are a lot of headers that identify the source already, without
adding junk to the Subject line too.  I sort all the info-cyrus mail into a
separate Cyrus folder with an Exim filter.

Tom




Re: cyrus imapd 2.1.3

2002-03-30 Thread Tom Samplonius


On Sat, 30 Mar 2002, Andreas Meyer wrote:

 * OK delta Cyrus IMAP4 v2.1.3 server ready
 logout
 * BAD Invalid tag
...
 The IMAP-book I have talks about version 1.6 and before and
 I read I could use 'logout' to end the telnet-session, but
 that doesn't exit the session.

  It is ". logout", not "logout".
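Every command in a raw IMAP session needs a client-chosen tag prefix (here "."), and untagged lines from the server start with "*".  The end of the session would look something like:

```
. logout
* BYE LOGOUT received
. OK Completed
```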

Tom




Re: single instance store

2001-03-31 Thread Tom Samplonius


On Sat, 31 Mar 2001, Cillian Sharkey wrote:

 Should single instance store work in the following scenario:
 
 A message is addressed to two accounts. It is delivered to one successfully
 but the other a/c is over quota so delivery fails and the message remains in
 the mail queue. The quota limit is then bumped up for this a/c, the MTA
 retries delivery and succeeds.
...

  Cyrus sees it as two separate deliveries.  It can't tell that this happens to
be the same message that it delivered before.

Tom




Re: Period in mailbox names (again)

2001-03-29 Thread Tom Samplonius


On Mon, 26 Mar 2001, Joel M. Baldwin wrote:

 
 How does this work if you want to create a folder
 with a '/' in the name?
 
 Sounds like we've still got a reserved character that can't
 be in folder names.

  Something needs to be the hierarchy separator.

Tom




Re: Disabling quota for user

2001-02-12 Thread Tom Samplonius


On Mon, 12 Feb 2001 [EMAIL PROTECTED] wrote:

 When I create a mailbox, quota is disabled.
 Then I set quota for that mailbox to an integer value.
 How can I disable quota again? Setting quota to 0 is not right.
 Thx
 Gianluigi Tiesi


  Setting quota to "none" works.  Setting it to "-1" might also work.

Tom




Re: how do you debug deliver?

2000-12-13 Thread Tom Samplonius


On Wed, 13 Dec 2000, Marc G. Fournier wrote:

...
 now, the "problem email" is in the mqueue, which, of course, isn't in a
 format that I can 'cat' into deliver ... any suggestions?

  Easy enough.  deliver just wants a message in RFC-822 format.  You can
easily chop up a sendmail queue file in an editor.
Tom




Re: Cyrus and NFS

2000-11-15 Thread Tom Samplonius


On Wed, 15 Nov 2000 [EMAIL PROTECTED] wrote:

 Hi,
 
 Has anybody got Cyrus IMAP working successfully with an NFS-mounted Network 
Appliance filer?  We are using Cyrus under Linux. 
 The mailboxes file is stored on the NetApp and appears to go offline.  The only 
solution is to remount the NFS device.  Is this a Cyrus or an NFS problem?  And 
are there any other known issues using Cyrus with NFS? 

  It won't work.  First, there are locking issues.  Given a good client and
server implementation of NFS, those can likely be overcome.  The quality of
Linux NFS is determined by which kernel you are using.  The locking issue
can be avoided by using only a single Cyrus server to access the
message store over NFS.

  Then there is the use of mmap(), which probably isn't possible to
overcome.  For instance, Solaris NFS clients refuse to mmap() files on an
NFS server.  On implementations that allow mmap() over NFS, don't expect
it to work in the consistent way that local mmap() with a unified buffer
cache should work.

 Regards
 Pat Fenton

Tom