Re: [Dovecot] Keeping \Seen flag private

2008-06-28 Thread Timo Sirainen
On Fri, 2008-06-27 at 15:09 +0100, Imobach González Sosa wrote:
 Hi all,
 
 I want to set up shared folders for a couple of users, and I'd like 
 everyone to keep the \Seen flag private. So if user #1 reads some 
 messages and user #2 doesn't, those messages appear as unseen to #2 and 
 seen to #1.
 
 I've implemented shared folders using namespaces with every user having their 
 own control and private directories. But all the flags (\Seen included) 
 are shared.
 
 Am I on the right path? Any tips or documentation?

I updated http://wiki.dovecot.org/SharedMailboxes now to mention flag
sharing.
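
For reference, a minimal sketch of the kind of namespace being discussed
(Dovecot 1.x style; the paths and prefix here are hypothetical, not taken
from the wiki page):

namespace public {
  separator = /
  prefix = shared/
  location = maildir:/var/mail/shared:CONTROL=~/Maildir/shared/control:INDEX=~/Maildir/shared/index
}

Note that with maildir the standard flags (\Seen included) are stored in
the message filename itself, which is why they end up shared even when
each user has private CONTROL and INDEX directories.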





Re: [Dovecot] Upgrade to 1.1 maildir shared folders problems

2008-06-28 Thread Timo Sirainen
On Fri, 2008-06-27 at 10:50 +0300, [EMAIL PROTECTED] wrote:
 Hello,
 
 We recently upgraded to Dovecot 1.1 from 1.0.5 and we are having a few
 issues:
 
 1) Maildirs are not created anymore for new users if they don't exist;
 the directories were created before. Is this a configuration issue?

I did change the code so that maildir creation is delayed until it's
really needed. But I don't know of any bugs in this, and I can't
reproduce your problem. Could you show the error messages you get in the
log file when a new user tries to access the mailbox?

 2) Cannot subscribe to shared folders; if already subscribed they work
 fine, but cannot re-subscribe. Also, I noticed that the control directory
 for the namespace is not created anymore; before, it was. Any ideas?

Again, the control directory creation is delayed until it's needed, so
that might not be the actual problem. Is there anything in the error logs
when this happens?

Also, could you try it manually so you can see exactly what Dovecot
replies to the subscribe commands? Try something like:

telnet localhost 143
a login username password
b unsubscribe shared/some-mailbox
c subscribe shared/some-mailbox

What are the replies?
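
For comparison, a successful run would normally end with replies along
these lines (the exact wording can vary between versions):

b OK Unsubscribe completed.
c OK Subscribe completed.

A failure shows up as a NO (or BAD) reply together with the reason.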




Re: [Dovecot] Dovecot corrupted index cache

2008-06-28 Thread Timo Sirainen
On Wed, 2008-06-25 at 09:40 -0700, Michael D. Godfrey wrote:
  A guess would be that this is likely due to the endianness of the 
  multiple architectures that the index is being accessed with. We have 
  the same issue here across i686/x86_64/sparc. I'm about to post to an 
  older email thread about this as well. 
 This is a good guess.  We use a mixture of i386 and x86_64.  This is not 
 an endianness conflict, but it could be the problem for other reasons.

So you use NFS?





Re: [Dovecot] Dovecot index, NFS, and multiple architectures

2008-06-28 Thread Timo Sirainen
On Wed, 2008-06-25 at 12:00 -0400, David Halik wrote:
 I just reproduced the environment and the index corrupted immediately 
 across NFS because of the endian issue.
 
 Jun 25 11:53:34 host IMAP(user): : Rebuilding index file 
 /dovecot-index/index/user/.INBOX/dovecot.index: CPU architecture changed
 Jun 25 11:53:35 host IMAP(user): : Corrupted index cache file 
 /dovecot-index/index/user/.INBOX/dovecot.index.cache: field header 
 points outside file

I'll check later if I can reproduce this.
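
In the meantime, note that the .cache file only holds cached metadata and
is safe to delete; Dovecot rebuilds it on the next access:

rm /dovecot-index/index/user/.INBOX/dovecot.index.cache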

 This was starting from a clean index, first opening pine on the NFS 
 Solaris 9 sparc machine, and then at the same time opening pine on my 
 Fedora 9 i386 workstation.

Why does it matter where you run Pine? Does it directly execute Dovecot
on the local machine instead of connecting via TCP?

 I'm going to try the idea of splitting the indexes into two different 
 architectures, but I'm worried that this will not be feasible when we 
 try to scale to our 80,000 users.

I'd suggest not running Dovecot on different architectures. For example,
if you're on a non-x86 machine, make it connect via TCP to an x86 Dovecot
server.
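
With Pine, for example, that would mean configuring a remote IMAP inbox
instead of letting it read the mail store directly; a sketch for ~/.pinerc
(the hostname and username are hypothetical):

# remote IMAP inbox on the x86 Dovecot server:
inbox-path={imap.example.com/user=jdoe}INBOX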

 By the way, I don't think this is related to the corruption, but we also 
 have tons of these in the logs:
 
 Jun 25 11:52:32 host IMAP(user): : Created dotlock file's timestamp is 
 different than current time (1214409234 vs 1214409152): 
 /dovecot-index/control/user/.INBOX/dovecot-uidlist
 Jun 25 11:52:32 host IMAP(user): : Created dotlock file's timestamp is 
 different than current time (1214409235 vs 1214409152): 
 /dovecot-index/control/user/.INBOX/dovecot-uidlist

Dovecot really wants the clocks to be synchronized between the NFS
clients and the server. If the clock difference is more than one second,
you'll get problems.
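
An easy way to check each NFS client is to query its offset without
setting the clock (the server name here is just an example):

ntpdate -q ntp.example.com

Every machine that touches the indexes should report an offset well under
one second.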




Re: [Dovecot] Keeping \Seen flag private

2008-06-28 Thread Imobach González Sosa
On Friday 27 June 2008 21:08:30, Asheesh Laroia wrote:
 On Fri, 27 Jun 2008, Imobach González Sosa wrote:
  There's no problem in disallowing the users to update the \Seen flag.
  What I want is for every user to have their own \Seen flags.

 Timo and others will know more; but as for me, I would just deliver the
 messages to multiple people and make sure that Dovecot's deliver uses
 hardlinks for the multiple deliveries - that way, the message flags in the
 filename are still canonical.

Yes, it could be a solution. But our users' requirements are a bit... 
strange? They want one of them to manage the folder (with subfolders) while 
the rest of them can only read messages.

Thanks anyway for your suggestion!

-- 
Imobach González Sosa
Banot.net
http://www.banot.net/


Re: [Dovecot] Keeping \Seen flag private

2008-06-28 Thread Imobach González Sosa
On Saturday 28 June 2008 07:25:31, Timo Sirainen wrote:
 On Fri, 2008-06-27 at 15:09 +0100, Imobach González Sosa wrote:
  Hi all,
 
  I want to set up shared folders for a couple of users, and I'd like
  everyone to keep the \Seen flag private. So if user #1 reads some
  messages and user #2 doesn't, those messages appear as unseen to #2 and
  seen to #1.
 
  I've implemented shared folders using namespaces with every user having
  their own control and private directories. But all the flags (\Seen
  included) are shared.
 
  Am I on the right path? Any tips or documentation?

 I updated http://wiki.dovecot.org/SharedMailboxes now to mention flag
 sharing.

Ah, great! Thank you very much, Timo!

-- 
Imobach González Sosa
Banot.net
http://www.banot.net/


Re: [Dovecot] courier IMAP to dovecot migration: folders not showing up

2008-06-28 Thread Charles Marcus

On 6/27/2008 8:27 PM, Jacob Yocom-Piatt wrote:
 any clues on how to fix this issue would be welcome.


It would probably be helpful to provide the output of dovecot -n so we
can see your config.
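
That is, run this on the server and paste what it prints; it outputs only
the settings that differ from the defaults:

dovecot -n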


--

Best regards,

Charles


Re: [Dovecot] Dovecot index, NFS, and multiple architectures

2008-06-28 Thread David Halik

  This was starting from a clean index, first opening pine on the NFS
  Solaris 9 sparc machine, and then at the same time opening pine on my
  Fedora 9 i386 workstation.

 Why does it matter where you run Pine? Does it directly execute Dovecot
 on the local machine instead of connecting via TCP?

Correct. We have dovecot executing locally in each instance, with the 
index being shared. I'll try the TCP method and get back to you. By the 
way, the only reason I'm specifically doing it this way is to test out 
what might possibly happen to our user group.


We have approximately 50,000 student accounts and 20,000 staff accounts 
that all access mail in multiple fashions. We want to be able to roll out 
dovecot everywhere, but to do this it has to be resilient enough to handle 
multiple instances of dovecot on multiple architectures. For example, a 
student logs into a webmail machine (sparc) and then ssh's into a linux 
frontend server and opens pine at the same time. This scenario isn't 
likely to happen, but it could. We're just trying to cover all 
possibilities. Hence we're running the local dovecot/pine and the 
server-side dovecot/pine... trying to see how it holds up.


So far it's been great minus the endianness issue. By the way, we're trying 
out separating the indexes by arch and it's working pretty well right now. 
The only concern is how it's going to scale with regard to disk usage if 
we have double the number of indexes per account. We figure a max of 10MB 
per index, multiplied by 2, multiplied by 70,000... not a small number at 
all, but that's for us to worry about. ;) Of course, that is a worst-case 
scenario.
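
Spelled out, that worst case is 10 MB x 2 architectures x 70,000 accounts
= 1,400,000 MB, or roughly 1.4 TB.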

 I'd suggest not running Dovecot on different architectures. For example,
 if you're on a non-x86 machine, make it connect via TCP to an x86 Dovecot
 server.

I'm going to try that out and get back to you.



By the way, I don't think this is related to the corruption, but we also
have tons of these in the logs:

Jun 25 11:52:32 host IMAP(user): : Created dotlock file's timestamp is
different than current time (1214409234 vs 1214409152):
/dovecot-index/control/user/.INBOX/dovecot-uidlist
Jun 25 11:52:32 host IMAP(user): : Created dotlock file's timestamp is
different than current time (1214409235 vs 1214409152):
/dovecot-index/control/user/.INBOX/dovecot-uidlist


Dovecot really wants that clocks are synchronized between the NFS
clients and the server. If the clock difference is more than 1 second,
you'll get problems.



I figured. Looks like we need to be a little more strict with NTP. ;)


Re: [Dovecot] Dovecot corrupted index cache

2008-06-28 Thread Michael D Godfrey


On Wed, 2008-06-25 at 09:40 -0700, Michael D. Godfrey wrote:
   A guess would be that this is likely due to the endianness of the 
   multiple architectures that the index is being accessed with. We have 
   the same issue here across i686/x86_64/sparc. I'm about to post to an 
   older email thread about this as well. 
  This is a good guess.  We use a mixture of i386 and x86_64.  This is not 
  an endianness conflict, but it could be the problem for other reasons.

 So you use NFS?

Clients mount off of the server, but dovecot (IMAP) connections are made
directly.  Home directories are not mounted, so I do not think NFS can be
affecting dovecot.  I think the problem is due to two users using the same
account making simultaneous accesses.  It could be that the users need to
be on machines with differing architectures (i386 and x86_64 in our case).

Anything more I can tell you?

Michael




Re: [Dovecot] Dovecot corrupted index cache

2008-06-28 Thread Timo Sirainen
On Sat, 2008-06-28 at 10:00 -0700, Michael D Godfrey wrote:
 
 On Wed, 2008-06-25 at 09:40 -0700, Michael D. Godfrey wrote:
    A guess would be that this is likely due to the endianness of the 
    multiple architectures that the index is being accessed with. We have 
    the same issue here across i686/x86_64/sparc. I'm about to post to an 
    older email thread about this as well. 
   This is a good guess.  We use a mixture of i386 and x86_64.  This is not 
   an endianness conflict, but it could be the problem for other reasons.

  So you use NFS?

 Clients mount off of the server, but dovecot (IMAP) connections are made
 directly.  Home directories are not mounted, so I do not think NFS can be
 affecting dovecot.  I think the problem is due to two users using the same
 account making simultaneous accesses.  It could be that the users need to
 be on machines with differing architectures (i386 and x86_64 in our case).

I don't really understand. If there is no NFS, then that means you have
only one Dovecot server. So how can one Dovecot server be both i386 and
x86_64? Or if you mean the client machines are i386/x86_64, Dovecot
doesn't even know about them, so that doesn't matter.





[Dovecot] fd limit 1024 is lower in dovecot-1.1.1

2008-06-28 Thread Zhang Huangbin

Hi, all.

I just upgraded from 1.0.15 to 1.1.1 on a test box (RHEL 5.2, x86_64).

After the upgrade, I got this warning message:

8<----------------
# /etc/init.d/dovecot restart
Stopping Dovecot Imap: [  OK  ]
Starting Dovecot Imap: Warning: fd limit 1024 is lower than what Dovecot 
can use under full load (more than 1280). Either grow the limit or 
change login_max_processes_count and max_mail_processes settings

  [  OK  ]
8<----------------

But I changed both login_max_processes_count and max_mail_processes
to 2048, and it still printed the same warning. How can I solve this issue?
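
Note that the warning's threshold scales with those settings (roughly the
login process limit plus max_mail_processes plus some overhead), so raising
them also raises the number of fds Dovecot may need. The other route is
growing the fd limit itself before Dovecot starts; a sketch, assuming the
stock init script can be edited:

# near the top of /etc/init.d/dovecot, before the daemon is started:
ulimit -n 4096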

Thanks very much.

My dovecot -n output:

8<----------------
# dovecot -n
# 1.1.1: /etc/dovecot.conf
Warning: fd limit 1024 is lower than what Dovecot can use under full 
load (more than 1280). Either grow the limit or change 
login_max_processes_count and max_mail_processes settings

log_path: /var/log/dovecot.log
protocols: pop3 pop3s imap imaps
listen: *
ssl_cert_file: /etc/pki/dovecot/certs/dovecotCert.pem
ssl_key_file: /etc/pki/dovecot/private/dovecotKey.pem
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
max_mail_processes: 1024
mail_uid: 2000
mail_gid: 2000
mail_location: maildir:/%Lh/%Ld/%Ln/:INDEX=/%Lh/%Ld/%Ln/
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/lib64/dovecot/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/imap
mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
auth default:
 mechanisms: plain login
 user: vmail
 passdb:
   driver: sql
   args: /etc/dovecot-mysql.conf
 userdb:
   driver: sql
   args: /etc/dovecot-mysql.conf
 socket:
   type: listen
   client:
 path: /var/spool/postfix/private/auth
 mode: 432
 user: postfix
 group: postfix
   master:
 path: /var/run/dovecot/auth-master
 mode: 432
 user: vmail
 group: vmail

--
Best Regards.

Zhang Huangbin

- Mail Server Solution for Red Hat(R) Enterprise Linux & CentOS 5.x:
 http://rhms.googlecode.com/