Re: [Mailman-Users] Re-creating lost mailman archives from GMail account using Gmail API

2005-07-11 Thread Brian Greenberg
On 7/11/05, Jim Tittsler [EMAIL PROTECTED] wrote:
 
 On Jul 11, 2005, at 19:42, Alias wrote:
 
  Now - without getting too much into the details of how the python side
  of things would work, does anyone know how I would go about
  re-creating the archives? [...] Or, better still, is there an existing
  script/utility/commandline switch that I could just run on a directory
  of email files?
 
 The ~mailman/bin/arch script will rebuild the archives given the
 messages in a standard Unix mbox format file.  (Run 'bin/arch --help'
 for more info.  It will explain that typically the mbox will be in
 the archives/private/listname.mbox/listname.mbox )

And you can retrieve all the posts from Gmail via POP.

 http://mail.google.com/support/bin/answer.py?answer=12103

Once in a local client, you can select the relevant messages and save
them in mbox format.
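The fetch-and-save step could also be scripted. Here is a rough sketch in Python (the POP host pop.gmail.com is Gmail's documented POP server; the account name, password, and file names are placeholders, and POP access must first be enabled in the Gmail account settings):

```python
# Sketch: fetch list posts over POP3 and write them to a Unix mbox
# that bin/arch can rebuild the archives from.  Credentials and file
# names are illustrative placeholders.
import mailbox
import poplib
from email import message_from_bytes

def fetch_via_pop(host, user, password):
    """Yield each message on the POP server as raw bytes."""
    conn = poplib.POP3_SSL(host)          # Gmail: pop.gmail.com, port 995
    conn.user(user)
    conn.pass_(password)
    count, _size = conn.stat()
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)
        yield b"\n".join(lines)
    conn.quit()

def save_to_mbox(raw_messages, path):
    """Append raw RFC 2822 messages (bytes) to an mbox file."""
    box = mailbox.mbox(path)
    try:
        for raw in raw_messages:
            box.add(message_from_bytes(raw))
    finally:
        box.close()                       # close() flushes to disk

# Usage (placeholder credentials), then rebuild with:
#   bin/arch listname listname.mbox
#
#   save_to_mbox(fetch_via_pop("pop.gmail.com", "listowner@gmail.com",
#                              "app-password"), "listname.mbox")
```

With the resulting mbox dropped into archives/private/listname.mbox/listname.mbox, bin/arch can rebuild the pipermail pages as described above.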


Brian.
-- 
Brian Greenberg
[EMAIL PROTECTED]
--
Mailman-Users mailing list
Mailman-Users@python.org
http://mail.python.org/mailman/listinfo/mailman-users
Mailman FAQ: http://www.python.org/cgi-bin/faqw-mm.py
Searchable Archives: http://www.mail-archive.com/mailman-users%40python.org/
Unsubscribe: 
http://mail.python.org/mailman/options/mailman-users/archive%40jab.org

Security Policy: 
http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.027.htp


[Mailman-Users] LockFile.py problems + patch.

2004-10-12 Thread Brian Greenberg
I've been getting periodic entries in .../mailman/logs/locks that show:

Oct 08 08:33:50 2004 (6969) listname.lock unexpected linkcount: -1
Oct 08 08:33:50 2004 (6969) listname.lock lifetime has expired, breaking

Lots of error messages, but no apparent problems with list delivery.
I probably would not have noticed but for an oops that tried to
gateway 30,000+ news messages into a test list.  This flooded the log
nicely, and caught my attention.

The final analysis is that while waiting for a lock to be freed, a
waiting process can enter a race condition when the holding process
releases the lock: the now non-existent lock file is checked for its
link count (__linkcount() returns -1), and then has its lifetime
checked (__releasetime() returns -1, which results in an "expired"
lifetime).
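The failure mode can be sketched in isolation. The helper below mirrors the -1 sentinel that LockFile.py's private __linkcount helper returns for a missing file (the helper and the stale file name are illustrative, not Mailman's code):

```python
# Sketch of the race described above: the lock holder deletes the
# lock between the waiter's retry attempts, so the waiter stats a
# file that no longer exists and gets the -1 sentinel back -- which
# then satisfies both the "unexpected linkcount" and the "lifetime
# has expired" checks at once.
import os

def linkcount(path):
    """Hard-link count of path, or -1 if it no longer exists
    (the same sentinel convention as LockFile.__linkcount)."""
    try:
        return os.stat(path).st_nlink
    except OSError:
        return -1

# The holder released (deleted) the lock just before we looked, so
# the waiter logs "unexpected linkcount: -1":
assert linkcount("stale-listname.lock") == -1
```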

The patch:

-
*** mailman-2.1.5/Mailman/LockFile.py   Mon Mar 31 22:28:16 2003
--- LockFile.py Tue Oct 12 14:05:21 2004
***************
*** 264,269 ****
--- 264,271 ----
              # The link failed for some reason, possibly because someone
              # else already has the lock (i.e. we got an EEXIST), or for
              # some other bizarre reason.
+             self.__writelog('Link attempt failed.  OSError is %s' %
+                             os.strerror(e.errno))
              if e.errno == errno.ENOENT:
                  # TBD: in some Linux environments, it is possible to get
                  # an ENOENT, which is truly strange, because this means
***************
*** 283,290 ****
          elif self.__linkcount() <> 2:
              # Somebody's messin' with us!  Log this, and try again
              # later.  TBD: should we raise an exception?
              self.__writelog('unexpected linkcount: %d' %
!                             self.__linkcount(), important=True)
          elif self.__read() == self.__tmpfname:
              # It was us that already had the link.
              self.__writelog('already locked')
--- 285,297 ----
          elif self.__linkcount() <> 2:
              # Somebody's messin' with us!  Log this, and try again
              # later.  TBD: should we raise an exception?
+             links = self.__linkcount()
+             if links == -1:  # The lock was cleared already!
+                 self.__writelog(
+                     'No lockfile after a lockfile exists error?')
+                 continue
              self.__writelog('unexpected linkcount: %d' %
!                             links, important=True)
          elif self.__read() == self.__tmpfname:
              # It was us that already had the link.
              self.__writelog('already locked')
***************
*** 299,305 ****
              raise TimeOutError
          # Okay, we haven't timed out, but we didn't get the lock.  Let's
          # find if the lock lifetime has expired.
!         if time.time() > self.__releasetime() + CLOCK_SLOP:
              # Yes, so break the lock.
              self.__break()
              self.__writelog('lifetime has expired, breaking',
--- 306,317 ----
              raise TimeOutError
          # Okay, we haven't timed out, but we didn't get the lock.  Let's
          # find if the lock lifetime has expired.
!         rel_time = self.__releasetime()
!         if rel_time == -1:  # Lock does not exist anymore?
!             self.__writelog(
!                 'Checked the release time of a non-existent lock.')
!             continue
!         elif time.time() > rel_time + CLOCK_SLOP:
              # Yes, so break the lock.
              self.__break()
              self.__writelog('lifetime has expired, breaking',

--
Brian.
-- 
Brian Greenberg
[EMAIL PROTECTED]

Re: Fwd: Re: [Mailman-Users] OutgoingRunner Failing

2004-09-14 Thread Brian Greenberg
David Richards wrote:

  Hi Brian,
  I have seen the same problem as you described in your post; I was
  running a shared installation via NFS, with OutgoingRunners running
  on each box.

  Would you expect this to have the same results?

While the idea of running MM on multiple machines in itself kind of makes
my head hurt, I'm pretty sure that you're seeing a similar if not the same
problem that I encountered.  The logs you've posted look identical.

From what I could tell from the code, if you are running more than one
instance of OutgoingRunner (or any qrunner, for that matter), you *will* have regular
crashes.  This is because (assuming 4 slices, numbered 0 through 3) each 
slice should manage 1/4 of the queue hash space.  However, as coded, slice 
0 will grab files from the *entire* queue, not just the first quarter. 
This results in a race condition.  The qrunner crash is a result of both 
slice 0 and another slice seeing a file in the last 3/4 of the hash space, 
and both beginning to process it -- one will finish and erase the file, the 
other slice will crash.

Try making the following change.  In mailman-2.1.5/Mailman/Queue/Switchboard.py,
change line 167 from

    if not lower or (lower <= long(digest, 16) < upper):

to

    if (lower == upper) or (lower <= long(digest, 16) < upper):

This completely eliminated my problem.
Brian.
Date: Tue, 14 Sep 2004 14:25:42 +0900
From: Jim Tittsler [EMAIL PROTECTED]  
Subject: Re: [Mailman-Users] OutgoingRunner Failing  
To: David Richards [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]

On Sep 14, 2004, at 09:04, David Richards wrote:

  I have the OutgoingRunner process failing quite regularly, and this has
  resulted in a build up of mail in the qfiles/out directory.  How do I
  find out what is going on in this process for it to be failing like this?

Are there any clues in your logs/error log?  If you are lucky, there
will be a traceback showing why OutgoingRunner is crashing.

(Have you configured Mailman to run with multiple OutgoingRunners in 
your mm_cfg file?  If so, check for Brian Greenberg's recent problem 
report and fix.)
--
+++
+  Brian Greenberg   + University of Manitoba +
+   [EMAIL PROTECTED]   +   ACN -- Unix Software Admin   +
+-+
+ Tasklist and PGP key at http://home.cc.umanitoba.ca/~grnbrg +



[Mailman-Users] Patch: qrunner w/ multiple slices crashing.

2004-08-19 Thread Brian Greenberg
I was having problems with OutgoingRunner repeatedly crashing with 
messages like:

Aug 13 15:27:51 2004 qrunner(23829): IOError :  [Errno 2] No such file 
or directory: 
'/var/priv/mail/mailman/qfiles/out/1092428866.8410051+70dcb0bb96e6460d8cd2aa8103cce318cfa3ed1f.pck'

I believe I have traced the problem to
mailman/Mailman/Queue/Switchboard.py.

Detailed description at:
http://sourceforge.net/tracker/index.php?func=detail&aid=1008983&group_id=103&atid=300103
Brian Greenberg.
Patch:
*** Switchboard.py  Fri Aug 13 16:43:12 2004
--- Switchboard.py_new  Fri Aug 13 16:43:48 2004
***************
*** 164,170 ****
          when, digest = filebase.split('+')
          # Throw out any files which don't match our bitrange.  BAW: test
          # performance and end-cases of this algorithm.
!         if not lower or (lower <= long(digest, 16) < upper):
              times[float(when)] = filebase
      # FIFO sort
      keys = times.keys()
--- 164,170 ----
          when, digest = filebase.split('+')
          # Throw out any files which don't match our bitrange.  BAW: test
          # performance and end-cases of this algorithm.
!         if (lower == upper) or (lower <= long(digest, 16) < upper):
              times[float(when)] = filebase
      # FIFO sort
      keys = times.keys()

--
+++
+  Brian Greenberg   + University of Manitoba +
+   [EMAIL PROTECTED]   +   ACN -- Unix Software Admin   +
+-+
+ Tasklist and PGP key at http://home.cc.umanitoba.ca/~grnbrg +


[Mailman-Users] Fix: OutgoingRunner qrunner crash with multiple slices enabled.

2004-08-16 Thread Brian Greenberg
Mailman 2.1.5 on Solaris 8, with Python 2.3.3.

I was getting the following errors in logs/error and logs/qrunner:

error:

Aug 13 15:16:53 2004 qrunner(7657): Traceback (most recent call last):
Aug 13 15:16:53 2004 qrunner(7657):   File
/usr/local/mailman/bin/qrunner, line 270, in ?
Aug 13 15:16:53 2004 qrunner(7657):  main()
Aug 13 15:16:53 2004 qrunner(7657):   File
/usr/local/mailman/bin/qrunner, line 230, in main
Aug 13 15:16:53 2004 qrunner(7657):  qrunner.run()
Aug 13 15:16:53 2004 qrunner(7657):   File
/usr/local/mailman/Mailman/Queue/Runner.py, line 70, in run
Aug 13 15:16:53 2004 qrunner(7657):  filecnt = self._oneloop()
Aug 13 15:16:53 2004 qrunner(7657):   File
/usr/local/mailman/Mailman/Queue/Runner.py, line 99, in _oneloop
Aug 13 15:16:53 2004 qrunner(7657):  msg, msgdata =
self._switchboard.dequeue(filebase)
Aug 13 15:16:53 2004 qrunner(7657):   File
/usr/local/mailman/Mailman/Queue/Switchboard.py, line 144, in
dequeue
Aug 13 15:16:53 2004 qrunner(7657):  os.unlink(filename)
Aug 13 15:16:53 2004 qrunner(7657): OSError :  [Errno 2] No such file
or directory: 
'/var/priv/mail/mailman/qfiles/out/1092428211.4786341+bad1265375ae36cc455fc7e521e9c39c09a29558.pck'

qrunner:

 Aug 13 15:16:53 2004 (29188) Master qrunner detected subprocess exit
(pid: 7657, sig: None, sts: 1, class: OutgoingRunner, slice: 3/4)
[restarting]
Aug 13 15:16:54 2004 (7005) OutgoingRunner qrunner started.

with 

Aug 08 05:35:34 2004 (716) Qrunner OutgoingRunner reached maximum restart limit 
of 10, not restarting.

showing up eventually, followed by mail building up in the outgoing queue.

I was running four OutgoingRunner instances, set in mm_cfg.py with:

QRUNNERS = [
('ArchRunner', 1), # messages for the archiver
('BounceRunner',   1), # for processing the qfile/bounces directory
('CommandRunner',  1), # commands and bounces from the outside world
('IncomingRunner', 1), # posts from the outside world
('NewsRunner', 1), # outgoing messages to the nntpd
('OutgoingRunner', 4), # outgoing messages to the smtpd
('VirginRunner',   1), # internally crafted (virgin birth) messages
('RetryRunner',1), # retry temporarily failed deliveries
]

The problem is a logic error in mailman/Mailman/Queue/Switchboard.py,
and is fixed with a one-line patch.

The problem is:

In Switchboard.py:__init__, the upper and lower bounds (self.__upper
and self.__lower) are both set to None if there is only a single
instance of the qrunner class in question, and to the correct upper
and lower bounds of each subslice if there is more than one.

In Switchboard.py:files (which returns a list of all files in the
queue directory that this qrunner instance is to process) the
statement that rejects files that are not within the bounds of this
qrunner instance has a logic error.  The line in question:

 if not lower or (lower <= long(digest, 16) < upper):

can be read as "if this is a single-instance qrunner (because lower is
set to None, and therefore false), or if the file is within the upper
and lower bounds of this instance, add it to the list of files."  The
problem is that the first slice of any multi-slice qrunner has a lower
bound of 0, and (not 0) evaluates as true.

This results in slice 0 of any multi-slice qrunner trying to grab
files from the entire queue, rather than its assigned portion,
resulting in a race condition and the crash of one of the qrunners
when slice 0 and slice n try to process the same file at the same
time.
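The truthiness slip is easy to demonstrate standalone (the 0-99 "hash space" and slice bounds below are illustrative stand-ins for Mailman's real 40-hex-digit digest range):

```python
# The buggy vs. patched bitrange test from Switchboard.py:files(),
# reduced to plain integers.  Single-instance qrunners have
# lower == upper == None; multi-slice runners get numeric bounds,
# and slice 0's lower bound is 0.

def accepts_buggy(digest, lower, upper):
    # Original: "not lower" is meant to detect the single-instance
    # case (lower is None), but it is also true when lower == 0.
    return not lower or (lower <= digest < upper)

def accepts_fixed(digest, lower, upper):
    # Patched: the single-instance case is detected by lower == upper.
    return (lower == upper) or (lower <= digest < upper)

# Pretend the hash space is 0..99, split into 4 slices of 25.
digest_in_slice_3 = 90

assert accepts_buggy(digest_in_slice_3, 0, 25)      # slice 0 wrongly claims it
assert not accepts_fixed(digest_in_slice_3, 0, 25)  # patch: slice 0 rejects it
assert accepts_fixed(digest_in_slice_3, 75, 100)    # slice 3 still owns it
assert accepts_fixed(42, None, None)                # single instance: take all
```

Because both slice 0 and the file's true owner claim the same .pck file, one of them unlinks it first and the other crashes on the missing file, exactly as in the traceback.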

Patch:

*** Switchboard.py  Fri Aug 13 16:43:12 2004
--- Switchboard.py_new  Fri Aug 13 16:43:48 2004
***************
*** 164,170 ****
          when, digest = filebase.split('+')
          # Throw out any files which don't match our bitrange.  BAW: test
          # performance and end-cases of this algorithm.
!         if not lower or (lower <= long(digest, 16) < upper):
              times[float(when)] = filebase
      # FIFO sort
      keys = times.keys()
--- 164,170 ----
          when, digest = filebase.split('+')
          # Throw out any files which don't match our bitrange.  BAW: test
          # performance and end-cases of this algorithm.
!         if (lower == upper) or (lower <= long(digest, 16) < upper):
              times[float(when)] = filebase
      # FIFO sort
      keys = times.keys()


Thanks!

Brian.
-- 
Brian Greenberg
[EMAIL PROTECTED]