Looks like our homegrown code is the problem. Using DBServerInfo as
pygr provides it doesn't reproduce this memory usage. It may be that
generating a cursor directly through sqlalchemy does something with
all of the tables in the database...
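
For reference, here is roughly the difference between the two approaches
(the connection parameters and table name below are placeholders, not our
real setup):

    from pygr import sqlgraph

    # what works: let pygr open and manage the connection itself
    server = sqlgraph.DBServerInfo(host='localhost', user='reader',
                                   passwd='secret')
    slicedb = sqlgraph.SQLTable('test.exon_slices', serverInfo=server)

    # what our homegrown code was doing: building a raw DB-API cursor
    # through sqlalchemy and handing that to SQLTable instead
    import sqlalchemy
    engine = sqlalchemy.create_engine('mysql://reader:secret@localhost/test')
    cursor = engine.raw_connection().cursor()
    slicedb = sqlgraph.SQLTable('test.exon_slices', cursor=cursor)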

Kenny

On Feb 22, 5:49 pm, Christopher Lee <l...@chem.ucla.edu> wrote:
> On Feb 22, 2010, at 5:37 PM, Kenny Daily wrote:
>
> > from pygr import worldbase
>
> > # find the AnnotationDB resources under Bio.Test.Annotation
> > all_resources = worldbase.dir("Bio.Test.Annotation")
> > dbs = filter(lambda x: x.endswith("db"), all_resources)
>
> > for x in dbs:
> >     resource = worldbase(x)   # load the resource...
> >     resource.close()          # ...and close it right away
>
> > This uses ~50-100MB per resource loaded, and it accumulates. In my
> > case, the AnnotationDB's sliceDB is a MySQL table. Any ideas about
> > where the memory could be going?
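>
> > (For anyone trying to reproduce this: the growth shows up in the
> > process RSS, e.g.
>
> >     import resource
> >     # max resident set size: KB on Linux, bytes on OS X
> >     print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
>
> > printed after each load.)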
>
> Our XMLRPC server loads all the AnnotationDBs that it serves (at least
> 100-200 of them) with little memory usage, so this horrible memory usage
> absolutely should not be happening to you.  If you tested this with beta1
> code, please re-run your example with an official release, e.g. 0.8.0 or
> 0.8.1.  I suspect that this may solve your problem: as I recall, we
> discovered a nasty bug in MySQLdb that uses up massive amounts of memory
> when you simply ask for an iterator.  Starting with 0.8.0 we included a
> workaround to avoid that problem.
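>
> The gist, as I remember it: MySQLdb's default cursor buffers the entire
> result set on the client, so merely iterating over a big table pulls
> everything into memory at once.  The workaround is along these lines (a
> sketch, not the exact 0.8.0 code):
>
>     import MySQLdb
>     import MySQLdb.cursors
>
>     # hypothetical connection parameters
>     conn = MySQLdb.connect(host='localhost', user='reader',
>                            passwd='secret', db='test',
>                            cursorclass=MySQLdb.cursors.SSCursor)
>     cursor = conn.cursor()
>     cursor.execute('SELECT * FROM exon_slices')
>     for row in cursor:
>         # SSCursor streams rows from the server instead of
>         # buffering the whole result client-side
>         pass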
