try cutting things out piece by piece until the memory stops growing, regardless of 
whether the app still works correctly for the user - this is just to find the hole.
 a) what i suggested for reducing one 100000-row query into 1000 small 
(identical) ones - to see whether it's because of orm.query or something else
 b) from the workflow below, cut things out one by one until the memory stops 
growing - e.g. does it still grow:
  - with no query?
  - with no eager loading?
  - if the 100000 objects are made by you (not by the query)?
  - ... whatever you guess
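suggestion (a) can be sketched roughly like this - a pure-python stand-in, where `fetch_batch` is a hypothetical placeholder for the real ORM call (something along the lines of `session.query(Master).limit(size).offset(start)` in 0.3). the point is that each batch can be dropped (and the session cleared) before the next one is loaded, so you can watch whether memory still climbs:

```python
# Sketch of suggestion (a): split one huge query into many small ones.
# `fetch_batch` is a hypothetical stand-in for the real ORM query with
# limit/offset; swap in the actual call in your app.

def fetch_batch(start, size, total=100000):
    # Stand-in data source; in the real app this would hit the database.
    end = min(start + size, total)
    return list(range(start, end))

def process_in_batches(batch_size=1000, total=100000):
    """Yield results batch by batch so each batch can be released
    before the next one is loaded."""
    start = 0
    while start < total:
        batch = fetch_batch(start, batch_size, total)
        if not batch:
            break
        yield batch
        # In the real app: call session.clear() here, since the 0.3
        # Session otherwise keeps every loaded instance referenced.
        start += batch_size

count = 0
for batch in process_in_batches():
    count += len(batch)
print(count)  # 100000
```

if memory stays flat with 1000-row batches but explodes with one 100000-row query, the hole is in how the big result set is processed, not in your own code.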

On Wednesday 18 June 2008 17:40:14 Arun Kumar PG wrote:
> thanks for the instant reply guys!
>
> as my app is in production, i cannot afford to bring things down
> right away for a 0.4/0.5 migration. eventually (in the next month) i
> will move to 0.4/0.5. so for the time being (at least for the next
> month) i am looking for the best solution on 0.3.x so that users are
> not affected.
>
> michael, as you mentioned explicit cleaning of the session, i am
> doing that currently. let me quickly describe the flow of a request
> so that you guys have more information:
>
> - a search request comes in
> - if the orm mapping is not created yet, it gets created now (only
> happens one time)
> - a new session is created and attached to the current thread (this
> is done so that different DAOs can access the same session from the
> current thread)
> - all orm queries are fired.. results processed
> - finally, the current thread is accessed again, the session attached
> earlier is retrieved, session.clear() is invoked and del session done.
>
> what's the best way to deal with the problem now...
>
> thanks,
>
> - A
>
>
>
> On Wed, Jun 18, 2008 at 7:49 PM, Michael Bayer
> <[EMAIL PROTECTED]> wrote:
> > On Jun 18, 2008, at 9:59 AM, Arun Kumar PG wrote:
> > > hi all folks,
> > >
> > > i have a search form that allows user to search for records.  i
> > > am eager loading 4 attributes on the master object which
> > > results in 4 left outer joins in the sa's sql query. the
> > > problem is that when i look at the memory consumption using top
> > > command it looks crazy.
> > >
> > > the memory shoots up by 50-60 MB instantly (some times even
> > > 100+ MB). i executed the query on db directly and the results
> > > are returned in 3 secs (close to around 60,000 rows). sa is
> > > spending a good amount of time processing the results and while
> > > it is doing that i see abnormal memory growth. also the cpu is
> > > used almost 98% during this time.
> > >
> > > the interesting thing is that after processing the request the
> > > memory does not comes down. it stays there only. i dont know
> > > why its not gc'ed.
> > >
> > > my environment:
> > > - mysql 4.1
> > > - sa 0.3.9
> > > - python 2.4
> > >
> > > is there any chance that memory is getting leaked, as i don't
> > > see the memory come down even after some time?
> >
> > The Session in 0.3 does not automatically release references to
> > any data it loads; it has to be cleaned out manually using
> > session.expunge(obj) or session.clear().  From 0.4 forward the
> > Session uses weak references, so unreferenced, clean objects fall
> > out of scope automatically.  0.4 also eager loads many rows about
> > 30% faster than 0.3, and 0.5 is about 15% faster than 0.4.  ORMs
> > in general are designed for rich in-memory functionality and are
> > not optimized for loading many tens of thousands of rows, so for
> > better performance overall consider non-ORM access to these rows.
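(michael's "non-ORM access" suggestion means fetching plain row tuples from a cursor instead of building full ORM instances with identity-map bookkeeping. a self-contained sketch, using stdlib sqlite3 only so it runs anywhere - in the real app this would be a SQLAlchemy connection-level select() against mysql:)

```python
import sqlite3

# Sketch of non-ORM access for large result sets: iterate a cursor
# and get lightweight tuples, instead of loading 60,000 mapped
# objects into a session.  sqlite3 stands in for the real database.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO master (name) VALUES (?)",
    [("row%d" % i,) for i in range(1000)],
)

# Iterate the cursor rather than calling fetchall(), so rows are
# consumed one at a time instead of all being held in memory.
cursor = conn.execute("SELECT id, name FROM master")
count = 0
for row in cursor:   # each row is a plain tuple, not a mapped object
    count += 1
print(count)  # 1000
```

each tuple is cheap to create and immediately collectable once you are done with it, which is why this path avoids both the cpu cost and the memory growth the ORM path shows on 60,000-row results.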



--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---
