Simon Cross wrote:
> On 6/5/07, jonhattan <[EMAIL PROTECTED]> wrote:
>> The memory grows and grows indefinitely. If I change to Postgres the
>> problem disappears. I've searched a lot about SQLObject + SQLite and
>> threading without finding a solution.
>
> I imagine that the important difference between Postgres and Sqlite is
> that Sqlite connections are only usable on the thread they're created
> on. So if you're starting 10 000 threads, SQLObject has to create 10
> 000 connections to the SQLite database while in the Postgres case
> threads can share connections.
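For what it's worth, that per-thread restriction is visible with the stdlib
sqlite3/pysqlite module on its own, without SQLObject involved at all: by
default a connection refuses to be used from a thread other than the one that
created it. A tiny, untested illustration:

import sqlite3
import threading

conn = sqlite3.connect(":memory:")     # created on the main thread

def use_elsewhere():
    try:
        conn.execute("SELECT 1")
    except sqlite3.ProgrammingError, e:
        print "refused:", e            # pysqlite's same-thread check fires

t = threading.Thread(target=use_elsewhere)
t.start()
t.join()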
I figured out that connections are destroyed when the thread finishes, so
that should increase the load on the system rather than the memory
consumption, shouldn't it?
> It's possible that this alone is causing your problems (tests in our
> work code reliably trigger Sqlite problems with only 20 threads
> writing concurrently).
Actually, I have a limit of 20 threads:

while threading.activeCount() > 19:
    sleep(5)
What I see is that threads are not being 'freed' as they finish. Perhaps the
real question is: how can I destroy all references to a SQLObject instance?
If I do:

import sys

o = mySQLObject(item=xxx)
sys.getrefcount(o) - 1   # the value is 2
del o                    # still one reference left, so the object
                         # is not removed from memory
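To see what is still holding the instance after the del, a small diagnostic
with the stdlib gc and weakref modules might help (untested, and assuming the
instances can be weak-referenced; mySQLObject and xxx are from my own code
above):

import gc
import weakref

o = mySQLObject(item=xxx)   # same kind of object as above
r = weakref.ref(o)

# what points at the object besides the local name 'o'?
print [type(ref) for ref in gc.get_referrers(o)]

del o
gc.collect()
print r() is None   # False would mean something (the cache?) still holds it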
> You might also want to check whether turning off caching in SQLObject
> helps at all (it's vaguely possible that the extra SQLite connections
> result in many more objects being cached). See
> http://www.sqlobject.org/module-sqlobject.cache.html.
I don't know if you mean something more than

class sqlmeta:
    cacheValues = False

and

connectionForURI("sqlite:///mydb.db?cache=")

Neither of them seems to make any difference.
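A rough way to check whether cached instances really are what piles up, using
only the stdlib gc module (untested; mySQLObject is my own class):

import gc

def count_live(cls):
    # every object of this class still reachable anywhere in the process
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))

gc.collect()
print count_live(mySQLObject)
# ... run one batch of threads / inserts here ...
gc.collect()
print count_live(mySQLObject)

If the count keeps climbing across batches even with cacheValues = False, the
leak is somewhere other than the row cache.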
> If turning off caching doesn't help, I suggest simply rate limiting
> the threads, storing the results in a temporary array and then having
> the main thread do all the writes to sqlite (this shouldn't be any
> slower than having lots of threads write). Something like:
>
> from threading import Thread
> from urllib2 import urlopen
> import time
>
> class get_page(Thread):
>     def __init__(self, id, results):
>         self.id = id
>         self.results = results
>         Thread.__init__(self)
>
>     def run(self):
>         while True:
>             if len(self.results) > 50:   # crude rate limit on pending pages
>                 time.sleep(1)
>                 continue
>             else:
>                 f = urlopen("http://example.com/article/%d" % self.id)
>                 self.results.append(f.read())
>                 break
>
> # main
> tmpdata = []
> done = 0
>
> for i in xrange(1, 10001):
>     t = get_page(i, tmpdata)
>     t.start()
>
> while done < 10000:
>     if not tmpdata:
>         time.sleep(1)
>     else:
>         mySQLObject(item=tmpdata.pop())
>         done += 1
>
> This code is, of course, completely untested. :)
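A variant of that idea using the stdlib Queue module would give the same
producer/consumer shape with the blocking handled for you (also untested, and
using the same mySQLObject class and example URL as above):

import Queue
from threading import Thread
from urllib2 import urlopen

results = Queue.Queue(maxsize=50)   # producers block once 50 pages are waiting

class get_page(Thread):
    def __init__(self, id, results):
        Thread.__init__(self)
        self.id = id
        self.results = results

    def run(self):
        f = urlopen("http://example.com/article/%d" % self.id)
        self.results.put(f.read())   # blocks while the queue is full

# main thread starts the fetchers, then does all the sqlite writes itself
threads = [get_page(i, results) for i in xrange(1, 10001)]
for t in threads:
    t.start()

for _ in threads:
    mySQLObject(item=results.get())   # only the main thread touches sqlite

In practice I'd still cap the number of live threads the way I already do
with activeCount(), but the point stays the same: only the main thread ever
touches the SQLite connection.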
I'll try implementing it that way and report back.
thanks,
jonhattan
>
> Schiavo
> Simon
> --
> I don't see why people are picky about it when the Banach-Tarski
> paradox is clearly a Biblical principle - look at Mark 6:38-44. What,
> you have a different interpretation of the loaves and fishes thing?
> -- Daniel Martin, snowplow.org
>