I have also tried the sequential and forked methods vs. the default
threaded method for the scheduler's processmethod. I have tried creating
my own SQLObject connection with a URI and sqlhub.processConnection, but
it seems to give the same results.
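
For reference, the processConnection attempt looked roughly like this
(a sketch, not the exact code; the connection URI is elided as elsewhere
in the thread):

# Give SQLObject one process-wide connection built from the URI.
from sqlobject import connectionForURI, sqlhub
sqlhub.processConnection = connectionForURI('postgres://URI')
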
On Feb 19, 3:02 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> Still wrestling with this. Does anyone have suggestions for things I
> should try?
>
> Thanks.
>
> On Feb 17, 4:17 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
>
> > Thanks for the reply. I tried what you suggested, still with the same
> > results.
> > I also tried some things from another discussion I found on here, but
> > that didn't work either.
> > I am using TG 1.0 and SQLObject.
>
> > # tried both with and without the extra URI setting
> > from turbogears import database
> > database.set_db_uri('postgres://URI')
>
> > # No errors or exceptions from this code, but same result: TG/SQLObject
> > # hangs on to the DB connection and doesn't let it go.
> > from sqlobject.util.threadinglocal import local as threading_local
> > hub.threadingLocal = threading_local()
> > hub.begin()
> > loads = Test.select()
> > hub.commit()
> > hub.end()
>
> > The scheduler in production runs every 10 minutes, and with 100 DB
> > connections available I have to restart TG every 16 hours to make sure
> > we don't get connection-limit-exceeded errors. I have debug and
> > debugOutput turned on for SQLObject, but I am not seeing anything that
> > really helps me figure out what I should do. I will look for more
> > information about acquiring and releasing DB connections from a
> > scheduler thread.
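>
> > Roughly what I have in mind next (an untested sketch, combining the
> > thread-local trick above with an explicit release in a finally block):
>
> > def test_schedule():
> >     # Fresh thread-local store for this scheduler thread, as above.
> >     from sqlobject.util.threadinglocal import local as threading_local
> >     hub.threadingLocal = threading_local()
> >     hub.begin()
> >     try:
> >         test_result = Test.select()
> >         hub.commit()
> >     finally:
> >         # Always give the connection back, even if the select raises.
> >         hub.end()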
>
> > On Feb 17, 2:29 am, "Richard Clark" <[EMAIL PROTECTED]> wrote:
>
> > > Ok, keep in mind this stuff is for TG 1.0; it may not apply to later
> > > versions.
>
> > > The important thing to remember about the scheduler is that when your
> > > job gets called you're running in a "user"-created thread, not in a
> > > TG-created thread.
>
> > > As such, the database needs managing the same way it would if you were
> > > running your own thread: you need to acquire a thread-local store,
> > > create a new database connection, and so on.
>
> > > It works basically like this, in your scheduled function:
>
> > > def MyScheduledFunction():
> > >     # By getting the correct threading_local, the hub will presume it
> > >     # isn't already connected (if this thread hasn't been used yet)
> > >     # and get you a connection.
> > >     from sqlobject.util.threadinglocal import local as threading_local
> > >     hub.threadingLocal = threading_local()
> > >     hub.begin()
> > >     # now you can do your stuff
> > >     hub.commit()
>
> > > This may or may not help with your particular problem, I have dozens
> > > of solutions to things that slowly go out of fashion as turbogears
> > > moves on :)
>
> > > > Every time the job runs (every 10 seconds) it opens a Postgres
> > > > connection and leaves it in the "idle in transaction" state. I can't
> > > > figure out how to get the job to properly close the connection. I
> > > > tried adding hub.begin() before and hub.end() after the select()
> > > > call, but with the same results. Any ideas or nudges in the right
> > > > direction? Thanks.
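>
> > > > The begin()/end() attempt was roughly this, and it still leaves the
> > > > same "idle in transaction" connection behind:
>
> > > > def test_schedule():
> > > >     hub.begin()
> > > >     test_result = Test.select()
> > > >     hub.end()
> > > >     return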
>
> > > > I am using a file called jobs.py
>
> > > > from model import Test, hub
> > > > def test_schedule():
> > > >     test_result = Test.select()
> > > >     return
>
> > > > def schedule():
> > > >     from turbogears import scheduler
> > > >     scheduler.add_interval_task(action=test_schedule,
> > > >                                 taskname='Test Schedule',
> > > >                                 initialdelay=0,
> > > >                                 interval=10)
> > > >     return
>
> > > > In my start-test.py I have the entry
>
> > > > from test import jobs
> > > > jobs.schedule()