I must be doing something silly and wrong; hopefully I can get some help from this list. I am writing a blocking call in the C API which is serviced by a pool of other threads. When the call is made I want to let Stackless continue processing, so I did this, which works great:

PyObject* getData(PyObject *self, PyObject *args)
{
    // blah blah: construct the request on the heap
    Request *request = newRequest( args ); // illustrative helper; see the sketch below
    request->task = (PyTaskletObject *)PyStackless_GetCurrent(); // new reference to the calling tasklet
    queueRequest( request ); // this will get picked up by the thread pool
    PyStackless_Schedule( Py_None, 1 ); // remove=1 pauses this tasklet; away it goes back into python-land

    Py_DECREF( request->task ); // if we got here it's because we were woken up

    // blah blah: construct the response and return it
}
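
For context, the request object and the queue are roughly like this (the names and the queue implementation are illustrative, not my exact code):

typedef struct Request
{
    PyTaskletObject *task;   /* the tasklet blocked waiting on this request */
    /* ... input parameters and room for the response ... */
    struct Request  *next;
} Request;

Request *newRequest( PyObject *args );  /* heap-allocate and fill in from args */
void     queueRequest( Request *req );  /* push; guarded by a plain OS mutex */
Request *dequeueRequest( void );        /* pop; blocks a worker until a job arrives */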


The thread pool does this:


void worker()
{
    // blah blah: wait on the queue for a job and do the actual work (may take a while, or return immediately)
    Request *request = dequeueRequest(); // illustrative helper; see the sketch above

    // prevent a race: deadlock if this runs before the _Schedule call has paused the tasklet
    while( !PyTasklet_Paused( request->task ) )
    {
        Sleep(0); // yield this worker thread
    }

    PyTasklet_Insert( request->task ); // wake the paused tasklet
}
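
For completeness, the pool is just ordinary OS threads spun up at module init; a Win32 sketch (I'm on Windows, hence the Sleep), again with illustrative names:

#include <process.h>

static unsigned __stdcall workerMain( void *arg )
{
    for (;;)
        worker(); /* service one request per iteration */
    return 0;
}

void startPool( int n )
{
    int i;
    for ( i = 0; i < n; i++ )
        _beginthreadex( NULL, 0, workerMain, NULL, 0, NULL );
}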


This works just fine... until I stress it. The following Python program crashes instantly, in unpredictable (memory-corruption) ways:

import stackless
import dbase

def test():
    while 1:
        dbase.request()

for i in range( 10 ):
    task = stackless.tasklet( test )()
while 1:
    stackless.schedule()


BUT if I change that range from 10 to 1, it runs all day long without any problems. Clearly I am doing something that isn't thread safe, but what? I've tried placing mutexes around the PyTasklet_ calls, but since PyStackless_Schedule doesn't return until the tasklet is woken again, that isn't possible everywhere. If I add Sleeps it helps; in fact, if I make them long enough the problem goes away and all 10 tasklets run concurrently.
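
Roughly what the mutex attempt looked like (a sketch): the worker side is easy to wrap, but the tasklet side can't hold the lock across PyStackless_Schedule, because that call doesn't return until the worker has already re-inserted the tasklet:

static CRITICAL_SECTION schedLock; /* InitializeCriticalSection() at startup */

/* worker side -- easy to guard: */
EnterCriticalSection( &schedLock );
PyTasklet_Insert( request->task );
LeaveCriticalSection( &schedLock );

/* tasklet side -- no symmetric guard is possible: holding schedLock
   across this call would deadlock, because the worker needs the lock
   in order to wake us up. */
PyStackless_Schedule( Py_None, 1 );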

I assume PyStackless_Schedule()/PyTasklet_Insert() are meant to be used this way (i.e. from multiple threads); can anyone tell me which stupid way I have gone wrong?
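
One thing I haven't ruled out: do the worker threads need to hold the GIL around every Py* call? Something like the following, using the standard PyGILState calls for foreign threads (just a sketch of what I mean):

PyGILState_STATE gstate = PyGILState_Ensure(); /* acquire the GIL in this foreign thread */

while( !PyTasklet_Paused( request->task ) )
{
    /* drop the GIL while yielding, otherwise the main thread can
       never run and the tasklet never becomes paused */
    PyGILState_Release( gstate );
    Sleep(0);
    gstate = PyGILState_Ensure();
}

PyTasklet_Insert( request->task );
PyGILState_Release( gstate );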

