On Nov 24, 2009, at 10:17 AM, Kristján Valur Jónsson wrote:

> Interesting. My crash that I fixed with the change was precisely because a
> _necessary_ slp_transfer_return() was missed because stackless thought that
> it was in the correct stack.
> An extra slp_transfer_return() should be ok regardless, _if_ you are in fact
> returning from the _correct_ main tasklet.
As I said -- I haven't looked completely -- the crash happens because
tstate->frame is NULL. (I would *guess* at something like: tstate->frame
gets cleaned up and cleared, and then slp_transfer_return dives back
in...)

> Is there a chance that you can create a mini-app that exhibits this
> behaviour?

Hm... not easy. Probably easier for me to find why it thinks the
slp_transfer_return is necessary in situ... (There is a rough sketch of
the embedding pattern at the bottom of this mail.)

> K
>
>> -----Original Message-----
>> From: stackless-boun...@stackless.com [mailto:stackless-
>> boun...@stackless.com] On Behalf Of Jeff Senn
>> Sent: 24 November 2009 14:39
>> To: stackless list
>> Subject: [Stackless] st.serial --> st.serial_last_jump patch
>>
>> Kristjan (or anyone else if you've looked at it)-
>>
>> RE: Your serial_last_jump patch...
>>
>> In my attempt to go to 2.6.4 I included your patch -- but it causes a
>> crash (that goes away when I revert it out). I haven't debugged much --
>> but what appears to be happening is that serial_last_jump != serial in
>> the main tasklet cleanup, causing an unnecessary slp_transfer_return...
>> (in my case there is only soft-switching and one thread that is known
>> to python).
>>
>> It does happen in a case that one would not normally find: in
>> particular, I have stackless embedded in an app that makes an instance
>> of the interpreter and "feeds" it work to do... occasionally, when it's
>> "done", the main tasklet exits back to the controlling program -- but
>> the interpreter can be called again in the future -- creating a new
>> main tasklet... etc...
>>
>> It is on one of these subsequent main tasklet exits that the
>> unnecessary slp_transfer_return causes a crash.
>>
>> I'll continue to debug at some rate, but if you have some idea what
>> might be wrong, please let me know...
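
For concreteness, here is a minimal sketch of the embedding pattern I
described -- not the real app (the work being fed in and the round count
are made up), just the shape of it: one long-lived process that enters
the interpreter repeatedly, with a main tasklet created and exiting on
each round:

    /* Sketch of the embedding pattern: one interpreter, fed work in
     * rounds; each round ends with the main tasklet exiting back to
     * the controlling program. */
    #include <Python.h>

    int main(void)
    {
        Py_Initialize();

        for (int round = 0; round < 3; round++) {
            /* Each call re-enters the interpreter: stackless creates
             * a new main tasklet, runs the scheduled work, and the
             * main tasklet exits back to us when the queue is empty. */
            PyRun_SimpleString(
                "import stackless\n"
                "def work(n):\n"
                "    for i in range(n):\n"
                "        stackless.schedule()\n"
                "stackless.tasklet(work)(10)\n"
                "stackless.run()\n");
        }

        Py_Finalize();
        return 0;
    }

Each trip around the loop ends with the main tasklet exiting back to the
controlling program; it is on one of the later exits that the crash
shows up.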