On Mon, Jul 11, 2011 at 7:39 PM, Anselm Kruis <[email protected]> wrote:
> If I understand the commit comment correctly, there were crashes on OS-X.
> But what about Linux? I'm not aware of problems on Linux amd64. Can we make
> the longer REGS_TO_SAVE list a conditional define? Something like:
>
> #ifdef SLP_SAVE_FRAME_POINTER
> +#define REGS_TO_SAVE "rdx", "rbx", "r12", "r13", "r14", "r15", "r9", "r8", "rdi", "rsi", "rcx", "rbp"
> #else
> +#define REGS_TO_SAVE "rdx", "rbx", "r12", "r13", "r14", "r15", "r9", "r8", "rdi", "rsi", "rcx"
> #endif
Hmm. Why do you need to save so many registers, especially when all of them except rbp, rbx and r12-r15 are not expected to be preserved across calls anyway? On the System V AMD64 ABI those are the only callee-saved registers; the compiler already treats the rest as clobbered by any call (first sketch in the P.S. below).

While I haven't been working with stackless, I have recently been working a lot on greenlet, where I fixed a lot of weird crashes, and here's what I found:

1. Crashes can happen if global variables used at "double return" points are not marked volatile. This is because the gcc optimizer might cache the global's value in a register, so the next time the same code path is entered after the switch, it uses the cached value rather than the updated one. In greenlet's case this affected ts_current and ts_target; in the case of stackless, at least _cst might need to be marked volatile (second sketch in the P.S.). If anyone is interested, you can read my discussion here:
https://bitbucket.org/ambroff/greenlet/issue/19/fix-stability-issues

2. Crashes can happen if there is any C-stack switch during garbage collection. The reason is that Python creates its gc lists on the stack, and if C-stack switches happen during tp_clear (as in, to kill a live greenlet/tasklet), those stack variables might get clobbered. If deallocations run during the switch and the list head is updated, it leads to random stack corruption (third sketch in the P.S.). These conditions are extremely rare, but they can happen and can lead to weird crashes. If anyone is interested, you can read how I found this issue here:
https://bitbucket.org/ambroff/greenlet/issue/24/debian-64-errors-with-pydebug

I don't know enough about stackless internals, so maybe these issues are already addressed or don't apply at all. But if they do apply, you might want to fix them in Stackless as well.

Best regards,
Alexey.
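P.S. Three sketches to make the above concrete. All of them are illustrations I wrote for this mail with made-up names; none of it is the actual stackless or greenlet source.

First, the registers. If only the callee-saved registers are named as clobbers of the switching asm, gcc itself spills and reloads them around the switch. I leave rbp out of the list because gcc can refuse an rbp clobber while rbp is in use as the frame pointer, which is exactly the SLP_SAVE_FRAME_POINTER complication:

    /* Hypothetical sketch, not the stackless macro: clobber only the
     * callee-saved registers of the System V AMD64 ABI.  gcc already
     * assumes the caller-saved ones (rdi, rsi, rdx, rcx, r8, r9, ...)
     * are destroyed by any call, so they need no saving here. */
    #define REGS_TO_SAVE "rbx", "r12", "r13", "r14", "r15"

    static void mark_switch_point(void)
    {
        /* the empty asm forces gcc to spill/reload the clobbered set */
        __asm__ volatile ("" : : : REGS_TO_SAVE);
    }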
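Second, the volatile issue from point 1. Here g_target, SLP_SWITCH and schedule are hypothetical names standing in for things like ts_target/_cst and the low-level switch; the point is that the switch is inline asm without a "memory" clobber, so gcc is allowed to keep non-volatile globals cached in registers across it:

    struct tasklet;                          /* opaque for the sketch */
    extern void schedule(struct tasklet *);  /* hypothetical scheduler entry */

    /* the POINTER itself is volatile, so every access re-reads memory */
    static struct tasklet *volatile g_target;

    /* hypothetical switch: asm with no "memory" clobber, so gcc may
     * keep non-volatile globals cached in registers across it */
    #define SLP_SWITCH() __asm__ volatile ("" ::: "rbx", "r12", "r13", "r14", "r15")

    void resume_pending(void)
    {
        if (!g_target)
            return;
        SLP_SWITCH();        /* "double return" point: by the time control
                                comes back here, another context may have
                                stored a new g_target                       */
        schedule(g_target);  /* without volatile, gcc could legally reuse
                                the value it loaded for the check above    */
    }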
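Third, point 2. A tiny standalone program that mimics what a "hard" switch does to a list head living on the C stack: the stack slice is copied aside and later copied back, so any unlink that happened in between is silently undone. This is my illustration of the mechanism, not CPython's gcmodule code:

    #include <stdio.h>
    #include <string.h>

    struct node { struct node *prev, *next; };

    static void unlink_node(struct node *n)
    {
        n->prev->next = n->next;   /* writes through pointers that may   */
        n->next->prev = n->prev;   /* point into a saved-away stack frame */
    }

    int main(void)
    {
        struct node head, elem;
        char saved[sizeof head];

        head.prev = head.next = &elem;     /* list head in a stack frame  */
        elem.prev = elem.next = &head;

        memcpy(saved, &head, sizeof head); /* "switch away": the slice of
                                              stack holding head is saved */
        unlink_node(&elem);                /* meanwhile another context
                                              deallocates elem and fixes
                                              up the list...              */
        memcpy(&head, saved, sizeof head); /* "switch back": the old head
                                              is restored, undoing the
                                              unlink                      */

        printf("head.next = %p, &elem = %p\n",
               (void *)head.next, (void *)&elem);
        /* head.next points at elem again although it was "freed" -- the
           same way Python's on-stack gc list heads get corrupted when a
           tasklet is killed during tp_clear. */
        return 0;
    }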
