I thought stacks were separated by redzone pages, but I'd have to double 
check.
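
Before the reboot, one quick sanity check you could do from the dump (just a
sketch, reusing the kthread pointer and faulting rsp from your output below)
is to bracket the stack and see where that rsp actually lands:

  > ffffff01d44eb120::print struct _kthread t_stkbase t_stk
  > ffffff00083ec5f8::whatis
  > ffffff01d44eb120::findstack -v

::whatis should tell you whether that address still looks like part of this
thread's stack, and ::findstack will sometimes dig frames out when $C comes
up empty.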

I recommend rebooting with kmem_flags = 0xf in your /etc/system.
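
If it helps, the /etc/system line would just be (0xf should give you the
audit, deadbeef, redzone and contents flags, if I remember the bits right):

  set kmem_flags = 0xf

and after the reboot you can confirm it took effect with something like:

  # quick check from a root shell; kmem_flags is the live kernel variable
  echo "kmem_flags/X" | mdb -k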

    -- Garrett


Andrew Gallatin wrote:
> I just got a very strange panic when running a torture
> test on my GLDv3 driver:
>
> in.rshd:
> #pf Page fault
> Bad kernel fault at addr=0x0
> pid=16617, pc=0xfffffffff84f87a7, sp=0xffffff00083ec5f8, eflags=0x10246
> cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
> cr2: 0
> cr3: 3493000
> cr8: c
>
>          rdi: ffffff01d72a1a10 rsi: ffffff00083ec600 rdx: ffffff01d44eb120
>          rcx:                3  r8: ffffff01cfde8500  r9:              85b
>          rax:                0 rbx: ffffff01cecc9c60 rbp:                0
>          r10:        300004c57 r11: ffffff01e6d9c000 r12: ffffff01cecc9c60
>          r13: ffffff01d72a1a10 r14: ffffff01d2ecc080 r15: ffffff01d0935bb8
>          fsb:                0 gsb: ffffff01ceaa6ac0  ds:               4b
>           es:               4b  fs:                0  gs:              1c3
>          trp:                e err:                0 rip: fffffffff84f87a7
>           cs:               30 rfl:            10246 rsp: ffffff00083ec5f8
>           ss:               38
>
> ffffff00083ec3e0 unix:die+c8 ()
> ffffff00083ec4f0 unix:trap+13b9 ()
> ffffff00083ec500 unix:cmntrap+e9 ()
>
>
> According to mdb, there is no stack:
>
>  > $C
>
> I'm assuming the stack got corrupted somehow, but the current thread
> seems well within its stack:
>
>  > ffffff01d44eb120::print struct _kthread t_stkbase
> t_stkbase = 0xffffff00083e8000
>
>
> Is it possible some other stack smashed into this thread's stack, and
> trashed it?  Will Solaris panic if a thread exceeds its kernel stack
> space, or will it just corrupt whatever is below it?  How do I debug
> something like this?
>
> Thanks,
>
> Drew

_______________________________________________
networking-discuss mailing list
[email protected]
