Re: Glibc recvmsg from kernel netlink socket hangs forever

2015-10-05 Thread Steven Schlansker

On Sep 25, 2015, at 7:58 PM, Guenter Roeck <li...@roeck-us.net> wrote:

> On 09/25/2015 02:37 PM, Steven Schlansker wrote:
>> 
>> 
>> Thank you for the patches to try, I'll build a kernel with them early next week
>> and report back.  It sounds like it may not match my problem exactly so we'll
>> see.
>> 
> 
> For 4.0.x, you _really_ need to update to 4.0.9 to get the following two patches.
> 
> cf8befcc1a55 netlink: Disable insertions/removals during rehash
> 18889a4315a5 netlink: Reset portid after netlink_insert failure

Hi Guenter,

Thank you very much for the information.  We upgraded to 4.0.9 and all
indications are that the issue is gone.  I will follow up if that is not the case.

Thank you everyone for your guidance.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Glibc recvmsg from kernel netlink socket hangs forever

2015-09-25 Thread Steven Schlansker

On Sep 25, 2015, at 2:37 PM, Steven Schlansker <stevenschlans...@gmail.com> wrote:

> 
> On Sep 24, 2015, at 10:34 PM, Guenter Roeck <li...@roeck-us.net> wrote:
> 
>> Herbert,
>> 
>> On 09/24/2015 09:58 PM, Herbert Xu wrote:
>>> On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>>>> 
>>>> http://comments.gmane.org/gmane.linux.network/363085
>>>> 
>>>> might explain your problem.
>>>> 
>>>> I thought this was resolved in 4.1, but it looks like the problem still persists
>>>> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
>>>> affected. I don't know if there have been any relevant changes in 4.2.
>>>> 
>>>> Copying Herbert and Eric for additional input.
>>> 
>>> There was a separate bug discovered by Tejun recently.  You need
>>> to apply the patches
>>> 
>>> https://patchwork.ozlabs.org/patch/519245/
>>> https://patchwork.ozlabs.org/patch/520824/
>>> 
>> I assume this is on top of mainline ?
>> 
>>> There is another follow-up but it shouldn't make any difference
>>> in practice.
>>> 
>> 
>> Any idea what may be needed for 4.1 ?
>> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
>> but I have no idea if that will help with the problem we are seeing there.
> 
> Thank you for the patches to try, I'll build a kernel with them early next week
> and report back.  It sounds like it may not match my problem exactly so we'll
> see.

Huh, when it rains, it pours... now I have a legit panic too!

[ 1675.228701] BUG: unable to handle kernel paging request at fe70
[ 1675.232058] IP: [] netlink_compare+0xa/0x30
[ 1675.232058] PGD 2015067 PUD 2017067 PMD 0 
[ 1675.232058] Oops:  [#1] SMP 
[ 1675.232058] Modules linked in: i2c_piix4(E) btrfs(E) crct10dif_pclmul(E) 
crc32_pclmul(E) ghash_clmulni_intel(E) aesni_intel(E) aes_x86_64(E) lrw(E) 
gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) floppy(E)
[ 1675.232058] CPU: 2 PID: 11152 Comm: pf_dump Tainted: G E 4.0.4 #1
[ 1675.232058] Hardware name: Xen HVM domU, BIOS 4.2.amazon 05/06/2015
[ 1675.232058] task: 880150fa6480 ti: 880150fb4000 task.ti: 
880150fb4000
[ 1675.232058] RIP: 0010:[]  [] 
netlink_compare+0xa/0x30
[ 1675.232058] RSP: 0018:880150fb7d10  EFLAGS: 00010246
[ 1675.232058] RAX:  RBX: 023e503b RCX: 0561f992
[ 1675.232058] RDX: fffc27e4 RSI: 880150fb7db8 RDI: fbb8
[ 1675.232058] RBP: 880150fb7d58 R08: 8805a82f5ab8 R09: 000c
[ 1675.232058] R10:  R11: 0202 R12: 
[ 1675.232058] R13: 8175dce0 R14: 88008b37e800 R15: 88076db4
[ 1675.232058] FS:  7feec2440700() GS:88078fc4() 
knlGS:
[ 1675.232058] CS:  0010 DS:  ES:  CR0: 80050033
[ 1675.232058] CR2: fe70 CR3: 00053bd17000 CR4: 001407e0
[ 1675.232058] Stack:
[ 1675.232058]  81434dae 88076d864400 880150fb7db8 
8801559ee8b8
[ 1675.232058]  88076db4 8805a82f5c48 88008b37e800 
88076d864400
[ 1675.232058]   880150fb7da8 81435476 
880150fb7db8
[ 1675.232058] Call Trace:
[ 1675.232058]  [] ? rhashtable_lookup_compare+0x5e/0xb0
[ 1675.232058]  [] rhashtable_lookup_compare_insert+0x66/0xc0
[ 1675.232058]  [] netlink_insert+0x83/0xe0
[ 1675.232058]  [] netlink_autobind.isra.34+0xad/0xd0
[ 1675.232058]  [] netlink_bind+0x1b1/0x240
[ 1675.232058]  [] SYSC_bind+0xb8/0xf0
[ 1675.232058]  [] ? __audit_syscall_entry+0xb4/0x110
[ 1675.232058]  [] ? do_audit_syscall_entry+0x6c/0x70
[ 1675.232058]  [] ? syscall_trace_enter_phase1+0x123/0x180
[ 1675.232058]  [] ? syscall_trace_leave+0xc6/0x120
[ 1675.232058]  [] ? fd_install+0x25/0x30
[ 1675.232058]  [] SyS_bind+0xe/0x10
[ 1675.232058]  [] system_call_fastpath+0x16/0x1b
[ 1675.232058] Code: 00 8b 77 08 39 77 14 8d 4e 01 41 0f 44 c9 41 39 c8 89 4f 
08 74 09 48 8b 08 83 3c 11 04 74 e2 5d c3 0f 1f 44 00 00 31 c0 8b 56 08 <39> 97 
b8 02 00 00 55 48 89 e5 74 0a 5d c3 0f 1f 84 00 00 00 00 
[ 1675.232058] RIP  [] netlink_compare+0xa/0x30
[ 1675.232058]  RSP 
[ 1675.232058] CR2: fe70
[ 1675.232058] ---[ end trace 963ff50a058120d0 ]---
[ 1675.232058] Kernel panic - not syncing: Fatal exception in interrupt
[ 1675.232058] Kernel Offset: 0x0 from 0x8100 (relocation range: 
0x8000-0x9fff)





Re: Glibc recvmsg from kernel netlink socket hangs forever

2015-09-25 Thread Steven Schlansker

On Sep 24, 2015, at 10:34 PM, Guenter Roeck <li...@roeck-us.net> wrote:

> Herbert,
> 
> On 09/24/2015 09:58 PM, Herbert Xu wrote:
>> On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>>> 
>>> http://comments.gmane.org/gmane.linux.network/363085
>>> 
>>> might explain your problem.
>>> 
>>> I thought this was resolved in 4.1, but it looks like the problem still persists
>>> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
>>> affected. I don't know if there have been any relevant changes in 4.2.
>>> 
>>> Copying Herbert and Eric for additional input.
>> 
>> There was a separate bug discovered by Tejun recently.  You need
>> to apply the patches
>> 
>> https://patchwork.ozlabs.org/patch/519245/
>> https://patchwork.ozlabs.org/patch/520824/
>> 
> I assume this is on top of mainline ?
> 
>> There is another follow-up but it shouldn't make any difference
>> in practice.
>> 
> 
> Any idea what may be needed for 4.1 ?
> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
> but I have no idea if that will help with the problem we are seeing there.

Thank you for the patches to try, I'll build a kernel with them early next week
and report back.  It sounds like it may not match my problem exactly so we'll
see.

In the meantime, I also observed the following oops:

[ 1709.620092] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 1709.624058] BUG: unable to handle kernel paging request at ea001dbef3c0
[ 1709.624058] IP: [] 0xea001dbef3c0
[ 1709.624058] PGD 78f7dc067 PUD 78f7db067 PMD 80078ec001e3 
[ 1709.624058] Oops: 0011 [#1] SMP 
[ 1709.624058] Modules linked in: i2c_piix4(E) btrfs(E) crct10dif_pclmul(E) 
crc32_pclmul(E) ghash_clmulni_intel(E) aesni_intel(E) aes_x86_64(E) lrw(E) 
gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) floppy(E)
[ 1709.624058] CPU: 4 PID: 19714 Comm: pf_dump Tainted: G E 4.0.4 #1
[ 1709.624058] Hardware name: Xen HVM domU, BIOS 4.2.amazon 05/06/2015
[ 1709.624058] task: 880605a18000 ti: 8805f9358000 task.ti: 
8805f9358000
[ 1709.624058] RIP: 0010:[]  [] 
0xea001dbef3c0
[ 1709.624058] RSP: 0018:8805f935bbc0  EFLAGS: 00010246
[ 1709.624058] RAX: ea001dbef3c0 RBX: 0007 RCX: 
[ 1709.624058] RDX: 2100 RSI: 8805f992f308 RDI: 8806622f6b00
[ 1709.624058] RBP: 8805f935bc08 R08: 1ec0 R09: 2100
[ 1709.624058] R10:  R11: 880771003200 R12: 8806622f6b00
[ 1709.624058] R13: 0002 R14: 8239e238 R15: 8805f992f308
[ 1709.624058] FS:  7f0735f29700() GS:88078fc8() 
knlGS:
[ 1709.624058] CS:  0010 DS:  ES:  CR0: 80050033
[ 1709.624058] CR2: ea001dbef3c0 CR3: 0005f7e88000 CR4: 001407e0
[ 1709.624058] Stack:
[ 1709.624058]  81735ca2  8805f992f348 
88076b491400
[ 1709.624058]  8805f992f000 8806622f6b00 0ec0 
8805f992f308
[ 1709.624058]  88065ffb 8805f935bc38 8176028a 
8805f992f000
[ 1709.624058] Call Trace:
[ 1709.624058]  [] ? rtnl_dump_all+0x122/0x1a0
[ 1709.624058]  [] netlink_dump+0x11a/0x2d0
[ 1709.624058]  [] netlink_recvmsg+0x1e5/0x360
[ 1709.624058]  [] ? kmem_cache_free+0x1b9/0x1d0
[ 1709.624058]  [] sock_recvmsg+0x6f/0xa0
[ 1709.624058]  [] ___sys_recvmsg+0xe4/0x200
[ 1709.624058]  [] ? __fget_light+0x25/0x70
[ 1709.624058]  [] __sys_recvmsg+0x42/0x80
[ 1709.624058]  [] ? int_check_syscall_exit_work+0x34/0x3d
[ 1709.624058]  [] SyS_recvmsg+0x12/0x20
[ 1709.624058]  [] system_call_fastpath+0x16/0x1b
[ 1709.624058] Code: 00 00 00 ff ff ff ff 01 00 00 00 00 01 10 00 00 00 ad de 
00 02 20 00 00 00 ad de 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 
00 00 00 ff ff 02 00 00 00 00 00 00 00 00 00 00 00 00 00 
[ 1709.798299] RIP  [] 0xea001dbef3c0
[ 1709.798299]  RSP 
[ 1709.798299] CR2: ea001dbef3c0
[ 1709.798299] ---[ end trace 2e069ceceed3d61a ]---

It's so far only been noticed once.  I don't know if it is the same issue; it
certainly doesn't always happen when this problem occurs, but it looks curious
all the same...



Glibc recvmsg from kernel netlink socket hangs forever

2015-09-24 Thread Steven Schlansker
Hello linux-kernel,

I write to you on behalf of many developers at my company, who
are having trouble with their applications endlessly locking up
inside of libc code, with no hope of recovery.

Currently it affects our Mono and Node processes mostly, and the
symptoms are the same:  user code invokes getaddrinfo, and libc
attempts to determine whether ipv4 or ipv6 is appropriate, by using
the RTM_GETADDR netlink message.  The write into the netlink socket
succeeds, and it immediately reads back the results ... and waits
forever.  The read never returns.  The stack looks like this:

#0  0x7fd7d8d214ad in recvmsg () at ../sysdeps/unix/syscall-template.S:81
#1  0x7fd7d8d3e44d in make_request (fd=fd@entry=13, pid=1) at 
../sysdeps/unix/sysv/linux/check_pf.c:177
#2  0x7fd7d8d3e9a4 in __check_pf (seen_ipv4=seen_ipv4@entry=0x7fd7d37fdd00, 
seen_ipv6=seen_ipv6@entry=0x7fd7d37fdd10, 
in6ai=in6ai@entry=0x7fd7d37fdd40, in6ailen=in6ailen@entry=0x7fd7d37fdd50) 
at ../sysdeps/unix/sysv/linux/check_pf.c:341
#3  0x7fd7d8cf64e1 in __GI_getaddrinfo (name=0x31216e0 
"mesos-slave4-prod-uswest2.otsql.opentable.com", service=0x0, 
hints=0x31216b0, pai=0x31f09e8) at ../sysdeps/posix/getaddrinfo.c:2355
#4  0x00e101c8 in uv__getaddrinfo_work (w=0x31f09a0) at 
../deps/uv/src/unix/getaddrinfo.c:102
#5  0x00e09179 in worker (arg=) at 
../deps/uv/src/threadpool.c:91
#6  0x00e16eb1 in uv__thread_start (arg=) at 
../deps/uv/src/unix/thread.c:49
#7  0x7fd7d8ff3182 in start_thread (arg=0x7fd7d37fe700) at 
pthread_create.c:312
#8  0x7fd7d8d2047d in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(libuv is part of Node and makes DNS lookups "asynchronous" by running them on
a thread pool in the background)

The applications will run for hours or days successfully, until eventually
hanging with no apparent pattern or cause.  And once this hang happens it hangs
badly, because check_pf is holding a lock during the problematic recvmsg call.

I raised this issue on the libc-help mailing list, but I'm hoping that lkml will
have a higher number of people familiar with netlink who may better offer advice.
The original thread is here:
https://sourceware.org/ml/libc-help/2015-09/msg00014.html

Looking at the getaddrinfo / check_pf source code:
https://fossies.org/dox/glibc-2.22/sysdeps_2unix_2sysv_2linux_2check__pf_8c_source.html

146   if (TEMP_FAILURE_RETRY (__sendto (fd, (void *) &req, sizeof (req), 0,
147                                     (struct sockaddr *) &nladdr,
148                                     sizeof (nladdr))) < 0)
149     goto out_fail;
150
151   bool done = false;
152
153   bool seen_ipv4 = false;
154   bool seen_ipv6 = false;
155
156   do
157     {
158       struct msghdr msg =
159         {
160           (void *) &nladdr, sizeof (nladdr),
161           &iov, 1,
162           NULL, 0,
163           0
164         };
165
166       ssize_t read_len = TEMP_FAILURE_RETRY (__recvmsg (fd, &msg, 0));
167       if (read_len <= 0)
168         goto out_fail;
169
170       if (msg.msg_flags & MSG_TRUNC)
171         goto out_fail;
172

I notice that if messages are dropped on either the send or the receive side,
this code may hang forever.  The netlink(7) man page makes me slightly worried:

> Netlink is not a reliable protocol.  It tries its best to deliver a message
> to its destination(s), but may drop messages when an out-of-memory condition
> or other error occurs.
> However, reliable transmissions from kernel to user are impossible in any
> case.  The kernel can't send a netlink message if the socket buffer is full:
> the message will be dropped and the kernel and the user-space process will no
> longer have the same view of kernel state.  It is up to the application to
> detect when this happens (via the ENOBUFS error returned by recvmsg(2)) and
> resynchronize.


I have taken the glibc code and created a simple(r) program to attempt to
reproduce this issue.  I inserted some simple polling between the sendto and
recvmsg calls to make the failure case more evident:

  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLIN;
  pfd.revents = 0;

  int pollresult = poll(&pfd, 1, 1000);
  if (pollresult < 0) {
    perror("glibc: check_pf: poll");
    abort();
  } else if (pollresult == 0 || (pfd.revents & POLLIN) == 0) {
    fprintf(stderr, "[%ld] glibc: check_pf: netlink socket read timeout\n",
            gettid());
    abort();
  }

I have placed the full source code and strace output here:
https://gist.github.com/stevenschlansker/6ad46c5ccb22bc4f3473

The process quickly spawns hundreds of threads that sit in a loop attempting
this RTM_GETADDR message exchange.

The code may be compiled as "gcc -o pf_dump -pthread pf_dump.c"

An example invocation that quickly fails:

root@24bf2e440b5e:/# strace -ff -o pfd ./pf_dump 
[3700] exit success
glibc: check_pf: netlink socket read timeout
Aborted (core dumped)

Interestingly, this seems to be very easy to reproduce using pthreads, but much
less common with fork() or clone()d threads.  I'm 


Multiple copy-on-write branches of an anonymous memory segment

2013-11-05 Thread Steven Schlansker
Hi,

I am developing a data structure that resides in a large memory buffer.  It is
important that I be able to mutate the data structure without interfering with
concurrent reads, and then swap it in at a convenient time.
swap it in at a convenient time.

There is a set of initial data stored in a file.  I use mmap(2) to map the file 
into memory.  At some later point, I would like to run incremental updates.  
This involves copying the data structure, mutating it, then publishing it.  
Only a very small percentage of the data structure is changed, but it can 
change at random.

Currently, I accomplish this by mapping a new anonymous segment, copying the 
data, and modifying it.  However this is wasteful both in time (copying all 
that data takes a fair amount of time) and memory (most of the data did not 
actually change, but now we have two copies).

I would like to utilize kernel support for copy-on-write to eliminate both of 
these bottlenecks.  Ideally, I would be able to:

1) create a new COW mapping of an existing anonymous mapping that shares pages 
initially and I can write to without affecting the original
2) At the time of creating that new mapping, extend the buffer with 
initially-zero pages (in case I need more space to append to the end of the 
data structure in the new version)

The mmap and mremap documentation get me tantalizingly close — I can resize the 
segment I already have, I can allocate file-backed copy on write segments.  
Also, forking while holding a private mapping seems to provide other bits of 
the functionality (COW of anonymous segments).  But I can’t seem to figure out 
how to fit the puzzle pieces together.  Also, I’m running in the JVM, so 
forking is not a possibility.

There are a couple of unanswered StackOverflow questions in a similar vein, so I
hope I'm not just missing something obvious.  The most relevant is:

http://stackoverflow.com/questions/16965505/allocating-copy-on-write-memory-within-a-process

And it looks like this dance is even possible on Mach, with some clever use of
'vm_remap'.


I’m hoping that I’m just missing the magic incantation to pass to mmap to 
achieve this behavior.  I wasn’t brave enough to subscribe to LKML just yet, so 
please CC me on replies!

TIA,
Steven Schlansker


