On 05/03/09 13:40, Garrett D'Amore wrote:
> I suspect that the new crossbow architecture might impose additional 
> resource constraints on VLANs -- it makes sense that it would create 
> additional rings and worker threads for each VLAN.  Anyone from 
> crossbow-discuss want to comment?

We hope to have support for hardware rings assigned to VLANs.

Still, beyond the finite number of rings available on the NIC, dladm 
create-vnic -v and dladm create-vlan need to start failing gracefully 
well before reaching a few thousand kernel threads.

CR# R6776630 (vnics/vlans should limit creating more worker threads 
after a certain threshold) is tracking this misbehavior.
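
The eventual fix presumably needs a simple gate in the worker-thread
creation path. Below is a minimal sketch of the idea in C; all names
(mac_srs_worker_reserve, mac_srs_thread_count, mac_srs_thread_max) are
made up for illustration and are not the actual fix for the CR:

    /*
     * Hypothetical guard: cap the number of MAC worker threads and let
     * link creation fail gracefully once the cap is hit, instead of
     * spawning threads without bound for every new vlan/vnic.
     */
    #include <sys/types.h>
    #include <sys/atomic.h>
    #include <sys/errno.h>

    static volatile uint_t mac_srs_thread_count;  /* threads created so far */
    static uint_t mac_srs_thread_max = 4096;      /* illustrative cap */

    static int
    mac_srs_worker_reserve(void)
    {
            /* atomically claim a slot; back off if over the cap */
            if (atomic_inc_uint_nv(&mac_srs_thread_count) >
                mac_srs_thread_max) {
                    atomic_dec_uint(&mac_srs_thread_count);
                    /* bubbles up so dladm create-vnic/create-vlan fails */
                    return (ENOSPC);
            }
            return (0);
    }

A caller in the SRS setup path would check the return value and unwind
the partial link creation, rather than blocking on memory later.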

    Kais.

>
>    - Garrett
>
> Saurabh Misra wrote:
>>
>> I'm fairly sure that there is no leak. Anyway, I managed to get a 
>> crash dump (I had to reinstall the OS with a larger dump device).
>>
>> ::findleaks output:
>>
>> findleaks: thread ffffff014c477740's stack swapped out; false 
>> positives possible
>> findleaks: thread ffffff014c4768e0's stack swapped out; false 
>> positives possible
>> findleaks: thread ffffff014c47e1a0's stack swapped out; false 
>> positives possible
>> CACHE             LEAKED           BUFCTL CALLER
>> ffffff014622b2e0       1 ffffff01506a8510 impl_acc_hdl_alloc+0x34
>> ffffff0146226b20       1 ffffff0150fc6298 impl_acc_hdl_alloc+0x4a
>> ffffff014622a020       1 ffffff014eff4e58 impl_acc_hdl_alloc+0x64
>> ffffff014ed55020       1 ffffff01510db9c8 rootnex_coredma_allochdl+0x5c
>> ffffff014622ab20       1 ffffff015098be70 uhci_polled_create_tw+0x2a
>> ------------------------------------------------------------------------
>>           Total       5 buffers, 3232 bytes
>> bash-3.2#
>>
>> ...which does not reveal anything. Another ::findleaks output, taken 
>> when freemem was 0x1:
>>
>> CACHE             LEAKED           BUFCTL CALLER
>> ffffff01462342e0       1 ffffff01677fb1e8 allocb+0x64
>> ffffff014eaeeb20       1 ffffff016a595658 cralloc_flags+0x21
>> ffffff0146232b20       1 ffffff01697da048 dblk_constructor+0x3b
>> ffffff014622b2e0       1 ffffff01506fe660 impl_acc_hdl_alloc+0x34
>> ffffff0146226b20       1 ffffff0150a73050 impl_acc_hdl_alloc+0x4a
>> ffffff014622a020       1 ffffff014f02ba48 impl_acc_hdl_alloc+0x64
>> ffffff014ed87020       1 ffffff01510b1aa8 rootnex_coredma_allochdl+0x5c
>> ffffff014622ab20       1 ffffff0150a2d008 uhci_polled_create_tw+0x2a
>> ------------------------------------------------------------------------
>>           Total       8 buffers, 3676 bytes
>>
>> bash-3.2# grep "THREAD:" t.1 | wc -l
>>   24666         // Total number of threads
>>
>> bash-3.2# grep "THREAD: mac_srs_worker()" t.1 | wc -l
>>    8193      // mac_srs_worker threads
>> bash-3.2#
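>>
>> (For reference: t.1 was presumably generated with something like 
>> ::threadlist -v in mdb against the crash dump, which prints the 
>> PC/THREAD lines and stack traces excerpted below.)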
>>
>>  PC: _resume_from_idle+0xf1    THREAD: mac_srs_worker()
>>  stack pointer for thread ffffff0004bf0c60: ffffff0004bf0b80
>>  [ ffffff0004bf0b80 _resume_from_idle+0xf1() ]
>>    swtch+0x147()
>>    cv_wait+0x61()
>>    mac_srs_worker+0x1cb()
>>    thread_start+8()
>>
>>
>> bash-3.2# grep "THREAD: mac_soft_ring_worker()" t.1 | wc -l
>>   12291      // Total number of mac_soft_ring_worker() threads.
>>
>>  PC: _resume_from_idle+0xf1    THREAD: mac_soft_ring_worker()
>>  stack pointer for thread ffffff0005e47c60: ffffff0005e47b80
>>  [ ffffff0005e47b80 _resume_from_idle+0xf1() ]
>>    swtch+0x147()
>>    cv_wait+0x61()
>>    mac_soft_ring_worker+0xb0()
>>    thread_start+8()
>>
>> I don't understand why the MAC layer has created so many threads. The 
>> VLAN test did create 4094 VLANs.
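>>
>> (A back-of-the-envelope guess: 2 SRS worker threads per VLAN gives 
>> 2 x 4094 = 8188, close to the 8193 seen, and 3 soft rings per VLAN 
>> gives 3 x 4094 = 12282, close to the 12291 seen; the small remainder 
>> presumably belongs to the physical links themselves. So the thread 
>> count scales with the number of VLANs rather than leaking.)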
>>
>> From the ::kmausers output, which reports the current medium and large 
>> users of the kmem allocator:
>>
>> bash-3.2# more kma.1
>> 83111936 bytes for 20291 allocations with data size 4096:
>>         kmem_slab_alloc_impl+0x116
>>         kmem_slab_alloc+0xa1
>>         kmem_cache_alloc+0x130
>>         vmem_alloc+0x1bc
>>         segkmem_xalloc+0x94
>>         segkmem_alloc_vn+0xcd
>>         segkmem_alloc+0x24
>>         vmem_xalloc+0x547
>>         vmem_alloc+0x161
>>         kmem_slab_create+0x81
>>         kmem_slab_alloc+0x5b
>>         kmem_cache_alloc+0x130
>>         allocb+0x64
>>         udp_input+0xeee
>>         ip_fanout_udp_conn+0x2b2
>> 33538048 bytes for 4094 allocations with data size 8192:
>>         kmem_slab_alloc_impl+0x116
>>         kmem_slab_alloc+0xa1
>>         kmem_cache_alloc+0x130
>>         kmem_zalloc+0x6a
>>         mac_flow_tab_create+0x65
>>         mac_flow_l2tab_create+0x31
>>         mac_register+0x56f
>>         vnic_dev_create+0x42b
>>         vnic_ioc_create+0x157
>>         drv_ioctl+0x137
>>         cdev_ioctl+0x45
>>         spec_ioctl+0x83
>>         fop_ioctl+0x7b
>>         ioctl+0x18e
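>>         // note: 4094 allocations matches the 4094 VLANs created --
>>         // presumably one flow table per vnic, i.e. expected per-link
>>         // state rather than a leak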
>> 28307456 bytes for 6911 allocations with data size 4096:
>>         kmem_slab_alloc_impl+0x116
>>         kmem_slab_alloc+0xa1
>>         kmem_cache_alloc+0x130
>>         vmem_alloc+0x1bc
>>         segkmem_xalloc+0x94
>>         segkmem_alloc_vn+0xcd
>>         segkmem_alloc+0x24
>>         vmem_xalloc+0x547
>>         vmem_alloc+0x161
>>         kmem_slab_create+0x81
>>         kmem_slab_alloc+0x5b
>>         kmem_cache_alloc+0x130
>>         dblk_constructor+0x3b
>>         kmem_cache_alloc_debug+0x249
>>         kmem_cache_alloc+0x164
>> 27232000 bytes for 106375 allocations with data size 256:
>>         kmem_cache_alloc_debug+0x283
>>         kmem_cache_alloc+0x164
>>         allocb+0x64
>>         udp_input+0xeee
>>         ip_fanout_udp_conn+0x2b2
>>         ip_fanout_udp+0xc72
>>         ip_wput_local+0x6ce
>>         ip_multicast_loopback+0x2cb
>>         udp_xmit+0x4a9
>>         udp_send_data+0x3b3
>>         udp_output_v4+0x9c6
>>         udp_send_not_connected+0xeb
>>         udp_send+0x246
>>         so_sendmsg+0x1c7
>>         socket_sendmsg+0x61
>> 27172864 bytes for 6634 allocations with data size 4096:
>>         kmem_slab_alloc_impl+0x116
>>         kmem_slab_alloc+0xa1
>>         kmem_cache_alloc+0x130
>>         vmem_alloc+0x1bc
>>         segkmem_xalloc+0x94
>>         segkmem_alloc_vn+0xcd
>>         segkmem_alloc+0x24
>>         vmem_xalloc+0x547
>>         vmem_alloc+0x161
>>         kmem_slab_create+0x81
>>         kmem_slab_alloc+0x5b
>>         kmem_cache_alloc+0x130
>>         allocb+0x64
>>         allocb_tmpl+0x24
>>         copyb+0x77
>> [.]
>> 324608 bytes for 128 allocations with data size 2536:    // bfe's 128 
>> buffer allocations
>>         kmem_slab_alloc_impl+0x116
>>         kmem_slab_alloc+0xa1
>>         kmem_cache_alloc+0x130
>>         rootnex_coredma_allochdl+0x84
>>         rootnex_dma_allochdl+0x7d
>>         ddi_dma_allochdl+0x35
>>         ddi_dma_alloc_handle+0xb8
>>         bfe_ring_buf_alloc+0x4b
>>         bfe_ring_desc_alloc+0x149
>>         bfe_rings_alloc+0xa6
>>         bfe_attach+0x2d0
>>         devi_attach+0x80
>>         attach_node+0x95
>>         i_ndi_config_node+0xa5
>>         i_ddi_attachchild+0x40
>> [.]
>>
>> Failures were seen on the following caches:
>>
>> > ::kmastat
>> cache                        buf    buf    buf    memory     alloc alloc
>> name                        size in use  total    in use   succeed  fail
>> ------------------------- ------ ------ ------ ---------- --------- -----
>> streams_mblk                  64 683051 683130  66621440B  39257487   993
>> streams_dblk_16              128   1469   1533    299008B   1324225     0
>> streams_dblk_80              192  61535  61568  15761408B   7558613    75
>> streams_dblk_144             256 279841 279852  95522816B   4124673  1594
>> streams_dblk_208             320    592    600    245760B    279677     0
>> streams_dblk_272             384   1032   1044    475136B   2666792    71
>>
>> And all threads were stuck in the page throttle because memory was exhausted.
>>
>> Is there a way to control the number of VLANs created in NICDRV?
>>
>> I'm thinking of running NICDRV on e1000g; that should tell whether 
>> it's the driver's fault or something in the MAC layer.
>>
>> It looks like ::findleaks does not do a good job of finding leaks 
>> related to allocb or mblk.
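>>
>> (That is probably expected: ::findleaks reports only buffers that are 
>> no longer referenced from anywhere, and an mblk sitting on a STREAMS 
>> queue or held by a driver is still reachable, so it shows up as growth 
>> in ::kmastat/::kmausers rather than as a leak.)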
>>
>> cheers,
>>
>
> _______________________________________________
> crossbow-discuss mailing list
> crossbow-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/crossbow-discuss
>

