If it helps, I am also experiencing crashes in malloc. They did not occur in
0.9.28. Here is a sample backtrace:

Program terminated with signal 11, Segmentation fault.
[New process 1788]
[New process 1752]
[New process 1784]
[New process 1783]
[New process 1789]
[New process 1791]
[New process 1787]
[New process 1785]
[New process 1786]
#0  0x3782e44a in __malloc_from_heap (size=506916, heap=0x37870bd8,
heap_lock=0x37874e3c) at libc/stdlib/malloc/malloc.c:184
184          mem = MALLOC_SETUP (mem, size);
(gdb) bt
#0  0x3782e44a in __malloc_from_heap (size=506916, heap=0x37870bd8,
heap_lock=0x37874e3c) at libc/stdlib/malloc/malloc.c:184
#1  0x3782e53c in malloc (size=506912) at libc/stdlib/malloc/malloc.c:223
#2  0x3782eb10 in memalign (alignment=16, size=506880) at
libc/stdlib/malloc/memalign.c:48
#3  0x08083fd1 in __vout_AllocatePicture (p_this=0x8d74f6c, p_pic=0x9a1a048,
i_chroma=808596553, i_width=704, i_height=480, i_aspect=576000) at
video_output/vout_pictures.c:515
#4  0x373bdacf in video_new_buffer (p_this=0x8d74f6c, pp_ring=0x8893868,
p_sys=0x8885680) at transcode.c:2499
#5  0x08144ed4 in DecodeBlock (p_dec=0x8d74f6c, pp_block=0x3efff89c) at
libmpeg2.c:604
#6  0x373c026d in Send (p_stream=0x887f6c8, id=0x8892bf8,
p_buffer=0xcf0bec0) at transcode.c:2030
#7  0x373e00a6 in Send (p_stream=0x887e32c, id=0x8892bd0,
p_buffer=0xcf0bec0) at duplicate.c:277
#8  0x0809182c in sout_InputSendBuffer (p_input=0x8892bc0,
p_buffer=0xcf0bec0) at stream_output/stream_output.c:279
#9  0x080d31e8 in DecoderDecode (p_dec=0x88a91bc, p_block=0xd232f28) at
input/decoder.c:579
#10 0x080d70ac in EsOutSend (out=0x887b21c, es=0x88a6e08, p_block=0xd232f28)
at input/es_out.c:1107
#11 0x080ffed9 in ParsePES (p_demux=0x8890c64, pid=0x999cf10) at
../../include/vlc_es_out.h:109
#12 0x08101a58 in Demux (p_demux=0x8890c64) at ts.c:1927
#13 0x08073a6f in MainLoop (p_input=0x8882170) at input/input.c:538
#14 0x08074b17 in Run (p_input=0x8882170) at input/input.c:444
#15 0x378b3136 in pthread_start_thread (arg=0x3efffea0) at
libpthread/linuxthreads.old/manager.c:309
#16 0x377c2c12 in clone () at libc/sysdeps/linux/i386/clone.S:106

The crash happens in different parts of the application, but it is always a
malloc crash.


Regards,

-- 
Sergio M. Ammirata, Ph.D.



On 11/24/09 2:16 AM, "Freeman Wang" <[email protected]> wrote:

> Mike
> 
> An example may be better to show what we thought.
> 
> Say, we already have a mmb list of three blocks M1 --> M2 --> M3. Now a
> heap request comes in for an 8k buffer. The heap is extended using mmap
> and we iterate through the list and find the new block descriptor,
> new_mmb, should be added after M3. *** Now we try to allocate new_mmb
> from the mmb_heap and find mmb_heap needs to be extended too *** So a
> new mmap syscall is made to extend the mmb_heap, and again the new block
> needs a descriptor, also from the mmb_heap. Again we iterate through the
> existing list and find this new_mmb_2 should be added after M3 too!
> We then try to allocate new_mmb_2, and it should succeed because mmap
> usually gives us at least a page and it has been added to the mmb_heap.
> When the allocation of the first new_mmb returns, the list has already
> been updated to M1 --> M2 --> M3 --> M4_2, but we do not know that, so
> M4_1 will still be added after M3.
> 
> That's just one of the possible ways the current code could mess up.
> Depending on where the two blocks are located, things could go wild and
> the linked list could be totally destroyed. However, if we make the allocation
> before going through the mmb list, we will be able to make sure the
> new_mmb structure is added properly.
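A rough sketch of the reordering Freeman describes (the struct layout and the names `mmb_list` and `add_block` are illustrative, not the real uClibc definitions):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical descriptor for one mmap'd block, modeled on the thread's
 * description of the mmb list. */
struct mmb {
    void *mem;
    struct mmb *next;
};

static struct mmb *mmb_list;  /* head of the singly linked descriptor list */

/* Allocate the descriptor BEFORE walking the list. If this malloc() has
 * to extend the descriptor heap (and, in the buggy ordering, appends an
 * entry to mmb_list behind our back), the walk below still sees the
 * up-to-date list, so the insertion point cannot go stale. */
static int add_block(void *mem)
{
    struct mmb *new_mmb = malloc(sizeof *new_mmb);
    if (!new_mmb)
        return -1;            /* see point 3 later in the thread */
    new_mmb->mem = mem;
    new_mmb->next = NULL;

    struct mmb **pp = &mmb_list;
    while (*pp)               /* walk to the tail only after allocating */
        pp = &(*pp)->next;
    *pp = new_mmb;
    return 0;
}
```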
> 
> Freeman
> 
> -----Original Message-----
> From: Mike Frysinger [mailto:[email protected]]
> Sent: Monday, November 23, 2009 7:42 PM
> To: [email protected]
> Cc: Freeman Wang
> Subject: Re: bugs in malloc
> 
> On Monday 23 November 2009 14:55:26 Freeman Wang wrote:
>> 1. We found that with certain applications, the application would get
>> stuck at line 162 of malloc.c, and the reason was that mem->next points
>> back to itself.
> 
> please try to reduce the allocation patterns of your 'special'
> application.  
> it should be easy to enable debugging and capture the malloc/free
> sequences and run them again manually.
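Capturing the sequence Mike suggests could be as simple as routing the application's calls through thin logging wrappers; `xmalloc`/`xfree` are our names for illustration, not a uClibc facility:

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal logging wrappers: route the application's allocations through
 * these (e.g. via a macro) and the emitted trace can later be replayed
 * in a standalone harness against the suspect malloc. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    fprintf(stderr, "malloc(%zu) = %p\n", n, p);
    return p;
}

static void xfree(void *p)
{
    fprintf(stderr, "free(%p)\n", p);
    free(p);
}
```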
> 
>> It turns out, we believe, to be because new_mmb is allocated after the
>> mmb list is iterated through to find the insertion point. When the
>> mmb_heap also runs out and needs to be extended just after the regular
>> heap has been extended, the mmb list could be messed up. We moved the
>> new_mmb allocation up and the problem seems to have been fixed.
> 
> i dont see why the current code is a problem.  it's a singly linked list
> which means if the list is walked to the end, the new_mmb will be
> 'inserted' as the last item in the linked list.  prev_mmb points to the
> last valid entry in the list and mmb is null.  so the last valid entry
> will be updated to point to new_mmb and it will have its next member set
> to null.  i dont see any place where the mmb list 'could be messed up'.
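Mike's reading of the walk, sketched with the variable names from his description (`prev_mmb`/`mmb`; the struct and `append` are illustrative, not copied from malloc.c):

```c
#include <stddef.h>

struct mmb {
    struct mmb *next;
};

/* Walk the singly linked list to its end: prev_mmb finishes on the last
 * valid entry and mmb on NULL, so new_mmb is linked in as the tail with
 * its next member set to NULL, exactly as described. */
static void append(struct mmb **head, struct mmb *new_mmb)
{
    struct mmb *prev_mmb = NULL, *mmb = *head;
    while (mmb) {
        prev_mmb = mmb;
        mmb = mmb->next;
    }
    new_mmb->next = NULL;
    if (prev_mmb)
        prev_mmb->next = new_mmb;
    else
        *head = new_mmb;
}
```

The race Freeman points at is that this walk is only safe if the list cannot change between finding `prev_mmb` and the final link-in.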
> 
> if you look a few lines up, the recursive memory-full-issue should
> already be handled because a mmap for more memory is done, and that mmap
> is put into the heap by the heap free call.
> 
>> 2. While trying to fix the above issue, we read the code and found a
>> multi-threading issue with this mmb list handling. This list is only
>> halfway protected in free.c and not protected by any lock at all in
>> malloc.c. Is that intentional?
> 
> looks like the locking fixes we have in the blackfin tree werent pushed
> upstream.  i'll have to rebase them first, but it should at least
> partially cover what you see.  if it doesnt, i'll stitch in your pieces.
> 
>> 3. In an embedded world without an MMU, it is not guaranteed that the
>> mmap syscall will always return a valid block, and that's probably why
>> the return value, block, is checked immediately after the syscall. But
>> it seems we are not checking the return value of new_mmb, which is
>> allocated from the mmb_heap? Is that a potential issue?
> 
> you have no guarantee of mmap returning valid memory under a mmu-system
> either.  typically an oom situation will have an application crash
> quickly, so this particular missing check isnt a big deal, but it should
> probably still be added.  i imagine in a threaded situation, one thread
> could grab the fresh memory before the original thread got a chance to
> use it and thus got null back.
> -mike
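
The check Mike agrees is missing might look roughly like this (`register_block` and the struct layout are illustrative, not the actual malloc.c code):

```c
#include <errno.h>
#include <stddef.h>

struct mmb {
    void *mem;
    size_t size;
    struct mmb *next;
};

/* If the descriptor allocation from the mmb heap failed, report ENOMEM
 * instead of dereferencing a NULL pointer later. */
static int register_block(struct mmb **head, struct mmb *new_mmb,
                          void *block, size_t size)
{
    if (new_mmb == NULL) {
        errno = ENOMEM;
        return -1;
    }
    new_mmb->mem = block;
    new_mmb->size = size;
    new_mmb->next = *head;
    *head = new_mmb;
    return 0;
}
```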
> _______________________________________________
> uClibc mailing list
> [email protected]
> http://lists.busybox.net/mailman/listinfo/uclibc

