On 4/6/16 0:05 , Kilian Ries wrote:
> -----------------  lwp# 85 / thread# 85  --------------------
>  ffffdf7fff29f0ea _lwp_kill () + a
>  ffffdf7fff2338f0 raise (6) + 20
>  ffffdf7fff20db78 abort () + 98
>  000000000054b172 qemu_oom_check (0) + 49
>  000000000054b1ab qemu_memalign (200, 7e0000) + 33
>  0000000000508a5d qemu_blockalign (f9dc70, 7e0000) + 4f
>  000000000050c485 handle_aiocb_rw (9c51b5570) + c2
>  000000000050c770 aio_thread (0) + 166
>  ffffdf7fff297b5a _thrp_setup (ffffdf7fff079240) + 8a
>  ffffdf7fff297e70 _lwp_start ()

Based on this thread, I think I have an idea of what's happening and
how to solve it.

Originally we didn't have preadv / pwritev in illumos, and when we
initially added them, the number of iovecs they would accept was
limited, and QEMU didn't really respect IOVEC_MAX. This matters because
what QEMU appears to be doing here is: when it has an I/O vector it
can't submit directly, it falls back to allocating one large contiguous
buffer and copying the whole request into it so it can be sent in a
single call.
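
To make that concrete, here's a minimal sketch of that fallback path.
The name rw_linear_fallback and its signature are hypothetical (loosely
modeled on the linear path behind handle_aiocb_rw); qemu_blockalign /
qemu_vfree are the real QEMU allocator calls, and qemu_blockalign() is
what aborts via qemu_oom_check() in the trace above when the big
allocation fails:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* QEMU helpers, declared here so the sketch is self-contained;
     * qemu_blockalign() abort()s through qemu_oom_check() on failure. */
    void *qemu_blockalign(void *bs, size_t size);
    void qemu_vfree(void *ptr);

    /* Hypothetical stand-in for QEMU's linear (non-vectored) fallback. */
    static ssize_t rw_linear_fallback(void *bs, int fd, off_t offset,
                                      const struct iovec *iov, int iovcnt,
                                      int is_write)
    {
        size_t total = 0;
        for (int i = 0; i < iovcnt; i++)
            total += iov[i].iov_len;

        /* One contiguous, aligned buffer for the whole request; for the
         * 0x7e0000-byte (~8 MB) request in the trace, this is the
         * allocation that blew up. */
        char *buf = qemu_blockalign(bs, total);
        char *p = buf;
        ssize_t ret;

        if (is_write) {
            /* Gather: copy every segment into the bounce buffer. */
            for (int i = 0; i < iovcnt; i++) {
                memcpy(p, iov[i].iov_base, iov[i].iov_len);
                p += iov[i].iov_len;
            }
            ret = pwrite(fd, buf, total, offset);
        } else {
            ret = pread(fd, buf, total, offset);
            /* Scatter: copy whatever was read back into the segments. */
            size_t left = ret > 0 ? (size_t)ret : 0;
            for (int i = 0; left > 0 && i < iovcnt; i++) {
                size_t n = left < iov[i].iov_len ? left : iov[i].iov_len;
                memcpy(iov[i].iov_base, p, n);
                p += n;
                left -= n;
            }
        }

        qemu_vfree(buf);
        return ret;
    }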

So in this case, I think what we can do is lift the preadv / pwritev
restrictions that were put in place originally. That has the advantage
of reducing the memory-allocation burden on QEMU, which should speed up
I/O processing a bit.
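
For contrast, a minimal sketch of the vectored path this would let QEMU
take, using the standard POSIX preadv / pwritev calls; there's no bounce
buffer and no copying, as long as iovcnt stays within the system's
iovec limit:

    #include <sys/types.h>
    #include <sys/uio.h>

    /* Submit the vector directly to the kernel: no allocation, no memcpy. */
    static ssize_t rw_vectored(int fd, off_t offset,
                               const struct iovec *iov, int iovcnt,
                               int is_write)
    {
        return is_write ? pwritev(fd, iov, iovcnt, offset)
                        : preadv(fd, iov, iovcnt, offset);
    }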

If I can produce a platform or a QEMU binary to test this against,
would you be in a position to run it again, given that the problem
seems to reproduce fairly frequently for you? It might be a couple of
days before I can get to that.

Robert


