Hi,

I have just started using Boost.Fiber and am completely confused about the 
concurrency behavior of its condition variables.
The Boost.Fiber documentation provides the following example for using condition 
variables:

boost::fibers::mutex mtx;
boost::fibers::condition_variable cond;
bool data_ready = false;

void wait_for_data_to_process() {
    {
        std::unique_lock< boost::fibers::mutex > lk( mtx);
        while ( ! data_ready) {
            cond.wait( lk);
        }
    } // release lk
    process_data();
}

void prepare_data_for_processing() {
    retrieve_data();
    prepare_data();
    {
        std::unique_lock< boost::fibers::mutex > lk( mtx);
        data_ready = true;
    }
    cond.notify_one();
}

But when we run the above, we see that the fibers end up making system calls 
(sched_yield and futex) while waiting on the std::unique_lock. We are invoking 
the above concurrency primitives across multiple OS threads, and we want to 
avoid system calls in the fibers.

The strace output (with stack traces) is below:
futex(0x7ff4f4005160, FUTEX_WAKE_PRIVATE, 1) = 0
> /usr/lib64/libc.so.6(__GI___lll_lock_wake+0x17) [0x87777]
> /usr/lib64/libc.so.6(__GI___pthread_mutex_unlock_usercnt+0x94) [0x8f2d4]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::algo::round_robin::notify()+0x24)
>  [0x73f4]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::context::wake(unsigned 
> long)+0x29) [0xaac9]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::wait_queue::notify_one()+0x40)
>  [0xbca0]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::mutex::unlock()+0x31) 
> [0xc151]

sched_yield() = 0
> /usr/lib64/libc.so.6(__sched_yield+0xb) [0xf3d5b]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::detail::spinlock_ttas::lock()+0xd4)
>  [0x9044]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::mutex::lock()+0x28) 
> [0xc198]
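That sched_yield is consistent with how a test-and-test-and-set spinlock is 
commonly implemented: spin on a plain load for a while, then back off by 
yielding the OS thread. A minimal sketch of the idea (my own illustration, not 
Boost's actual spinlock_ttas code, which adds tuned backoff):

```cpp
#include <atomic>
#include <thread>

// Illustrative TTAS spinlock. The yield-on-contention fallback is
// where a sched_yield system call would show up under strace.
class ttas_spinlock {
    std::atomic<bool> locked_{false};
public:
    void lock() {
        for (;;) {
            // "Test": spin on a relaxed load first, to avoid
            // hammering the cache line with atomic writes.
            while (locked_.load(std::memory_order_relaxed)) {
                // Give up the CPU while the lock is held elsewhere --
                // this is the sched_yield seen in the trace.
                std::this_thread::yield();
            }
            // "Test-and-set": try to actually take the lock.
            if (!locked_.exchange(true, std::memory_order_acquire)) {
                return;
            }
        }
    }
    void unlock() {
        locked_.store(false, std::memory_order_release);
    }
};
```

So even a "user-space" spinlock makes system calls once it is contended 
across OS threads.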

Apparently even scheduling a fiber runs into lock waits; we are not sure why 
scheduling a fiber would block:
sched_yield() = 0
> /usr/lib64/libc.so.6(__sched_yield+0xb) [0xf3d5b]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::detail::spinlock_ttas::lock()+0xd4)
>  [0x9044]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::scheduler::remote_ready2ready_()+0x33)
>  [0xd353]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::scheduler::dispatch()+0x50)
>  [0xd4e0]
> /usr/local/lib/libboost_fiber.so.1.85.0(boost::fibers::dispatcher_context::run_(boost::context::fiber&&)+0xd)
>  [0xb46d]
> /usr/local/lib/libboost_fiber.so.1.85.0(void 
> boost::context::detail::fiber_entry<boost::context::detail::fiber_record<boost::context::fiber,
>  boost::fibers::stack_allocator_wrapper, std::_Bind<boost::context::fiber 
> (boost::fibers::dispatcher_context::*(boost::fibers::dispatcher_context*, 
> std::_Placeholder<1>))(boost::context::fiber&&)> > 
> >(boost::context::detail::transfer_t)+0x3a) [0xb4ca]
> /usr/local/lib/libboost_context.so.1.85.0(make_fcontext+0x2e) [0x117e]
> No DWARF information found

We want to avoid all system calls in the fibers because of their performance 
impact, and we are confused why the examples provided by Boost.Fiber run into 
system calls.

Any help is greatly appreciated ...
regards
-Amit
_______________________________________________
Boost-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://lists.boost.org/mailman3/lists/boost-users.lists.boost.org/
Archived at: 
https://lists.boost.org/archives/list/[email protected]/message/MVWWNYC6MIQYGJMAFGM35Y3ST7VXHSHV/
 
