On Friday, February 23, 2018 at 9:52:39 AM UTC-6, Ben Noordhuis wrote:
>
> On Fri, Feb 23, 2018 at 2:48 AM, CM <crusad...@gmail.com> wrote: 
> > Q1. ... shouldn't these static variables 
> > be replaced with C equivalent of std::atomic<...>? 
>
> Libuv is a C89 code base and can't use anything from stdatomic.h. 
>
 

> C89 doesn't have a concept of concurrency so you can't even really 
> talk of whether it's UB or not.  It comes down to what the compilers 
> and architectures we support do and I feel reasonably confident saying 
> hand-rolled atomic ops won't make a practical difference. 
>

> If you're not convinced, think about what the CPU or compiler could 
> realistically do that would make a difference.  Worst case, libuv 
> makes a few unnecessary system calls until the store is settled (and 
> in practice system calls are pretty effective memory barriers.)
>

I know this will work on all the most popular platforms, and that using 
proper barriers isn't going to have any noticeable impact. I'm just pointing 
out that it is, strictly speaking, incorrect in the C++ world -- and 
therefore probably in the C world too.
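
For context, the idiom I'm referring to is roughly the following (a 
simplified sketch with made-up names, not libuv's actual code):

    /* Rough sketch of the "static feature flag" idiom under discussion;
       identifiers are illustrative, not libuv's. */
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int no_cloexec;  /* plain static int; C89-friendly */

    int open_cloexec_sketch(const char* path, int flags) {
      int fd;

      if (!no_cloexec) {
        fd = open(path, flags | O_CLOEXEC);
        if (fd != -1)
          return fd;
        if (errno != EINVAL)
          return -1;
        /* Unsynchronized store: formally a data race if another thread
           reads the flag concurrently.  A C11 version would use an
           atomic_int with atomic_store_explicit(..., memory_order_relaxed);
           the practical worst case here is just a few extra open()
           attempts with O_CLOEXEC. */
        no_cloexec = 1;
      }

      fd = open(path, flags);
      if (fd == -1)
        return -1;
      if (fcntl(fd, F_SETFD, FD_CLOEXEC) == -1) {
        close(fd);
        return -1;
      }
      return fd;
    }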


> > Another consideration is possibility of 'false negative'. For example, in 
> > the link below if invalid mask somehow gets passed and open() returns EINVAL 
> > -- it will activate the flag that will stay on until process exits. Not sure 
> > if open() can return EINVAL on bad 'path'. 
>
> uv__open_cloexec() is only used internally with known-good flags.  If 
> you can get libuv to erroneously conclude that O_CLOEXEC isn't 
> available when it is, please file a bug.
>

Yes, I noticed uv__open_cloexec() is used only internally, but I haven't 
checked every place this idiom is used -- there might be a case where 
user-supplied data can "break through" and trigger the flag.
 

> > Q2. libuv tries to deal with EMFILE errors in accept() by closing "reserve 
> > fd", accepting connection, closing it and reopening reserve fd again. It 
> > isn't guaranteed to work in multithreaded env. What is the recommendation 
> > for client's code to deal with EMFILE error that may bubble up? 
>
> Increase the fd limit and try again. :-)
>

Then it probably makes sense to call exit() instead of fiddling with the 
reserve fd ;-)
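
(That said, bumping the soft limit up to the hard limit at startup is cheap 
enough -- something along these lines, assuming an unprivileged process; 
going beyond the hard limit needs privileges:)

    #include <sys/resource.h>

    /* Raise the soft RLIMIT_NOFILE to the hard limit; this is the best an
       unprivileged process can do for itself. */
    static void raise_fd_limit(void) {
      struct rlimit lim;

      if (getrlimit(RLIMIT_NOFILE, &lim) == 0) {
        lim.rlim_cur = lim.rlim_max;
        (void) setrlimit(RLIMIT_NOFILE, &lim);
      }
    }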

 

> > As I see it I can: 
> > - stop listening for a while and try uv_accept()'ing again. But then there 
> > is no point in having (mandatory) "EMFILE-handling" trick -- I am dealing 
> > with this myself anyway, no point in wasting one fd per eventloop 
> > - tear down listening socket and try to restart it repeatedly (with some 
> > delay) until it works -- again, no point in having that trick built in and 
> > mandatory 
> > 
> > maybe it makes sense to make that trick optional? Unless there is a better 
> > way for handling this situation... 
>
> I think you may be misunderstanding what libuv tries to achieve.  It's 
> a best-effort attempt to prevent busy-looping where: 
>
> 1. The listen socket repeatedly wakes up because there are pending 
> connections, but 
> 2. Said connections can't be accepted because the file descriptor 
> limit has been reached. 
>

Yes, this is precisely how I understand it. My point was that even though 
this effort is good for the majority of apps (which traditionally don't care 
about malloc() returning NULL or about running out of fds), for those apps 
that do care the mechanism gets in the way. Such an app may want to keep 
track of connections closed this way, or take some other approach -- but it 
can't, because the mechanism can't be switched off.

I was also hoping someone knows a good way to handle this gracefully...
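
For anyone following along, the mechanism we're discussing works roughly 
like this (a simplified sketch; the real thing lives in libuv's 
src/unix/stream.c):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int emfile_fd = -1;  /* the "reserve" descriptor */

    void reserve_fd_init(void) {
      emfile_fd = open("/dev/null", O_RDONLY);
    }

    /* Called when accept() fails with EMFILE/ENFILE: release the reserve
       fd, accept and immediately close pending connections to drain the
       backlog, then re-acquire the reserve fd. */
    void emfile_trick(int listen_fd) {
      int fd;

      if (emfile_fd == -1)
        return;

      close(emfile_fd);
      emfile_fd = -1;

      for (;;) {
        fd = accept(listen_fd, NULL, NULL);
        if (fd == -1)
          break;      /* EAGAIN once drained, or EMFILE again */
        close(fd);    /* the peer sees its connection dropped */
      }

      emfile_fd = open("/dev/null", O_RDONLY);
    }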


> > Q3. EMFILE handling trick employs "reserve" loop->emfile_fd which is 
> > populated by uv__open_cloexec("/", O_RDONLY) or 
> > uv__open_cloexec("/dev/null", O_RDONLY). Isn't it a blocking operation? Why 
> > not dup(STDOUT) instead? 
>
> The file descriptor may not be open and even if it is, the UNIX 
> semantics of tty file descriptors are such that you don't want to keep 
> them open unnecessarily. 
>

Can you elaborate a bit? I am not familiar with the finer points of tty 
behavior.
 

> Apropos blocking operation: in a technical sense, yes; in a practical 
> sense, no.  /dev/null is not an on-disk entity and neither is / in 
> that its inode data is effectively always in memory. 
>
 
I.e. we are making an assumption here which is known to work on Linux (as of 
now), but isn't guaranteed in the general case.
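
(For concreteness, what I had in mind with dup() was a hypothetical variant 
like the one below -- and I can see it runs into exactly the problems you 
describe: stdout may be closed, and if it is a tty we would be holding a tty 
fd open for no reason.)

    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical dup()-based way to populate the reserve fd (not what
       libuv does).  Returns -1 if STDOUT_FILENO is closed, so a caller
       would still need the open("/") / open("/dev/null") fallback. */
    static int reserve_fd_from_stdout(void) {
      int fd;

      fd = dup(STDOUT_FILENO);
      if (fd != -1)
        (void) fcntl(fd, F_SETFD, FD_CLOEXEC);  /* don't leak across exec() */
      return fd;
    }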
