Weird behavior with writes to a unix socket pair
Hi,

I've been testing the effects of the SO_RCVBUF/SO_SNDBUF socket options on
unix sockets on various platforms, and I've run into some curious unix
socket behavior on Cygwin (independent of the SO_RCVBUF/SO_SNDBUF options).
This piece of code should get blocked in one of the writes (and it does so
for regular pipes):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    int main()
    {
        int p[2];
        socketpair(AF_LOCAL, SOCK_STREAM, 0, p); /* pipe(p); */
        /* setsockopt(p[0], SOL_SOCKET, SO_RCVBUF, &(int){1}, sizeof(int)); */
        /* setsockopt(p[0], SOL_SOCKET, SO_SNDBUF, &(int){1}, sizeof(int)); */
        /* setsockopt(p[1], SOL_SOCKET, SO_RCVBUF, &(int){1}, sizeof(int)); */
        /* setsockopt(p[1], SOL_SOCKET, SO_SNDBUF, &(int){1}, sizeof(int)); */
        ssize_t n;
        for (unsigned j = 0; j < 1; j++)
            for (unsigned char i = 1; i != 0; i++) {
                printf("%u\n", j*256 + i);
                if (0 > (n = write(p[1], &i, 1)))
                    perror("write");
            }
    }

but with a unix socket pair the first couple of writes proceed quickly, and
then the writes continue with ever-increasing intervals between them. I
don't see why it should behave like this, and I think you might want to
look into why it does.

Best regards,
Petr Skocik

--
Problem reports: http://cygwin.com/problems.html
FAQ: http://cygwin.com/faq/
Documentation: http://cygwin.com/docs.html
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
malloc(0) crashing with SIGABRT
There's been a Twitter discussion on how different POSIX platforms handle
malloc(0): https://twitter.com/sortiecat/status/1170697927804817412 .
As for Cygwin, the answer appears to be "not well", but this should be easy
to fix.

Best regards,
Petr Skocik
malloc(0) crashes with SIGABRT
https://twitter.com/sortiecat/status/1170697927804817412
bind() behavior inconsistency with Linux and MacOS
Hi. I don't know if this is technically a bug, but I've noticed that unlike
on Linux or MacOS, I cannot bind a unix domain socket in a child process and
then listen on it in the parent. The bind succeeds, but `listen()` in the
parent then fails with EINVAL. (The reason I'd like to `bind` in a different
process is so I could `chdir` to a different directory before `bind`ing and
then conceptually back, without having the `chdir` break file operations in
different threads.) Just wanted to let you know in case there is an easy fix.

Best regards,
Petr Skocik

Example code:

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/un.h>
    #include <sys/wait.h>
    #include <unistd.h>
    int main(int C, char **V)
    {
        int sfd;
        if (0 > (sfd = socket(AF_UNIX, SOCK_STREAM, 0)))
            return perror("socket"), 1;
        struct sockaddr_un uaddr;
        uaddr.sun_family = AF_UNIX;
        char const *nm = "FOO.sock";
        size_t nmlen = nm[0] ? strlen(nm) : (1 + strlen(nm + 1));
        size_t actlen = nmlen <= sizeof(uaddr.sun_path) - 1
                            ? nmlen : sizeof(uaddr.sun_path) - 1;
        memcpy(&uaddr.sun_path[0], nm, actlen);
        uaddr.sun_path[actlen] = '\0';
        size_t socklen = 0 ? sizeof(struct sockaddr_un)
                           : offsetof(struct sockaddr_un, sun_path) + actlen + 1;
        fflush(stdout);
        pid_t pid = 0;
        int fork_eh = atoi(V[1] ? V[1] : "1");
        if (fork_eh)
            if (0 > (pid = fork()))
                return perror("fork"), 1;
        if (0 == pid) {
            printf("pid=%d binding\n", (int)getpid());
            if (0 > bind(sfd, (struct sockaddr *)&uaddr, (socklen_t)socklen))
                perror("bind"), _exit(1);
            printf("pid=%d bound\n", (int)getpid());
            fflush(stdout);
            if (fork_eh)
                _exit(0);
        }
        wait(0);
        if (0 > listen(sfd, INT_MAX))
            return perror("listen"), 1;
        printf("pid=%d listening\n", (int)getpid());
    }
Re: uc_sigmask set in a sigaction signal handler not honored
> On Apr 3 14:15, Corinna Vinschen wrote:
> > On Apr 3 11:27, Petr Skočík wrote:
> > > Hi. Correct me if I'm wrong but POSIX appears to define
> > >
> > > https://pubs.opengroup.org/onlinepubs/7908799/xsh/ucontext.h.html
> > >
> > > as, among other things, containing the field:
> > >
> > >     sigset_t uc_sigmask    the set of signals that are blocked when
> > >                            this context is active
> > >
> > > and it also specifies that the third argument to a .sa_sigaction
> > > signal handler is a ucontext_t* cast to void*.
> > >
> > > So it should follow that doing
> > >
> > >     void act(int Sig, siginfo_t *Info, void *Uctx)
> > >     {
> > >         ucontext_t *uctx = Uctx;
> > >         sigfillset(&uctx->uc_sigmask);
> > >     }
> > >
> > > from a signal handler should alter the signal mask of the thread the
> > > signal ran on.
> > >
> > > This is how Linux and MacOS behave, but not CygWin, as the following
> > > program shows:
> >
> > What you're asking for is really complicated.
> >
> > The context given to act is the context at the time the signal function
> > is called.  In Cygwin (lower case w) this is a copy of the context.
> >
> > sigfillset() has not the faintest clue where this context comes from,
> > it just sets the signal mask value without taking any further action.
> >
> > There are no provisions to control if the called function changes the
> > context, other than via setcontext / swapcontext, and I don't see that
> > POSIX requires anything else.  Both functions change the current
> > thread's sigmask according to the value of uc_sigmask.
>
> Or maybe I'm just dumb.  Would it suffice if the thread's signal mask is
> changed to uc_sigmask when the signal function returns?
>
> Corinna

Thanks for the feedback.

> Would it suffice if the thread's signal mask is
> changed to uc_sigmask when the signal function returns?

That is the idea. It's what Linux and MacOS do.
Petr Skocik
uc_sigmask set in a sigaction signal handler not honored
Hi. Correct me if I'm wrong but POSIX appears to define

https://pubs.opengroup.org/onlinepubs/7908799/xsh/ucontext.h.html

as, among other things, containing the field:

    sigset_t uc_sigmask    the set of signals that are blocked when this
                           context is active

and it also specifies that the third argument to a .sa_sigaction signal
handler is a ucontext_t* cast to void*.

So it should follow that doing

    void act(int Sig, siginfo_t *Info, void *Uctx)
    {
        ucontext_t *uctx = Uctx;
        sigfillset(&uctx->uc_sigmask);
    }

from a signal handler should alter the signal mask of the thread the signal
ran on.

This is how Linux and MacOS behave, but not CygWin, as the following
program shows:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>
    void prmask(void)
    {
        sigset_t mask;
        pthread_sigmask(SIG_SETMASK, 0, &mask);
        for (int i = 1; i <= 64; i++) {
            printf("%d", sigismember(&mask, i));
        }
        puts("");
    }
    void act(int Sig, siginfo_t *Info, void *Uctx)
    {
        ucontext_t *uctx = Uctx;
        sigfillset(&uctx->uc_sigmask);
    }
    int main()
    {
        struct sigaction sa;
        sa.sa_sigaction = act;
        sa.sa_flags = SA_SIGINFO;
        sigfillset(&sa.sa_mask);
        prmask();
        sigaction(SIGINT, &sa, 0);
        sigaction(SIGALRM, &sa, 0);
        if (1) setitimer(ITIMER_REAL,
                         &(struct itimerval){ .it_value = { .tv_usec = 1 } }, 0);
        pause();
        prmask();
    }

I think this is a bug, so I'm reporting it. Do you think it can be fixed in
the near future?

Best regards,
Petr Skocik