On 2/4/19 7:55 PM, Jan Kiszka wrote:
>> -		 * Release the lock while copying the data to
>> -		 * keep latency low.
>> +		 * We have to drop the lock while reading in
>> +		 * data, but we can't rollback on bad read
>> +		 * from user because some other thread might
>> +		 * have populated the memory ahead of our
>> +		 * write slot already: bluntly clear the
>> +		 * unavailable bytes on copy error.
>>  		 */
>>  		cobalt_atomic_leave(s);
>> -		ret = xnbufd_copy_to_kmem(rsk->bufmem + wroff, bufd, n);
>> -		if (ret < 0)
>> -			return ret;
>> +		xret = xnbufd_copy_to_kmem(rsk->bufmem + wroff, bufd, n);
>>  		cobalt_atomic_enter(s);
>> -		/*
>> -		 * In case we were preempted while copying the
>> -		 * message, we have to write the whole thing
>> -		 * again.
>> -		 */
>> -		if (rsk->wrtoken != wrtoken) {
>> -			xnbufd_reset(bufd);
>> -			goto redo;
>> +		if (xret < 0) {
>> +			memset(rsk->bufmem + wroff + n - xret, 0, xret);
>
> This looks fishy, to the compiler and also to me.
>

Oh, well. Paste&copy blunder; the fix originates from a distinct co-kernel implementation. I'll resubmit.
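
To spell out the fishiness: with xret < 0, wroff + n - xret overshoots the write slot, and xret converts to a near-SIZE_MAX length for memset(). A minimal standalone sketch of the problem (stand-in buffer and sizes; the whole-slot clear at the end is one plausible reading of the comment's intent, not necessarily the fix I'll resubmit):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main(void)
{
	char bufmem[64];
	size_t wroff = 0, n = 16;
	ssize_t xret = -14;	/* e.g. -EFAULT from a failed user copy */

	/*
	 * The quoted line: memset(bufmem + wroff + n - xret, 0, xret);
	 * With xret < 0, the offset lands past the n-byte slot, and
	 * xret converts to a huge size_t, so the memset would scribble
	 * far beyond the buffer.
	 */
	printf("offset: %zu, length: %zu\n",
	       wroff + n - (size_t)xret, (size_t)xret);

	/*
	 * One plausible intent: xret is an error code, not a byte
	 * count, so on a failed copy the whole n-byte slot is
	 * unavailable and gets cleared.
	 */
	if (xret < 0)
		memset(bufmem + wroff, 0, n);

	return 0;
}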
-- 
Philippe.