Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-204105491
This is a little unexpected; I'd expect that if there isn't enough buffer
space in the `WritableByteChannel`, you'd get a short write and that's it. The
code already takes care of that by keeping track of how many bytes were
written, and a quick look at the netty code shows it does the same (by
default it spins up to 16 times, calling `transferTo` in a loop to check
whether it makes progress).
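To make the pattern concrete, here is a minimal sketch of the kind of loop described above: track how many bytes have been written so far, and spin on `transferTo` a bounded number of times looking for progress, mirroring netty's default of 16 attempts. The class and method names here are hypothetical, not the actual Spark or netty code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch: transfer bytes from a FileChannel to a possibly
// non-blocking WritableByteChannel, tolerating short (even zero-byte) writes.
public final class TransferLoop {
    // Mirrors netty's default write spin count of 16.
    private static final int WRITE_SPIN_COUNT = 16;

    public static long transferFully(FileChannel src, WritableByteChannel dest,
                                     long position, long count) throws IOException {
        long written = 0;
        while (written < count) {
            long w = 0;
            // Spin a bounded number of times looking for progress.
            for (int i = 0; i < WRITE_SPIN_COUNT; i++) {
                w = src.transferTo(position + written, count - written, dest);
                if (w > 0) {
                    break;
                }
            }
            if (w == 0) {
                // No progress after spinning: with a non-blocking target a
                // real caller would register for writability and retry later.
                break;
            }
            written += w;
        }
        return written;
    }
}
```

With a blocking target channel this loop simply completes the transfer; with a non-blocking one it returns the short count so the caller can resume from `position + written` once the channel is writable again.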
Do you know where that is breaking? Are we maybe failing to set some flag
somewhere that properly configures the channels as non-blocking? Or is this an
issue with the underlying `WritableByteChannel` that ends up being used (and
do you know what that implementation is)?