[ https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15856797#comment-15856797 ]
Daryn Sharp commented on HADOOP-13836:
--------------------------------------
I understand the difficulties of handling SSL partial reads/writes, reads
wanting to write, and vice versa. I'm interested in this feature, but the
issues I outlined are blockers - no nio pun intended. :)
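To be concrete about why the read path can't stay read-only, here is a minimal sketch (mine, not code from the patch): a single non-blocking unwrap() may demand more input, a bigger output buffer, or even an outbound *write* of handshake data.
{code:java}
// Illustrative sketch only: the states a non-blocking SSL read must handle.
import java.io.IOException;
import java.nio.ByteBuffer;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;

class SslReadSketch {
  void onReadable(SSLEngine engine, ByteBuffer netIn, ByteBuffer appIn)
      throws IOException {
    SSLEngineResult result = engine.unwrap(netIn, appIn);
    switch (result.getStatus()) {
      case BUFFER_UNDERFLOW:
        // Partial TLS record: stash netIn, re-arm OP_READ, retry later.
        break;
      case BUFFER_OVERFLOW:
        // appIn too small for the decoded record: grow it and re-unwrap.
        break;
      case OK:
        if (result.getHandshakeStatus()
            == SSLEngineResult.HandshakeStatus.NEED_WRAP) {
          // The "read wanting to write" case: handshake output must be
          // wrap()ed and flushed before more app data can be unwrapped.
        }
        break;
      case CLOSED:
        // Peer sent close_notify; tear the channel down.
        break;
    }
  }
}
{code}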
{quote}
bq. Multi-threaded clients generating requests faster than read will
indefinitely tie up a reader
I am not sure if it gets indefinitely tied up, but they will get processed
eventually.
{quote}
Yes, maybe, probably, but it's classic indefinite postponement, which is not
acceptable.
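What fairness means here, as an illustrative sketch (Connection/readOneRpc are hypothetical stand-ins, not code from the patch): a reader must do a bounded amount of work per selected connection per wakeup, otherwise one fast multi-threaded client postpones everyone else.
{code:java}
// Illustrative only: bounded work per connection per selector wakeup.
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

class FairReaderSketch {
  void doRead(Selector selector) throws IOException {
    selector.select();
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
      SelectionKey key = it.next();
      it.remove();
      if (key.isReadable()) {
        Connection c = (Connection) key.attachment();
        // At most one RPC per wakeup. A fast client with more data queued
        // simply gets selected again next loop, so slower clients still
        // make progress instead of being indefinitely postponed.
        c.readOneRpc();
      }
    }
  }

  interface Connection {
    void readOneRpc() throws IOException;
  }
}
{code}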
{quote}
bq. Clients sending a slow trickle of bytes will tie up a reader until a
request is fully read.
This is a problem that exists still today, when large data packets are sent and
we use ChannelIO on the server to process this.
{quote}
Incorrect. ChannelIO does loop using an nio-optimal buffer size, but it
reads/writes at most one chunk per call and bails out as soon as the
non-blocking op returns less than a full buffer.
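A simplified paraphrase of that behavior (not the verbatim Server source): chunked transfers with an early bail-out on a short read, so the reader never waits on a trickling client.
{code:java}
// Simplified sketch of the chunked, early-exit read loop described above.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

class ChannelIoSketch {
  private static final int NIO_BUFFER_LIMIT = 8 * 1024; // nio-optimal chunk

  static int channelRead(ReadableByteChannel ch, ByteBuffer buf)
      throws IOException {
    int originalLimit = buf.limit();
    int initialRemaining = buf.remaining();
    int ret = 0;
    while (buf.remaining() > 0) {
      try {
        int ioSize = Math.min(buf.remaining(), NIO_BUFFER_LIMIT);
        buf.limit(buf.position() + ioSize);
        ret = ch.read(buf);
        if (ret < ioSize) {
          break; // short read: give the thread back, don't wait for more
        }
      } finally {
        buf.limit(originalLimit);
      }
    }
    int nBytes = initialRemaining - buf.remaining();
    return (nBytes > 0) ? nBytes : ret;
  }
}
{code}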
{quote}
bq. Clients stalled mid-request will cause the reader to go into a spin loop.
The connection timeout on the stalled clients, would lead to closure of channel
and the spin loop breaks
{quote}
There's no acceptable justification for a spin loop...
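The non-spinning alternative, sketched (illustrative, not the patch): a 0-byte read mid-request parks the connection back on the selector; an idle-connection timeout, not a busy loop, is what eventually evicts the stalled client.
{code:java}
// Illustrative only: never spin on a stalled client; re-arm and return.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class NoSpinSketch {
  void onReadable(SelectionKey key, ByteBuffer buf) throws IOException {
    SocketChannel ch = (SocketChannel) key.channel();
    int n = ch.read(buf);
    if (n == 0) {
      // Stalled mid-request: re-arm OP_READ and yield the thread. The
      // selector wakes us only when bytes actually arrive.
      key.interestOps(key.interestOps() | SelectionKey.OP_READ);
      return;
    }
    if (n < 0) {
      ch.close(); // client went away
    }
  }
}
{code}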
bq. Note that SSL over the current protocol is not wire-compatible anyway, I
would argue that it might make sense to build a new protocol on top of HTTP/2
and to leverage great implementation available today (e.g., Netty 4.1 / gRPC).
[~wheat9] Given that EZ (encryption zones) has a lower performance impact, I
do agree something is very amiss. [~kartheek], please use a profiler to check
for a hot spot or a highly contended sync point. It may be correlated with
increased object allocation/copying causing an increase in young gen gc
frequency.
Unfortunately I have not seen good benchmarks for java gRPC. Given the
atrocious garbage generation rates of PB and guava, I have low confidence that
gRPC would be performant. Webhdfs is the poster child for the horrors of a
java REST protocol at scale. Even after all my attempts to tame webhdfs, and
even when capped with iptables to 5-10k connections max, a flood of perhaps
~10k ops/sec will blow up the heap and cause a full gc, or come dangerously
close. For comparison, we can now handle storms of rpc call rates exceeding
100k/sec.
> Securing Hadoop RPC using SSL
> -----------------------------
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
> Issue Type: New Feature
> Components: ipc
> Reporter: kartheek muthyala
> Assignee: kartheek muthyala
> Attachments: HADOOP-13836.patch, HADOOP-13836-v2.patch,
> HADOOP-13836-v3.patch, HADOOP-13836-v4.patch, Secure IPC OSS Proposal-1.pdf,
> SecureIPC Performance Analysis-OSS.pdf
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple
> Authentication and Security Layer (SASL), with Kerberos ticket based
> authentication or DIGEST-MD5 checksum based authentication. This proposal is
> about enhancing that security layer with SSL/TLS based encryption and
> authentication. SSL/TLS is a proposed Internet Engineering Task Force (IETF)
> standard that provides data security and integrity between two endpoints on
> a network. The protocol has made its way into a number of applications such
> as web browsing, email, internet faxing, messaging, VoIP, etc. Supporting it
> at the core of Hadoop would give good synergy with the applications on top
> and also bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL based encryption and authentication (x509 certificate)