> Care to explain why you can't discuss how the API might or might not work
> without throwing around gratuitous insults?

They are warnings, not insults. I'm sorry you see them that way.

> This last message to which
> I'm responding is merely condescending; the previous was downright
> insulting and offensive.  I can't see how that helps anyone.  And after
> all, it's not like the current mess of a non-blocking API is your design
> nor your code, at least not as far as I can tell.

You are reporting bugs in an API that you fundamentally do not understand.
I'm simply pointing this out and warning you that this is not a good thing
to do.

> The problem with "read until there's no more" semantics is that you can't
> really use them to do fair I/O in a traditional Unix single-threaded
> event-driven model.

Well, you can't be fair in a traditional Unix single-threaded event-driven
model anyway. If one client, for example, triggers a rare condition whose
handling code has never run before and has to be faulted in from disk, all
clients have to wait while that happens. If the local disk is busy, they
all sit there.

> If I have no way to find out whether there might be
> more I/O available to drain except to try to drain it, I have to either:
>
> 1) Service each client in turn whether or not they're ready, since I can't
>    tell whether they're ready without paying the whole cost of the read()
>    and the decryption
>
> or
>
> 2) Fully drain everything each client might have cared to write me each
>    time I find that he's ready at all.  This allows a single client to
>    consume as much of a server's time as it cares to, to the detriment
>    of the others.

If you want a limit on how much data you read from a single client, just
read only up to that limit. You'd have the same issue with 'select'. When
'select' tells you that there's data from a client, you have no idea how
much, and the only way to tell is to read it.
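
For example, something like this (untested sketch; MAX_PER_WAKEUP, the
client structure, handle_data and close_client are made-up names, and
real error handling is trimmed):

    /* Read at most MAX_PER_WAKEUP bytes from this client, then move on. */
    size_t budget = MAX_PER_WAKEUP;
    while (budget > 0) {
        char buf[4096];
        size_t want = budget < sizeof(buf) ? budget : sizeof(buf);
        int n = SSL_read(client->ssl, buf, (int)want);
        if (n > 0) {
            handle_data(client, buf, n);        /* your application code */
            budget -= (size_t)n;
        } else {
            int err = SSL_get_error(client->ssl, n);
            if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE)
                close_client(client);           /* real error or EOF */
            break;     /* either way, nothing more without sleeping */
        }
    }
    /* If the budget ran out, remember to come back to this client even
       if select/poll never mentions it again. */
    client->more_pending = (budget == 0);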

> The traditional Unix non-blocking semantics where you can stop reading
> whenever you like and sleep (via select or poll on _many_ streams at once)
> to find out who's got more for you don't have this problem.  So many, many
> event-driven Unix applications are written to do precisely that.

I still don't see what the problem is. In either case, if you don't fully
drain the connection, you have to come back and read the rest later. If you
want to call 'poll' or 'select' to discover more sockets, you can.

The only difference is that in one case, you can use 'poll' or 'select' to
rediscover the sockets you already discovered and in the other case you have
to keep track. But this is as simple as an extra 'or' clause in the 'if' of
your poll/select loop.

Instead of 'if the poll/select discovered this socket' then read, it's 'if
the poll/select discovered this socket or I wasn't waiting for it to be
discovered' then read.
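
In code, that's roughly this (sketch only; 'pending_read' is a flag you
keep per connection yourself, not anything OpenSSL or the kernel gives
you):

    /* after poll()/select() returns: */
    for (c = clients; c != NULL; c = c->next) {
        int discovered = FD_ISSET(c->fd, &readfds);   /* select said so */
        if (discovered || c->pending_read) {  /* ...or we still owed it a read */
            /* service_client() does the SSL_read calls and sets
               c->pending_read if it stopped before draining everything */
            service_client(c);
        }
    }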

> If I'm hearing you correctly you are saying that not only cannot one do
> that with OpenSSL, one ought not want to do such a thing.  I do not
> grasp why.

Because a single-threaded poll/select loop is one of the poorest server
architectures there is. If any line of code anywhere unexpectedly blocks,
the entire server is toast. This means that not only does the 20% of the
code that's really performance critical have to be designed carefully;
even the parts that shouldn't be performance critical *SURPRISE* turn out
to be, because any unexpected blocking stalls the whole server. (This is
why most IRC servers are so bursty, by the way.)

> Incidentally, if one's not intended to peek under the hood, again, I ask
> why OpenSSL _encourages_ this by providing no sleep-for-IO mechanism
> which does not, in fact, _require_ peeking under the hood.

OpenSSL tells you when to sleep for I/O. You should not sleep for I/O
unless told to, because on your own you have no way to know whether
sleeping for I/O is appropriate.
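
The "telling" is the SSL_ERROR_WANT_READ / SSL_ERROR_WANT_WRITE result you
get back from SSL_get_error when a call could not complete -- roughly:

    int n = SSL_read(ssl, buf, sizeof(buf));
    if (n <= 0) {
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:   /* now sleep until the socket is readable */
        case SSL_ERROR_WANT_WRITE:  /* now sleep until the socket is writable */
            /* this, and only this, is when blocking for I/O is appropriate */
            break;
        default:
            /* real error or clean shutdown */
            break;
        }
    }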

> Furthermore, the SSL_read() documentation, which I was so foolish as to
> use as my guidance to this portion of the API, explicitly says to use
> select() to find out if I/O's available after receiving a WANT_READ or
> WANT_WRITE error.  You _cannot do that_ without peeking under the hood,
> because you of course must break the abstraction and get the BIO's
> file descriptor to feed to select!

This is the one place where OpenSSL explicitly tells you to peek under the
hood. The point is that you cannot know when to do this on your own;
OpenSSL has to tell you. You are right, though, that this is the wrinkle.
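
The pattern the SSL_read man page describes looks roughly like this
(single-connection sketch, just for illustration):

    /* SSL_read returned <= 0 and SSL_get_error() gave us 'err' */
    int fd = SSL_get_fd(ssl);                  /* the peek under the hood */
    fd_set rfds, wfds;
    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    if (err == SSL_ERROR_WANT_READ)  FD_SET(fd, &rfds);
    if (err == SSL_ERROR_WANT_WRITE) FD_SET(fd, &wfds);
    select(fd + 1, &rfds, &wfds, NULL, NULL);  /* sleep until progress is possible */
    /* then repeat the exact same SSL_read() call */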

> I can think of a number of ways to support applications that would like
> to do something like typical Unix event-driven multiplexed I/O by
> augmenting, rather than altering, the existing API.  But if they're
> just going to be met by a barrage of condescension or insults, I'm not
> sure it's worth bothering to discuss...

The problem with some kind of SSL_poll or SSL_select is that it doesn't
actually solve the problem. Suppose that, instead of just using OpenSSL,
you were also using OpenFOO. Now your thread can't block in SSL_select,
because it needs to block in FOO_select to also properly handle FOO
protocol sockets.

So what you'd wind up doing is writing your own code that checks all the
SSL sockets to see which it is appropriate to 'select' on, checks all the
FOO sockets to see which it is appropriate to 'select' on, calls 'select'
on the combined set, and then reports the combined result.

But that's exactly what you should, and can, do now. This is the case where
you have to peek under the hood, and you can't make that go away. At least,
not any way that I can think of.
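
Schematically, something like this (SSL_get_fd, SSL_want_read and
SSL_want_write are real OpenSSL calls; the client lists, the flags and the
entire 'FOO' side are invented for the example):

    fd_set rfds, wfds;
    int maxfd = -1;
    FD_ZERO(&rfds);
    FD_ZERO(&wfds);

    /* ask OpenSSL what each of its connections is waiting for */
    for (c = ssl_clients; c != NULL; c = c->next) {
        int fd = SSL_get_fd(c->ssl);
        /* a connection that is simply waiting for new application data
           also belongs in rfds */
        if (SSL_want_read(c->ssl) || c->waiting_for_data)
            FD_SET(fd, &rfds);
        if (SSL_want_write(c->ssl))
            FD_SET(fd, &wfds);
        if (fd > maxfd) maxfd = fd;
    }

    /* ask the hypothetical OpenFOO library the same question */
    for (f = foo_clients; f != NULL; f = f->next) {
        if (FOO_wants_read(f->handle))  FD_SET(f->fd, &rfds);
        if (FOO_wants_write(f->handle)) FD_SET(f->fd, &wfds);
        if (f->fd > maxfd) maxfd = f->fd;
    }

    /* one combined sleep, then hand each ready descriptor back to the
       library that owns it */
    select(maxfd + 1, &rfds, &wfds, NULL, NULL);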

DS

