On 07/08/14 16:15, Les Mikesell wrote:
On Thu, Aug 7, 2014 at 4:56 AM, David Sommerseth
<openvpn.l...@topphemmelig.net> wrote:
However, that is most likely less intrusive and complex than basically
having to re-write the event handler which ensures that each client
gets its "time slice" in OpenVPN.  OpenVPN's event handling is the only
thing which lets OpenVPN tackle more clients at the same time, inside
the same process/thread.
I don't see why you'd need to change that at all.  Let the parent
process continue to handle all of the client connections, and just add
a socket to each child process into the event loop.  Then instead
of recomputing the keys in the parent, send that work over the socket
to the child, which, being a fork, already has the same event handler.
I think the only extra complexity would be having to track 'work
pending' connection states until the child returns the completed work.
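Roughly what I'm picturing - just a sketch, nothing here is from the
OpenVPN tree, and do_rekey_work() is a made-up placeholder for the
actual key computation:

/* Sketch: hand CPU-heavy rekeying to a forked child over a socketpair
 * while the parent keeps serving all clients from its event loop. */
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* Made-up placeholder for the expensive key computation. */
static ssize_t do_rekey_work(const char *req, ssize_t len,
                             char *reply, size_t reply_len)
{
    ssize_t n = (len < (ssize_t)reply_len) ? len : (ssize_t)reply_len;
    memcpy(reply, req, n);      /* pretend we did something useful */
    return n;
}

/* Returns the parent-side fd to add to the existing event loop,
 * or -1 on error. */
static int spawn_rekey_worker(void)
{
    int sv[2];

    /* SOCK_SEQPACKET keeps message boundaries intact. */
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) < 0) {
        perror("socketpair");
        return -1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        close(sv[0]);
        close(sv[1]);
        return -1;
    }

    if (pid == 0) {
        /* Child: being a fork, it already carries the same SSL and
         * event-handling code.  It just blocks on its socket end,
         * does the expensive work, and writes the result back. */
        close(sv[0]);
        char req[4096], reply[4096];
        ssize_t n;
        while ((n = read(sv[1], req, sizeof(req))) > 0) {
            ssize_t m = do_rekey_work(req, n, reply, sizeof(reply));
            if (m < 0 || write(sv[1], reply, m) < 0)
                break;
        }
        _exit(0);
    }

    /* Parent: close the child's end, register sv[0] with the event
     * loop, and mark the client connection "work pending" until the
     * completed keys come back on it. */
    close(sv[1]);
    return sv[0];
}

The parent then treats readability on that fd like any other event and
resumes the "work pending" connection when the reply arrives.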

If I were to add multi-core support to OpenVPN I would start with the Apache httpd 1.3 or 2.x code base (1.3 is simpler as it does not include Apache's MPM stuff). Httpd + mod_ssl has already solved the issue of accepting multiple client connections and should have run into similar issues with key renegotiation.

I would also opt for function handlers/pointers per connection - that way you could serve both udp+tcp from a single server instance - the client connection entry would then contain pointers to the right handlers for tcp, udp and possibly even tun or tap.
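Something along these lines - purely illustrative, none of these struct
or function names come from the OpenVPN (or httpd) sources:

/* Illustrative only: a per-connection handler table so one server
 * instance can drive tcp, udp and tun/tap entries through the same
 * event loop. */
#include <stddef.h>
#include <sys/types.h>

struct client_conn;

struct conn_ops {
    ssize_t (*read_packet)(struct client_conn *c, void *buf, size_t len);
    ssize_t (*write_packet)(struct client_conn *c, const void *buf, size_t len);
    void    (*close_conn)(struct client_conn *c);
};

struct client_conn {
    int fd;                     /* socket or tun/tap descriptor       */
    const struct conn_ops *ops; /* points at tcp_ops, udp_ops, ...    */
    void *ssl_state;            /* keying material for this client    */
};

/* The event loop never needs to know what kind of entry it is: */
static void handle_readable(struct client_conn *c)
{
    char buf[2048];
    ssize_t n = c->ops->read_packet(c, buf, sizeof(buf));
    if (n <= 0)
        c->ops->close_conn(c);
    /* ... otherwise hand the payload to the crypto/tunnel layer ... */
}

The entry for a tcp client, a udp client or a tun/tap device then only
differs in which ops table it points at.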

JM2CW,

JJK

The OpenVPN implementation is also quite different from your Apache
pre-fork suggestion, where the connection to a web server is closed
after having served a simple request with a limited amount of data.
Agreed - I wouldn't have the child processes accept any connections.
The similarity would only be in managing a pool of worker processes.
But for simplicity consider just one forked child where you hand off
rekeying.  You'd probably really want a pool of connections to
workers, but even one would double the available CPU power and avoid
the complexity of balancing a pool.

Likewise, if you have 100
clients connecting 10 times each, retrieving data in parallel, that is
also a stressful moment for the web server.  This doesn't happen in
OpenVPN, because each client gets a "time slice" before OpenVPN serves
the others again.
Right, but if you keep that same logic, fork the process, and push
the packets that involve a lot of CPU work to a different instance, you
get time slices out of multiple CPUs instead of just one.

Implementing a pre-forked model would actually be far more complex
than the alternative.  And in addition, there is a need for an SSL
state manager, which keeps track of the SSL keying material for each
client, no matter which approach is used.  OpenVPN does already have
such a state manager, but it's fairly simple because it only needs to
process requests one client at a time.
A forked copy would automatically have the same management code...
And you'd have the option of either passing any needed state over the
socket between processes or using explicitly shared memory and some
sort of lock to arbitrate access - which you'd need with threads
anyway.  There would be some extra overhead in passing things over
the sockets, but you might have 2 to 32x the CPU power to do it.
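For the explicitly shared memory variant, the usual pattern would be a
MAP_SHARED mmap() plus a PTHREAD_PROCESS_SHARED mutex - again just a
sketch, and the ssl_state layout below is made up:

/* Sketch: an SSL-state table visible to the parent and every forked
 * worker, with a process-shared mutex arbitrating access - the same
 * locking you'd need with threads anyway. */
#include <pthread.h>
#include <sys/mman.h>
#include <stddef.h>

struct ssl_state {                  /* made-up layout */
    int           client_id;
    unsigned char key_material[64];
};

struct shared_table {
    pthread_mutex_t  lock;
    struct ssl_state entries[1024];
};

/* Call once before fork(); the mapping is inherited by the children. */
static struct shared_table *create_shared_table(void)
{
    struct shared_table *t = mmap(NULL, sizeof(*t),
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (t == MAP_FAILED)
        return NULL;

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&t->lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return t;
}

/* Any process then wraps its reads/updates of t->entries[i] in
 * pthread_mutex_lock(&t->lock) / pthread_mutex_unlock(&t->lock). */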


