On Friday, 09.05.2008 at 13:58 +0200, Gabriel Kerneis wrote:
> On Fri, May 09, 2008 at 01:12:06PM +0200, Gerd Stolpmann wrote:
> > Of course, we did not use multithreading very much. We are relying on
> > multi-processing (both "fork"ed style and separately started programs),
> > and multiplexing (i.e. application-driven micro-threading). I especially
> > like the latter: Doing multiplexing in O'Caml is fun, and a substitute
> > for most applications of multithreading. For example, you want to query
> > multiple remote servers in parallel: Very easy with multiplexing,
> > whereas the multithreaded counterpart would quickly run into scalability
> > problems (threads are heavy-weight, and need a lot of resources).
> 
> Do you have any pointer on multiplexing (or some piece of code you could
> show)? This seems interesting but I can't figure out what it looks like.

For some background information, look here:

- Asynchronous RPC:
  
http://projects.camlcity.org/projects/dl/ocamlnet-2.2.9/doc/html-main/Rpc_intro.html

  Look for the asynchronous examples

- The low-level side is explained here:
  
http://projects.camlcity.org/projects/dl/ocamlnet-2.2.9/doc/html-main/Equeue_intro.html

Here is a code snippet from our production code. It is from a server
that continuously monitors RPC ports:

let check_port esys p =
  (* Checks the port p continuously until mon_until - then the port is deleted
     from the list
   *)
  let (ba,ua) = Lazy.force shmem in
  let rec next_check() =
    let now = Unix.gettimeofday() in
    if now < Int64.to_float p.mon_until then (
      let conn =
        match p.mon_port with
          | `local_port _ ->
              failwith "Monitoring local ports is not supported"
          | `inet4_port a ->
              Rpc_client.Inet(a.inet4_host, a.inet4_port) in
      let rpc_proto =
        match p.mon_protocol with
          | `tcp -> Rpc.Tcp
          | `udp -> Rpc.Udp in
      let client = 
        Capi_clnt.Generic.V1.create_client2
          ~esys
          (`Socket(rpc_proto, conn, Rpc_client.default_socket_config)) in
      Rpc_helpers.set_exn_handler "generic" client;
      Rpc_client.configure client 0 5.0;  (* 5 secs timeout *)
      Capi_clnt.Generic.V1.ping'async
        client
        ()
        (fun get_reply ->
           (* We are called back when the "ping" is replied or times out *)
           let successful = try get_reply(); true with _ -> false in
           Rpc_client.shut_down client;
           p.mon_alive <- successful;
           ba.{ p.mon_shmpos } <- (if successful then 1L else 0L);
           let g = Unixqueue.new_group esys in
           Unixqueue.once esys g 1.0 next_check
        );
    )
    else
      remove_port p
  in
  next_check()

This (single-process) server can watch a large number of ports at the
same time and consumes almost no resources. Note how the loop is
programmed; essentially we have:

let rec next_check() =
   ...
   Capi_clnt.Generic.V1.ping'async
     client
     ()
     (fun get_reply ->
        ...
        Unixqueue.once esys g 1.0 next_check
     )

("once" calls the passed functions once in the future, here after 1
second). Basically, you have a closure with the action in the event
queue, and when the action is done, a new closure is appended to this
queue to schedule the next task. Using closures is important because
this enforces that all stack values are copied to the heap (I view
closures in this context as a means to "heapify" values), so the
recursion is stackless.
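
To make the pattern easier to try out, here is a minimal,
self-contained sketch of the same "re-schedule yourself with once"
loop, stripped of all RPC and shared-memory details. The counter and
the 1-second delay are just for illustration; it only needs the
"equeue" package from Ocamlnet:

let () =
  let esys = Unixqueue.create_unix_event_system () in
  let remaining = ref 5 in
  let rec tick () =
    if !remaining > 0 then begin
      Printf.printf "tick, %d checks left\n%!" !remaining;
      decr remaining;
      (* Append a new closure to the event queue; the recursion lives
         on the heap, not on the stack *)
      let g = Unixqueue.new_group esys in
      Unixqueue.once esys g 1.0 tick
    end
  in
  tick ();
  (* Run the event loop until no events and no handlers are left *)
  Unixqueue.run esys

The loop terminates simply by not re-scheduling itself; in the
monitoring server above, remove_port plays that role.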

This server is also interesting because we actually use shared memory
to speed up the communication path between client and server (look
at the "ba.{ p.mon_shmpos } <- ..." line). Here, the client is the
program that wants to know whether a port is alive. This is an
optimization for the most frequent case, but it only works if client
and server reside on the same node. (Actually, we run this server on
every node, so this is always the case.) In practice, it is no problem
to combine a shared-memory style with multi-processing.
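
For completeness, here is a hypothetical sketch of how such a shared
int64 array can be set up by memory-mapping a file. The file name,
slot count and the fact that only "ba" is returned are assumptions for
illustration; the actual shmem setup in our code differs in details:

(* Hypothetical sketch: server and client map the same file and thus
   see the same int64 array. "/dev/shm/portmon" and n_slots are
   illustrative only. *)
let shmem = lazy (
  let fd =
    Unix.openfile "/dev/shm/portmon" [Unix.O_RDWR; Unix.O_CREAT] 0o600 in
  let n_slots = 1024 in
  let ba =
    Bigarray.Array1.map_file
      fd Bigarray.int64 Bigarray.c_layout true n_slots in
  (* The mapping stays valid after the descriptor is closed *)
  Unix.close fd;
  ba
)

The server writes liveness flags with ba.{pos} <- 1L, and a client on
the same node maps the same file and reads ba.{pos} without any RPC
round trip.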

Gerd
-- 
------------------------------------------------------------
Gerd Stolpmann * Viktoriastr. 45 * 64293 Darmstadt * Germany 
[EMAIL PROTECTED]          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714
------------------------------------------------------------

