Here is more information on our progress on this issue. We can discuss
it in further detail at today's chat if desired.

(For purposes of this discussion, it should be assumed that we do not
have the option of reducing or distributing the size/number of procs
that our server must have access to. That said, the option is
continually evaluated :) )

We are experimenting with the introduction of an optional 'lazy proc
definition' capability. This would be a new configuration parameter for
AOLserver 4.0. At the moment the parameter is at the process level;
time permitting, it may be moved to the virtual server level.

The default value is false. If the value is false, server behavior is
exactly as it was prior to the introduction of this value.

ns/parameters
     ns_param lazyprocdef  true

When AOLserver initializes, it creates its 'init script' which contains
all the variables, procs, etc. which will be loaded into any tcl
interpreter. On one of our servers, where over 9900 procs are sourced,
this init script is over 6.5M in size. The result is that every thread
requires the allocation of this memory (and then some), plus the
evaluation of all that tcl. This led to lengthy thread initialization
times as well as memory allocation lock contention (which further
degraded performance).

When 'lazyprocdef' is set to true, we do not put the procs in the init
script (all other tcl commands, e.g. variable definitions, will still be
there). Instead, we stash the procs in a centralized store. When tcl
tries to execute a command that it doesn't know about, it calls the tcl
'unknown' command. We have wrapped the tcl 'unknown' command with our
own, which first looks for the command in our central store and loads it
(if it isn't there, the tcl 'unknown' command is processed as before).
This way threads only load the procs they actually use, improving
performance and reducing memory consumption.
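The wrapper described above could look roughly like this. This is a
minimal sketch, not the actual nsd/init.tcl code; the "procdefs" nsv
array (mapping proc names to their definitions) and the rename target
are hypothetical names for illustration, and quoting subtleties of
re-dispatching the call are glossed over:

```tcl
# Keep the original 'unknown' around under a new name.
rename unknown _original_unknown

proc unknown {args} {
    set cmd [lindex $args 0]
    # First, look for the command in the central store.
    if {[nsv_exists procdefs $cmd]} {
        # Define the proc in this interp, then re-dispatch the call.
        eval [nsv_get procdefs $cmd]
        return [uplevel 1 $args]
    }
    # Otherwise, fall back to tcl's normal unknown processing.
    return [uplevel 1 [linsert $args 0 _original_unknown]]
}
```

Once loaded, the proc is defined in that interp, so subsequent calls
bypass 'unknown' entirely and pay no further cost.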

The results have been dramatic. Thread initialization times dropped from
4-5 seconds to ~100 milliseconds. The overall process size is about one
third of what it was.

The effect on request times is not yet known (but we will be looking
very closely at that). In theory, there could be a degradation on the
first request to a connection thread (vs. a request to a connection
thread that was 'warmed up' at server startup time), as well as lock
contention on the central store.

The hardest part is dealing with namespaces, the 'info' commands, and
ns_eval. At this point, we *think* we have them addressed.

For 'info', we wrap the info command. For 'info commands' or 'info
procs', we merge the results of our stored procs with the results of tcl
'info'. For subcommands against a specific proc (e.g. 'info args'), we
load the proc and then just call tcl 'info'.
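A sketch of that 'info' wrapper, using the same hypothetical "procdefs"
nsv array as above (again, illustrative names, and the optional glob
pattern on 'info procs'/'info commands' is ignored for brevity):

```tcl
rename info _tcl_info

proc info {option args} {
    # 'info procs' / 'info commands': merge tcl's answer with the
    # names held in the central store.
    if {$option eq "procs" || $option eq "commands"} {
        set names [eval [linsert $args 0 _tcl_info $option]]
        foreach p [nsv_array names procdefs] {
            if {[lsearch -exact $names $p] < 0} {
                lappend names $p
            }
        }
        return $names
    }
    # Subcommands against a specific proc (args, body, default): load
    # the proc from the store first if this interp hasn't seen it yet.
    if {[lsearch -exact {args body default} $option] >= 0} {
        set p [lindex $args 0]
        if {[_tcl_info procs $p] eq "" && [nsv_exists procdefs $p]} {
            eval [nsv_get procdefs $p]
        }
    }
    return [eval [linsert $args 0 _tcl_info $option]]
}
```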

To handle ns_eval (which is indeed evil), we must create a copy of the
nsv arrays for each ictl epoch. To do proper garbage collection we need
to keep reference counts so we can dump the older copies once they are
no longer referenced.
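The epoch bookkeeping might be sketched as follows. Everything here is
hypothetical (the "lazy" namespace, the per-epoch "procdefs_$epoch"
arrays, and the refcount array are illustrative, and whether nsv_unset
accepts a bare array name varies by AOLserver version):

```tcl
namespace eval lazy {}

# A thread entering an epoch takes a reference on that epoch's store.
proc lazy::retain {epoch} {
    nsv_incr procdefs_refs $epoch
}

# When the last thread referencing an older epoch drops it, the
# epoch's copy of the proc store can be garbage collected.
proc lazy::release {epoch} {
    if {[nsv_incr procdefs_refs $epoch -1] <= 0} {
        nsv_unset procdefs_$epoch
        nsv_unset procdefs_refs $epoch
    }
}
```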

At this time, we've got the whole thing coded in nsd/init.tcl using nsv
arrays as the proc store. Future optimizations could move some of this
into more efficient C implementations.

If all goes well, we hope to have this ready for incorporation into a
beta12 release in the next few days...

Cheers,
-Elizabeth

Elizabeth Thomas wrote:

 > We are currently pursuing a very encouraging approach of adding an
 > optional 'lazy proc definition' capability, capitalizing on the
 > 'unknown' processing of tcl. (Thanks to Jeff Hobbs for putting us on
 > this path). Since most of our threads use a relatively small subset of
 > all available procs, we hope to achieve significant performance and
 > memory consumption wins by only loading procs in the interpreter that are
 > actually needed.
 >
 > More details to come early next week.
 >
 > -Elizabeth
 >
 > Dossy wrote:
 >
 > > On 2003.08.13, Elizabeth Thomas <[EMAIL PROTECTED]> wrote:
 > > > We have a server that loads in a great deal of tcl - so much so that the
 > > > resulting init script is 6.6M in size and contains over 9900 procs.
 > > > [...]
 > > >
 > > > 1. Is there any way to reduce the time to initialize an interp?
 > > > (besides the obvious, but not necessarily feasible, option of reducing
 > > > the procs that are loaded). Has anyone seen similar behavior and have
 > > > some insight into it?
 > >
 > > Is there any chance to do further profiling?  Where are we losing most
 > > of our time?
 > >
 > > I imagine it has a lot to do with allocating 6.6M of memory for the init
 > > script, filling it with the script, then telling Tcl to go parse and
 > > execute it.
 > >
 > > I think the biggest win would be to try and figure out how to get Tcl to
 > > bytecode compile the init script, then push that bytecode from the
 > > master interp into the new slave interps as they get created.  Cut out
 > > the entire parse/bytecode compile steps.  Perhaps someone who knows and
 > > understands those intricate details of Tcl 8.4 can speak up?
 > >
 > > -- Dossy
 > >
 > > --
 > > Dossy Shiobara                       mail: [EMAIL PROTECTED]
 > > Panoptic Computer Network             web: http://www.panoptic.com/
 > >   "He realized the fastest way to change is to laugh at your own
 > >     folly -- then you can let go and quickly move on." (p. 70)
 > >
 > >
 > > --
 > > AOLserver - http://www.aolserver.com/
 > >
 > > To Remove yourself from this list, simply send an email to
 > > <[EMAIL PROTECTED]> with the
 > > body of "SIGNOFF AOLSERVER" in the email message. You can leave the
 > > Subject: field of your email blank.
 >
 >

