This is an automated email from the ASF dual-hosted git repository.

mxmanghi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/tcl-rivet.git


The following commit(s) were added to refs/heads/master by this push:
     new 63572f9  Documenting new driver Tdbc for DIO. Reworking the lazy 
bridge documentation. Prepare RC1 for 3.2.6
63572f9 is described below

commit 63572f9393c12de769eaec2c2f946d3e4f4470df
Author: Massimo Manghi <mxman...@apache.org>
AuthorDate: Sun Nov 3 01:21:02 2024 +0100

    Documenting new driver Tdbc for DIO. Reworking the lazy bridge 
documentation. Prepare RC1 for 3.2.6
---
 ChangeLog                         |   2 +
 doc/xml/dio.xml                   |   9 +-
 doc/xml/lazybridge.xml            | 257 +++++++++++++++++++-------------------
 src/mod_rivet_ng/rivet_lazy_mpm.c |   7 --
 4 files changed, 135 insertions(+), 140 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 0b3fb9f..9654fb3 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,7 @@
 2024-02-02 Massimo Manghi <mxman...@apache.org>
     * doc/xml/dio.xml: Documenting the support of Tdbc series of DBMS 
connectors
+    * doc/xml/lazybridge.xml: update with latest version of code
+    * src/mod_rivet_ng/rivet_lazy_mpm.c: cleaned code by removing lines that had been commented out
 
 2024-09-09 Massimo Manghi <mxman...@apache.org>
        * rivet/packages/dio: merging branch tdbc with DIO 1.2 and new dio 
connectors
diff --git a/doc/xml/dio.xml b/doc/xml/dio.xml
index 14165bc..8220a42 100644
--- a/doc/xml/dio.xml
+++ b/doc/xml/dio.xml
@@ -18,6 +18,8 @@
                    <arg><replaceable>option</replaceable></arg>
                    <arg>...</arg>
                  </group>
+               </cmdsynopsis>
+               <cmdsynopsis>
                  <command>::DIO::handle Tdbc</command>
                  <arg choice="plain"><replaceable>interface</replaceable></arg>
                  <arg choice="opt"><replaceable>objectName</replaceable></arg>
@@ -48,14 +50,15 @@
       <option>Oracle</option> and <option>Sqlite</option>.
       Starting with version 1.2 DIO also supports the <ulink url="https://core.tcl-lang.org/tdbc">TDBC</ulink>
       interface through the <option>Tdbc</option> interface.
-      In this form the command requires a further argument for the
-      TDBC supported DBMS driver <option>mysql</option>,
+      In this form the command requires a further argument selecting one of
+      the supported TDBC drivers:
+      <option>mysql</option> (which also supports MariaDB),
       <option>odbc</option> (which also provides support for <option>Oracle</option>),
       <option>postgresql</option> and <option>sqlite</option>
        </para>
        <para>
          If <option><replaceable>objectName</replaceable></option> is
-         specified, DIO creates an object of that name.  If there is
+         specified, DIO creates an object of that name. If there is
          no <option><replaceable>objectName</replaceable></option>
          given, DIO will automatically generate a unique object ID
        </para>
diff --git a/doc/xml/lazybridge.xml b/doc/xml/lazybridge.xml
index 4b95f0b..5360012 100644
--- a/doc/xml/lazybridge.xml
+++ b/doc/xml/lazybridge.xml
@@ -1,17 +1,17 @@
 <section id="lazybridge">
     <title>Example: the <quote>Lazy</quote> bridge</title>
        <section>
-               <title>The rationale of threaded bridges</title>
-       <para>          
-               The 'bridge' concept was introduced to cope with the ability of 
-               the Apache HTTP web server to adopt different multiprocessing 
-               models by loading one of the available MPMs (Multi Processing 
Modules). 
-                       A bridge's task is to let mod_rivet fit the selected 
multiprocessing
-                       model in the first place. Still separating mod_rivet 
core
-                       functions from the MPM machinery provided also a 
solution for
-                       implementing a flexible and extensible design that 
enables 
-                       a programmer to develop alternative approaches to 
workload and 
-                       resource management. 
+       <title>The rationale of threaded bridges</title>
+    <para>
+        The 'bridge' concept was introduced to deal with the ability of
+        the Apache HTTP web server to adopt different multiprocessing
+        models by loading one of the available MPMs (Multi Processing Modules).
+        A bridge's task is to let mod_rivet fit the selected multiprocessing
+        model in the first place. Moreover, separating the mod_rivet core
+        functions from the MPM machinery also provided a solution for
+        implementing a flexible and extensible design that enables
+        a programmer to develop alternative approaches to workload and
+        resource management.
        </para>
        <para>
                The Apache HTTP web server demands its modules to
@@ -27,17 +27,15 @@
        </para>
        </section>
        <section>
-        <title>Lazy bridge data structures</title>
+       <title>Lazy bridge data structures</title>
     <para>
-          The lazy bridge was initially developed to outline the basic tasks
-       carried out by each function making a rivet MPM bridge. 
-       The lazy bridge attempts to be minimalist
-       but it's nearly fully functional, only a few configuration
-       directives (SeparateVirtualInterps and SeparateChannel)
-       are ignored because fundamentally incompatible. 
-       The bridge is experimental but perfectly fit for many applications,
-       for example it's good on development machines where server restarts
-       are frequent. 
+       The lazy bridge was initially developed to outline the basic tasks
+        carried out by each function making up a rivet MPM bridge. The lazy
+        bridge attempts to be minimalist but it's nearly fully functional;
+        only a few configuration directives (SeparateVirtualInterps and
+        SeparateChannel) are ignored because they are fundamentally
+        incompatible with its design. The bridge is experimental but a
+        perfect fit for many applications; for example it's good on
+        development machines where server restarts are frequent.
     </para>
 
     <para>
@@ -71,8 +69,8 @@ typedef struct vhost_iface {
 } vhost;</programlisting>
 
        <para>
-               A pointer to this data structure array is stored in the bridge 
status which a basic
-               structure that likely every bridge has to create.
+               A pointer to this data structure array is stored in the bridge
+               status, which is a basic structure that every bridge has to create.
        </para>
        <programlisting>/* Lazy bridge internal status data */
 
@@ -86,7 +84,7 @@ typedef struct mpm_bridge_status {
 
        <para>
                The lazy bridge also extends the thread private data structure 
with the
-               data concerning the Tcl intepreter, its configuration and 
+               data concerning the Tcl interpreter
        </para>
 
        <programlisting>/* lazy bridge thread private data extension */
@@ -94,9 +92,6 @@ typedef struct mpm_bridge_status {
 typedef struct mpm_bridge_specific {
     rivet_thread_interp*  interp;           /* thread Tcl interpreter object   
     */
     int                   keep_going;       /* thread loop controlling 
variable     */
-                                            /* the request_rec and 
TclWebRequest    *
-                                             * are copied here to be passed to 
a    *
-                                             * channel                         
     */
 } mpm_bridge_specific;</programlisting>
 
        <para>
@@ -104,8 +99,7 @@ typedef struct mpm_bridge_specific {
                and store its pointer in <command>module_globals->mpm</command>.
                This is usually done at the very beginning of the child init 
script function pointed by 
                <command>mpm_child_init</command> in the 
<command>rivet_bridge_table</command> structure.
-               For the lazy bridge this field in the jump table points to 
<command>Lazy_MPM_ChildInit</command>
-               function
+               For the lazy bridge this field points to the <command>LazyBridge_ChildInit</command> function
        </para>
        <programlisting>/*
  * -- LazyBridge_ChildInit
@@ -174,18 +168,18 @@ void LazyBridge_ChildInit (apr_pool_t* pool, server_rec* 
server)
        <para>
                Most fields in the <command>mpm_bridge_status</command> are 
meant to deal
                with the child exit process. Rivet supersedes the Tcl core's 
exit function
-               with a <command>::rivet::exit</command> function and it does so 
in order to curb the effects
-               of the core function that would force a child process to 
immediately exit. 
-               This could have unwanted side effects, like skipping the 
execution of important
-               code dedicated to release locks or remove files. For threaded 
MPMs the abrupt
-               child process termination could be even more disruptive as all 
the threads
-               will be deleted without warning.        
+               with the <command>::rivet::exit</command> command, and it does so in order to curb
+        the side effects of the core function Tcl_Exit, which would force a whole child process
+        to exit immediately. This could have unwanted consequences in any
+        environment, such as skipping the execution of code dedicated to releasing
+        locks or removing files. For threaded MPMs the abrupt child process termination
+        could be even more disruptive as all threads will be deleted right away.
        </para>
        <para>
                The <command>::rivet::exit</command> implementation calls the 
function pointed by
                <command>mpm_exit_handler</command> which is bridge specific. 
Its main duty
                is to take the proper action in order to release resources and 
force the
-               bridge controlled threads to exit.  
+               bridge controlled threads to exit.
        </para>
        <note>
                Nonetheless the <command>exit</command> command should be 
avoided in ordinary mod_rivet
@@ -197,7 +191,7 @@ void LazyBridge_ChildInit (apr_pool_t* pool, server_rec* 
server)
                <command>exit</command> in a sort of special 
<command>::rivet::abort_page</command>
                implementation whose eventual action is to call the 
<command>Tcl_Exit</command>
                library call. See the <command><xref 
linkend="exit">::rivet::exit</xref></command>
-               command for further explanations.
+               command code for further explanations.
        </note>
        <para>
                Both the worker bridge and lazy bridge 
@@ -249,38 +243,39 @@ typedef struct lazy_tcl_worker {
 		created. The code in the <command>LazyBridge_Request</command> function is easy to understand
 		and shows how this works
        </para>
-       <programlisting>/* -- Lazy_MPM_Request
+       <programlisting>/* -- LazyBridge_Request
  *
- * The lazy bridge HTTP request function. This function 
+ * The lazy bridge HTTP request function. This function
  * stores the request_rec pointer into the lazy_tcl_worker
  * structure which is used to communicate with a worker thread.
  * Then the array of idle threads is checked and if empty
  * a new thread is created by calling create_worker
  */
 
-int Lazy_MPM_Request (request_rec* r,rivet_req_ctype ctype)
+int LazyBridge_Request (request_rec* r,rivet_req_ctype ctype)
 {
     lazy_tcl_worker*    w;
     int                 ap_sts;
-    rivet_server_conf*  conf = 
RIVET_SERVER_CONF(r-&gt;server-&gt;module_config);
+    rivet_server_conf*  conf = RIVET_SERVER_CONF(r->server->module_config);
     apr_array_header_t* array;
     apr_thread_mutex_t* mutex;
 
-    mutex = module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].mutex;
-    array = module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].array;
+    mutex = module_globals->mpm->vhosts[conf->idx].mutex;
+    array = module_globals->mpm->vhosts[conf->idx].array;
     apr_thread_mutex_lock(mutex);
 
-   /* This request may have come while the child process was 
-    * shutting down. We cannot run the risk that incoming requests 
-    * may hang the child process by keeping its threads busy, 
-    * so we simply return an HTTP_INTERNAL_SERVER_ERROR. 
-    * This is hideous and explains why the 'exit' commands must 
-    * be avoided at any costs when programming with mod_rivet
-    */
-
-    if (module_globals-&gt;mpm-&gt;server_shutdown == 1) {
-        ap_log_rerror(APLOG_MARK, APLOG_ERR, APR_EGENERAL, r,
-                      MODNAME &quot;: http request aborted during child 
process shutdown&quot;);
+    /* This request may have come while the child process was
+     * shutting down. We cannot run the risk that incoming requests
+     * may hang the child process by keeping its threads busy,
+     * so we simply return an HTTP_INTERNAL_SERVER_ERROR.
+     * This is hideous and explains why the 'exit' commands must
+     * be avoided at any costs when programming with mod_rivet
+     */
+
+    if (module_globals->mpm->server_shutdown == 1)
+    {
+        ap_log_rerror(APLOG_MARK,APLOG_ERR,APR_EGENERAL,r,
+                      MODNAME ": http request aborted during child process 
shutdown");
         apr_thread_mutex_unlock(mutex);
         return HTTP_INTERNAL_SERVER_ERROR;
     }
@@ -289,8 +284,7 @@ int Lazy_MPM_Request (request_rec* r,rivet_req_ctype ctype)
 
     if (apr_is_empty_array(array))
     {
-        w = create_worker(module_globals-&gt;pool,r-&gt;server);
-        (module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].threads_count)++; 
+        w = create_worker(module_globals->pool,r->server);
     }
     else
     {
@@ -298,25 +292,27 @@ int Lazy_MPM_Request (request_rec* r,rivet_req_ctype 
ctype)
     }
 
     apr_thread_mutex_unlock(mutex);
-    
-    apr_thread_mutex_lock(w-&gt;mutex);
-    w-&gt;r        = r;
-    w-&gt;ctype    = ctype;
-    w-&gt;status   = init;
-    w-&gt;conf     = conf;
-    apr_thread_cond_signal(w-&gt;condition);
+
+    /* Locking the thread descriptor structure mutex */
+
+    apr_thread_mutex_lock(w->mutex);
+    w->r        = r;
+    w->ctype    = ctype;
+    w->status   = init;
+    w->conf     = conf;
+    apr_thread_cond_signal(w->condition);
 
     /* we wait for the Tcl worker thread to finish its job */
 
-    while (w-&gt;status != done) {
-        apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
-    } 
-    ap_sts = w-&gt;ap_sts;
+    while (w->status != done) {
+        apr_thread_cond_wait(w->condition,w->mutex);
+    }
+    ap_sts = w->ap_sts;
 
-    w-&gt;status = idle;
-    w-&gt;r      = NULL;
-    apr_thread_cond_signal(w-&gt;condition);
-    apr_thread_mutex_unlock(w-&gt;mutex);
+    w->status = idle;
+    w->r      = NULL;
+    apr_thread_cond_signal(w->condition);
+    apr_thread_mutex_unlock(w->mutex);
 
     return ap_sts;
 }</programlisting>     
@@ -330,54 +326,53 @@ int Lazy_MPM_Request (request_rec* r,rivet_req_ctype 
ctype)
        <programlisting>/*
  * -- request_processor
  *
- * The lazy bridge worker thread. This thread initialized its control data and 
+ * The lazy bridge worker thread. This thread initializes its control data and
  * prepares to serve requests addressed to the virtual host which is meant to 
work
- * as a content generator. Virtual host server data are stored in the 
+ * as a content generator. Virtual host server data are stored in the
  * lazy_tcl_worker structure stored in the generic pointer argument 'data'
- * 
+ *
  */
 
 static void* APR_THREAD_FUNC request_processor (apr_thread_t *thd, void *data)
 {
-    lazy_tcl_worker*        w = (lazy_tcl_worker*) data; 
+    lazy_tcl_worker*        w = (lazy_tcl_worker*) data;
     rivet_thread_private*   private;
     int                     idx;
     rivet_server_conf*      rsc;
 
     /* The server configuration */
 
-    rsc = RIVET_SERVER_CONF(w-&gt;server-&gt;module_config);
+    rsc = RIVET_SERVER_CONF(w->server->module_config);
 
     /* Rivet_ExecutionThreadInit creates and returns the thread private data. 
*/
 
     private = Rivet_ExecutionThreadInit();
 
-    /* A bridge creates and stores in private-&gt;ext its own thread private
-     * data. The lazy bridge is no exception. We just need a flag controlling 
+    /* A bridge creates and stores in private->ext its own thread private
+     * data. The lazy bridge is no exception. We just need a flag controlling
      * the execution and an interpreter control structure */
 
-    private-&gt;ext = 
apr_pcalloc(private-&gt;pool,sizeof(mpm_bridge_specific));
-    private-&gt;ext-&gt;keep_going = 1;
-    //private-&gt;ext-&gt;interp = 
Rivet_NewVHostInterp(private-&gt;pool,w-&gt;server);
-    
RIVET_POKE_INTERP(private,rsc,Rivet_NewVHostInterp(private-&gt;pool,rsc-&gt;default_cache_size));
-    private-&gt;ext-&gt;interp-&gt;channel = private-&gt;channel;
+    private->ext = apr_pcalloc(private->pool,sizeof(mpm_bridge_specific));
+    private->ext->keep_going = true;
+    RIVET_POKE_INTERP(private,rsc,Rivet_NewVHostInterp(private->pool,rsc->default_cache_size));
+    private->ext->interp->channel = private->channel;
 
-    /* The worker thread can respond to a single request at a time therefore 
+    /* The worker thread can respond to a single request at a time therefore
        must handle and register its own Rivet channel */
 
-    
Tcl_RegisterChannel(private-&gt;ext-&gt;interp-&gt;interp,*private-&gt;channel);
+    Tcl_RegisterChannel(private->ext->interp->interp,*private->channel);
 
     /* From the rivet_server_conf structure we determine what scripts we
      * are using to serve requests */
 
-    private-&gt;ext-&gt;interp-&gt;scripts = 
-            Rivet_RunningScripts 
(private-&gt;pool,private-&gt;ext-&gt;interp-&gt;scripts,rsc);
+    private->ext->interp->scripts =
+            Rivet_RunningScripts (private->pool,private->ext->interp->scripts,rsc);
 
     /* This is the standard Tcl interpreter initialization */
 
-    
Rivet_PerInterpInit(private-&gt;ext-&gt;interp,private,w-&gt;server,private-&gt;pool);
-    
-    /* The child initialization is fired. Beware of the terminologic 
+    Rivet_PerInterpInit(private->ext->interp,private,w->server,private->pool);
+
+    /* The child initialization is fired. Beware the terminological
      * trap: we inherited from fork capable systems the term 'child'
      * meaning 'child process'. In this case the child init actually
      * is a worker thread initialization, because in a threaded module
@@ -386,62 +381,60 @@ static void* APR_THREAD_FUNC request_processor 
(apr_thread_t *thd, void *data)
 
     Lazy_RunConfScript(private,w,child_init);
 
-    idx = w-&gt;conf-&gt;idx;
+    idx = w->conf->idx;
 
-    /* After the thread has run the configuration script we 
+    /* After the thread has run the configuration script we
        increment the threads counter */
 
-    apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
-    (module_globals-&gt;mpm-&gt;vhosts[idx].threads_count)++;
-    apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
+    apr_thread_mutex_lock(module_globals->mpm->vhosts[idx].mutex);
+    (module_globals->mpm->vhosts[idx].threads_count)++;
+    apr_thread_mutex_unlock(module_globals->mpm->vhosts[idx].mutex);
 
-    /* The thread is now set up to serve request within the the 
-     * do...while loop controlled by private-&gt;keep_going  */
+    /* The thread is now set up to serve requests within the
+     * do...while loop controlled by private->ext->keep_going */
 
-    apr_thread_mutex_lock(w-&gt;mutex);
-    do 
+    apr_thread_mutex_lock(w->mutex);
+    do
     {
-        while ((w-&gt;status != init) &amp;&amp; (w-&gt;status != 
thread_exit)) {
-            apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
-        } 
-        if (w-&gt;status == thread_exit) {
-            private-&gt;ext-&gt;keep_going = 0;
+        while ((w->status != init) &amp;&amp; (w->status != thread_exit)) {
+            apr_thread_cond_wait(w->condition,w->mutex);
+        }
+        if (w->status == thread_exit) {
+            private->ext->keep_going = false;
             continue;
         }
 
-        w-&gt;status = processing;
+        w->status = processing;
 
         /* Content generation */
 
-        private-&gt;req_cnt++;
-        private-&gt;ctype = w-&gt;ctype;
-        private-&gt;r = w-&gt;r;
+        private->req_cnt++;
+        private->ctype = w->ctype;
+        private->r = w->r;
 
-        w-&gt;ap_sts = Rivet_SendContent(private);
+        w->ap_sts = Rivet_SendContent(private);
 
-        // if (module_globals-&gt;mpm-&gt;server_shutdown) continue;
+        w->status = done;
+        apr_thread_cond_signal(w->condition);
+        while (w->status == done) {
+            apr_thread_cond_wait(w->condition,w->mutex);
+        }
 
-        w-&gt;status = done;
-        apr_thread_cond_signal(w-&gt;condition);
-        while (w-&gt;status == done) {
-            apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
-        } 
- 
         /* rescheduling itself in the array of idle threads */
-       
-        apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
-        *(lazy_tcl_worker **) 
apr_array_push(module_globals-&gt;mpm-&gt;vhosts[idx].array) = w;
-        apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
 
-    } while (private-&gt;ext-&gt;keep_going);
-    apr_thread_mutex_unlock(w-&gt;mutex);
-    
+        apr_thread_mutex_lock(module_globals->mpm->vhosts[idx].mutex);
+        *(lazy_tcl_worker **) apr_array_push(module_globals->mpm->vhosts[idx].array) = w;
+        apr_thread_mutex_unlock(module_globals->mpm->vhosts[idx].mutex);
+
+    } while (private->ext->keep_going);
+    apr_thread_mutex_unlock(w->mutex);
+
     Lazy_RunConfScript(private,w,child_exit);
-    ap_log_error(APLOG_MARK,APLOG_DEBUG,APR_SUCCESS,w-&gt;server,"processor 
thread orderly exit");
+    ap_log_error(APLOG_MARK,APLOG_DEBUG,APR_SUCCESS,w->server,"processor thread orderly exit");
 
-    apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
-    (module_globals-&gt;mpm-&gt;vhosts[idx].threads_count)--;
-    apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
+    apr_thread_mutex_lock(module_globals->mpm->vhosts[idx].mutex);
+    (module_globals->mpm->vhosts[idx].threads_count)--;
+    apr_thread_mutex_unlock(module_globals->mpm->vhosts[idx].mutex);
 
     apr_thread_exit(thd,APR_SUCCESS);
     return NULL;
@@ -452,11 +445,15 @@ static void* APR_THREAD_FUNC request_processor 
(apr_thread_t *thd, void *data)
                request, has a straightforward task to do since by design each 
thread has
                one interpreter
        </para>
-       <programlisting>rivet_thread_interp* 
Lazy_MPM_Interp(rivet_thread_private *private,
-                                     rivet_server_conf* conf)
+       <programlisting>rivet_thread_interp* LazyBridge_Interp 
(rivet_thread_private* private,
+                                      rivet_server_conf*    conf,
+                                      rivet_thread_interp*  interp)
 {
+    if (interp != NULL) { private->ext->interp = interp; }
+
     return private->ext->interp;
-}</programlisting>
+}
+</programlisting>
        <para>
 		As already pointed out,
 		running this bridge you get separate virtual interpreters and separate channels by default
diff --git a/src/mod_rivet_ng/rivet_lazy_mpm.c 
b/src/mod_rivet_ng/rivet_lazy_mpm.c
index 0c0a7ed..da2f130 100644
--- a/src/mod_rivet_ng/rivet_lazy_mpm.c
+++ b/src/mod_rivet_ng/rivet_lazy_mpm.c
@@ -83,9 +83,6 @@ typedef struct mpm_bridge_status {
 typedef struct mpm_bridge_specific {
     rivet_thread_interp*  interp;           /* thread Tcl interpreter object   
     */
     bool                  keep_going;       /* thread loop controlling 
variable     */
-                                            /* the request_rec and 
TclWebRequest    *
-                                             * are copied here to be passed to 
a    *
-                                             * channel                         
     */
 } mpm_bridge_specific;
 
 enum {
@@ -177,7 +174,6 @@ static void* APR_THREAD_FUNC request_processor 
(apr_thread_t *thd, void *data)
 
     private->ext = apr_pcalloc(private->pool,sizeof(mpm_bridge_specific));
     private->ext->keep_going = true;
-    //private->ext->interp = Rivet_NewVHostInterp(private->pool,w->server);
     
RIVET_POKE_INTERP(private,rsc,Rivet_NewVHostInterp(private->pool,rsc->default_cache_size));
     private->ext->interp->channel = private->channel;
 
@@ -238,8 +234,6 @@ static void* APR_THREAD_FUNC request_processor 
(apr_thread_t *thd, void *data)
 
         w->ap_sts = Rivet_SendContent(private);
 
-        // if (module_globals->mpm->server_shutdown) continue;
-
         w->status = done;
         apr_thread_cond_signal(w->condition);
         while (w->status == done) {
@@ -395,7 +389,6 @@ int LazyBridge_Request (request_rec* r,rivet_req_ctype 
ctype)
     if (apr_is_empty_array(array))
     {
         w = create_worker(module_globals->pool,r->server);
-        //(module_globals->mpm->vhosts[conf->idx].threads_count)++;
     }
     else
     {


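The dispatcher/worker handshake that LazyBridge_Request and request_processor implement with APR mutexes and condition variables can be illustrated in miniature. The sketch below is not mod_rivet code: tcl_worker, dispatch_request and run_demo are hypothetical stand-ins, the APR primitives are replaced by their plain POSIX equivalents, and "content generation" is reduced to echoing an integer status.

```c
#include <pthread.h>
#include <stdbool.h>

typedef enum { IDLE, INIT, PROCESSING, DONE, THREAD_EXIT } worker_status;

/* Miniature stand-in for lazy_tcl_worker: one mutex/condition pair
 * guarding a status field shared by dispatcher and worker */
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  condition;
    worker_status   status;
    int             request;   /* stands in for request_rec*          */
    int             ap_sts;    /* status code returned by the worker  */
} tcl_worker;

static void* worker_thread (void* data)
{
    tcl_worker* w = (tcl_worker*) data;
    bool keep_going = true;

    pthread_mutex_lock(&w->mutex);
    do {
        /* wait until the dispatcher hands over a request or asks us to exit */
        while ((w->status != INIT) && (w->status != THREAD_EXIT)) {
            pthread_cond_wait(&w->condition, &w->mutex);
        }
        if (w->status == THREAD_EXIT) { keep_going = false; continue; }

        w->status = PROCESSING;
        w->ap_sts = w->request;          /* "content generation" placeholder */

        /* signal completion, then wait until the dispatcher collects it */
        w->status = DONE;
        pthread_cond_signal(&w->condition);
        while (w->status == DONE) {
            pthread_cond_wait(&w->condition, &w->mutex);
        }
    } while (keep_going);
    pthread_mutex_unlock(&w->mutex);
    return NULL;
}

/* Mimics LazyBridge_Request: hand one request to the worker, wait for
 * 'DONE', read the status and put the worker back to IDLE */
static int dispatch_request (tcl_worker* w, int request)
{
    int ap_sts;

    pthread_mutex_lock(&w->mutex);
    w->request = request;
    w->status  = INIT;
    pthread_cond_signal(&w->condition);
    while (w->status != DONE) {
        pthread_cond_wait(&w->condition, &w->mutex);
    }
    ap_sts = w->ap_sts;
    w->status = IDLE;
    pthread_cond_signal(&w->condition);
    pthread_mutex_unlock(&w->mutex);
    return ap_sts;
}

int run_demo (int request)
{
    tcl_worker w;
    pthread_t  tid;
    int        sts;

    pthread_mutex_init(&w.mutex, NULL);
    pthread_cond_init(&w.condition, NULL);
    w.status = IDLE;
    pthread_create(&tid, NULL, worker_thread, &w);

    sts = dispatch_request(&w, request);

    /* tell the worker to exit, as the bridge does on child shutdown */
    pthread_mutex_lock(&w.mutex);
    w.status = THREAD_EXIT;
    pthread_cond_signal(&w.condition);
    pthread_mutex_unlock(&w.mutex);
    pthread_join(tid, NULL);

    pthread_cond_destroy(&w.condition);
    pthread_mutex_destroy(&w.mutex);
    return sts;
}
```

The key property, shared with the lazy bridge code above, is that both sides re-check the status predicate in a loop around the condition wait, so a signal fired before the peer is actually waiting is never lost.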