I don't know if this is the right list, but I didn't find a better one for
this problem.

At the moment I'm developing an application server with different services
in C++ on Linux, Windows, and Solaris.

The server admin can implement his own service functions by writing and
configuring Perl scripts that use XS functions to interact with the server
(XML DOM, calling other service functions, logging, etc.). This works
great, but there are memory leaks, and sometimes the server crashes in the
Perl interpreter depending on the modules used (I guess a multithreading
issue).

So I decided to implement a dedicated Perl CORBA server for running the
scripts.

Each Perl script is parsed and stored in its own interpreter instance,
kept in an STL map.

To make them run faster, I decided to parse the scripts at startup of the
Perl server.

To make it stable, I fork the server process.
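Roughly, the relevant part looks like the following (a simplified sketch,
not the real code: error handling, the CORBA layer, and the actual XS
bindings are left out; xs_init is the stub generated by ExtUtils::Embed,
and running several interpreters in one process requires a perl built with
-DMULTIPLICITY):

#include <EXTERN.h>
#include <perl.h>
#include <map>
#include <string>

EXTERN_C void xs_init(pTHX);   /* stub generated by ExtUtils::Embed */

/* PERL_SYS_INIT3() is assumed to be called once in main() before this,
   and PERL_SYS_TERM() once at process exit. */

/* one interpreter instance per script, keyed by script path */
static std::map<std::string, PerlInterpreter*> interpreters;

/* at startup: allocate an interpreter and parse (compile) the script */
static void load_script(const std::string& path)
{
    PerlInterpreter* my_perl = perl_alloc();
    PERL_SET_CONTEXT(my_perl);
    perl_construct(my_perl);

    char* argv[] = { const_cast<char*>(""),
                     const_cast<char*>(path.c_str()) };
    perl_parse(my_perl, xs_init, 2, argv, NULL);

    interpreters[path] = my_perl;   /* ~1 MB resident per instance */
}

/* per request: switch to the script's interpreter and execute it */
static void run_script(const std::string& path)
{
    PerlInterpreter* my_perl = interpreters[path];
    PERL_SET_CONTEXT(my_perl);
    perl_run(my_perl);
}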

I noticed that every interpreter instance allocates about 1 MB of RAM (on
Linux).

So with about 250 scripts, this adds up to about 250 MB of RAM.

Now my questions:

- Did I misunderstand or implement something wrong (one interpreter per
  script vs. one interpreter for all scripts)?

- Why does an interpreter instance that is not even running use this much
  memory (for its environment)?

- Is it possible to compile and store the scripts without a complete
  interpreter instance?

Regards,
Michael Ganz
