Re: Thread::Pool problem

2002-12-01 Thread Elizabeth Mattijsen
At 19:45 +0100 11/29/02, Pasquale Pagano wrote:

Some time ago some of you helped us, and your contribution to our work has
been very important.
We use the Thread::Pool version 0.22 module in our system and it works very
well (under Solaris SunOS 5.8 sun4u sparc SUNW,UltraAX-i2).
Now we have installed the new version 0.28 of the module under a Linux 2.4.x
OS with Perl 5.8.0 and Apache 1.3.26/mod_perl 1.27 (the same configuration
installed under SUN).


A lot of things have happened between the 0.22 and 0.28 versions of 
Thread::Pool.

Could you, as a first test, downgrade to 0.27 of Thread::Pool, available from:

  http://www.liz.nl/CPAN/obsolete/Thread-Pool-0.27.tar.gz

Also: the error message points to something really strange going on. 
This may sound strange, but have you tried changing the order in 
which modules are loaded (where that is possible)?


Can you help us again?


This is the best I can come up with right now.  Looking forward to 
hearing your findings...


Liz


Re: AutoLoader bypass?

2002-08-20 Thread Elizabeth Mattijsen

At 06:24 PM 8/19/02 -0700, Randy J. Ray wrote:
>>Well, my C<use AutoLoader preload ...> would be _outside_ any of the loaded
>>modules, in the mod_perl startup.pl script, after all the modules necessary
>>for proper execution of _your_ mod_perl environment are loaded.
>I see... you mean to have a line like this:
>use AutoLoader preload => { module => [qw(suba subb)] };
>be responsible for both including "module" (into the caller's namespace) *and*
>pre-loading the specified routines? That's different from what I had interpreted
>from your first idea. I thought that the preload specification would be when the
>target module issues its call to "use AutoLoader".

Actually only the preloading part.  Since by default the preload routine would
look at %INC, this C<use> should come _after_ any other C<use> commands.
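
For instance, the mod_perl startup.pl could then end like this (purely
hypothetical, since the proposed preload interface does not exist yet):

  # startup.pl -- first load everything the environment needs...
  use RPC::XML::Server;
  use CGI ();

  # ...and only then, as the very last C<use>, preload all *.al files
  # of every module already present in %INC
  use AutoLoader 'preload';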


>... That's different from what I had interpreted from your first idea. I thought
>that the preload specification would be when the target module issues its call
>to "use AutoLoader".

No, I wouldn't want module authors to change their modules...


> From this vantage point, it does make more sense, yes. Especially since module
>authors would not be responsible for retro-fitting their packages. I would be
>interested to see if this can be done cleanly, without making AutoLoader.pm any
>harder to read than it currently is :-).

Well, that's just a matter of documentation, really...  ;-)


>(OK, that might be asking a bit much...)

Not really, AutoLoader is not a really big module at all...


Liz




Re: AutoLoader bypass?

2002-08-19 Thread Elizabeth Mattijsen

At 02:37 PM 8/19/02 -0700, Randy J. Ray wrote:
>>   use AutoLoader 'preload';  # preload anything you can find in %INC
>>   use AutoLoader preload => { module => '' }; # all from specific module
>>   use AutoLoader preload => { module => [qw(suba subb)] }; # only specific
>>Would that make sense?
>Problem with that approach is that it doesn't offer any control over whether
>you are pre-loading or not. If you are going to pre-load a certain sub-set of
>routines all the time, just put them above the __END__ token and don't burden
>AutoLoader with new functionality at all.

Well, my C<use AutoLoader preload ...> would be _outside_ any of the loaded
modules, in the mod_perl startup.pl script, after all the modules necessary
for proper execution of _your_ mod_perl environment are loaded.



>What I was suggesting was a way that I, as the writer of (picking one of my
>own modules as an example) RPC::XML::Server can incorporate something in the
>compile-time logic so that Apache::RPC::Server contains:
>use RPC::XML::Server 'compile';
>And RPC::XML::Server can have:
>sub import {
> AutoLoader::preload(__PACKAGE__, @list_o_routines)
> if (lc $_[1] eq 'compile');
>}
>(Admittedly a simplistic example, but I hope it gets the point across)

That also makes sense, but wasn't my original idea.

I'd rather make this a class method, so:

   AutoLoader->preload( @list_o_routines );

and have the module derive the module name from caller().  That would at least 
simplify the call and it would reduce the risk of abuse.
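
A rough sketch of what I have in mind (hypothetical code, this is not in
AutoLoader today):

  package AutoLoader;

  # class method: preload the named split-out routines of the caller
  sub preload {
      my ($class, @routines) = @_;
      my $module = caller();               # derive module name from caller()
      (my $path = $module) =~ s{::}{/}g;   # Foo::Bar -> Foo/Bar
      do "auto/$path/$_.al" foreach @routines;
  }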


>This way, I only pre-load routines in the cases where I need it done. Your
>suggestion is good for modules that are only ever used under mod_perl, but
>modules may be designed to work in other environments. Oh, you could manage
>to get the same effect as my idea using a BEGIN block and conditional calls
>of "use AutoLoader", but the above seems to me to be much cleaner.

I'm more interested in the modules that work in both a mod_perl environment and
a "normal" Perl environment.  If a module is only meant to be used under
mod_perl, it just shouldn't use AutoLoader.


Liz




Re: AutoLoader bypass?

2002-08-19 Thread Elizabeth Mattijsen

At 02:05 PM 8/19/02 -0700, Randy J. Ray wrote:
>>Because routines are loaded when they are requested, they may be loaded in
>>child processes, causing duplication in memory because they are not shared.
>>They would be shared if they would be loaded in the initialization phase
>>before the child processes are forked.  But in that phase you wouldn't call
>>all of those routines that you might need in the child processes.
>The problem I would anticipate would be in having a portable way of locating the code 
>to load without having it executed. You could pull some functionality out of 
>AutoLoader, but then you have code duplication.

Indeed.  Most of the necessary code is in AutoLoader::import and AutoLoader::AUTOLOAD 
already.


>Or, an idea that just hit me, you could provide a call in the AutoLoader module that 
>does the job for you. It would have access to all the logic already in the module, 
>and module-writers could use it conditionally a la:
>AutoLoader::preload(__PACKAGE__, @routines)
> if $running_under_modperl;
>Where the @routines list is optional, and defaults to every *.al file found for 
>__PACKAGE__.

I was more thinking along:

  use AutoLoader; # current behaviour

  use AutoLoader 'AUTOLOAD'; # import AUTOLOAD

  use AutoLoader 'preload';  # preload anything you can find in %INC

  use AutoLoader preload => { module => '' }; # all from specific module

  use AutoLoader preload => { module => [qw(suba subb)] }; # only specific


Would that make sense?


Liz




AutoLoader bypass?

2002-08-19 Thread Elizabeth Mattijsen

It wasn't until recently that I realized that the functionality of AutoLoader might 
actually be counter-productive for mod_perl, at least in a prefork MPM.

Because routines are loaded when they are requested, they may be loaded in child 
processes, causing duplication in memory because they are not shared.  They would be 
shared if they would be loaded in the initialization phase before the child processes 
are forked.  But in that phase you wouldn't call all of those routines that you might 
need in the child processes.

I was therefore thinking about the development of a module that would just go through 
%INC and check each of the modules for auto/*.al files and do() all of them (possibly 
limited by a hash keyed to module names with subroutine lists).  And then possibly 
disable AutoLoader altogether, so no memory would inadvertently be lost by routines 
being loaded by child processes.
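
A first, untested sketch of that module's core loop (everything below is an
assumption about what the implementation could look like):

  # walk %INC and do() every auto/*.al file of every loaded module
  foreach my $module (keys %INC) {
      (my $autodir = $module) =~ s/\.pm$//;     # Foo/Bar.pm -> Foo/Bar
      foreach my $incdir (@INC) {
          next if ref $incdir;                  # skip hooks in @INC
          my $dir = "$incdir/auto/$autodir";
          next unless -d $dir;
          do $_ foreach glob("$dir/*.al");      # load each split routine
          last;                                 # first match in @INC wins
      }
  }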


Does such a beast exist already? If not, would such a module make sense?  What would 
be a good name for it?


Liz




Re: R: worker thread

2002-07-16 Thread Elizabeth Mattijsen

At 11:19 PM 7/16/02 +, Stas Bekman wrote:
>> From the command line, it works very well.
>>We are implementing a very complex digital library system.
>>In some cases, we want to start parallel threads in order to minimize the
>>wait.
>>Let me try to explain with an example.
>>'A' starts 4 threads, each of which prepares and sends a request to another
>>server, and then collects its result. When all threads have terminated,
>>'A' will merge the 4 results.
>>Is it more clear now?

You should be able to use Thread::Pool for this.

   $pool = Thread::Pool->new( {
       workers => 10,  # or higher or lower: max simultaneous requests
       do      => sub { LWP::Simple::get( shift ) },  # fetch from URL, return content
   } );

   my @jobid;
   push( @jobid, $pool->job( $_ ) ) foreach @url;
   foreach my $jobid (@jobid) {
       my $result = $pool->result( $jobid );  # do whatever you want with result X
   }



>your problem is that you use the *old* threads (5005threads, pre-5.6.0), 
>whereas mod_perl 2.0 is using ithreads (5.6.0+).
>>>my $t2 = new Thread(\&my_thread,'t2');
>do 'perldoc threads' with perl-5.8.0

Actually, to add to the confusion: only the Thread.pm and Thread::Signal.pm
modules are old 5.005threads modules.  All the other Thread:: namespace
modules (except Malcolm Beattie's old version of Thread::Pool on CPAN) are
new "ithreads" modules.  Only the true "pragma" modules threads.pm and
threads::shared.pm have remained untouched.  This was changed last week, as
described in Rafael's p5p summary:
http://use.perl.org/article.pl?sid=02/07/15/0732235   ;-)


Liz




Re: Working Directory

2002-07-16 Thread Elizabeth Mattijsen

At 06:10 PM 7/16/02 +, Stas Bekman wrote:
>>Arthur told me he either had, or was going to fix this (on IRC).
>Yup, Arthur is working on an external package (ex::threads::safecwd?) 
>which should solve this problem. Viva Arthur! I'll keep you updated once 
>it gets released.

Check out Arthur's article on Perl.com:

   http://www.perl.com/pub/a/2002/06/11/threads.html


Liz




Re: TIPool / multiple database connections

2002-07-16 Thread Elizabeth Mattijsen

At 02:57 PM 7/16/02 +, Stas Bekman wrote:
>Perrin Harkins wrote:
>>Hmmm... That could really throw a wrench in things.  If you have an
>>object based on a hash, and you share that hash, and you re-bless the
>>object in each thread, does that work?  What if the hash contains
>>references to other variables.  Do they need to be explicitly shared as well?
>That's what I meant. Probably no need for Thread::Pool at all. Use a
>shared datastructure, maintain a list of free and busy items and simply
>hand pointers inside this datastructure to the threads asking for an item,
>e.g.:
>
>package DBI::Pool;
>use threads::shared;
>my @pool : shared;
>sub init {} # pre-populate pool with N connections
>sub get {}  # return a ref to $dbh, grow the pool if needed
>sub put {}  # move the pointer from the busy list to the free list

Hmmm... as long as you do this _before_ the (Apache) threads get started,
this might work.  I still haven't got my mind entirely around what one is
allowed to do here: what you can do and is allowed, what is allowed but
crashes, and what in principle is possible but you're barred from because
of e.g. prototyping getting in the way.


>won't this work? I guess Perrin is right in respect that the whole item 
>needs to be shared (deep-shared). can we have such an attribute/function 
>that will automatically traverse the datastructure and share it all? or is 
>this the case already with 'shared'?

Good question.  I don't think it is deep-shared, and that's probably why it
doesn't work.  The way Thread::Queue::Any (which is the transport medium
for Thread::Pool) handles this is by serializing any data structure with
Storable, passing that around, and deserializing it on the other end.
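
In outline, that trick looks like this (a minimal sketch of the idea, not
Thread::Queue::Any's actual code):

  use threads;
  use threads::shared;
  use Storable qw(freeze thaw);

  my $slot : shared;              # a shared scalar as primitive transport

  sub put_any {                   # one thread: freeze an arbitrary structure
      my $data = shift;
      lock $slot;
      $slot = freeze($data);      # a plain string _can_ live in shared memory
      cond_signal $slot;
  }

  sub get_any {                   # other thread: thaw a private copy
      lock $slot;
      cond_wait $slot until defined $slot;
      my $data = thaw($slot);
      undef $slot;
      return $data;               # a fresh, unshared deep copy
  }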


>Now since we want to have various connections, it can be:
>my %pools : shared;
>where each key is a unique identifier, "compiled" from the dbi connect's 
>DSN string and a value is the actual pool.

That's an approach.  If you could actually share the $sth objects.  About 
which I have my doubts.


>BTW, there is no more need for Apache prefix in Apache::DBI, this can be a 
>generic Pool class. I guess Apache::DBI can subclass DBI::Pool and add 
>things like connect_on_init(), but just to build the initial pool when the 
>process starts.

DBI::Pool would be ok.  But unless I'm wrong about the sharing issues, 
you're going to be stuck, at least with this version of Perl, with 
serializing between threads.


Liz




Re: TIPool / multiple database connections

2002-07-15 Thread Elizabeth Mattijsen

At 01:14 AM 7/16/02 +, Stas Bekman wrote:
>>Hmmm...  I guess you're right.  I hadn't thought of applying Thread::Pool 
>>in this situation, but it sure makes sense.  This would however imply 
>>that jobs would be submitted from different threads.  That _should_ work, 
>>but I just realised that I don't have a test-case for that.  Will work on 
>>one and let you know the result.
>I think that's a reverse case, the pool creates the dbh items (tools) and 
>workers pick the items use them and then put back when they are done with 
>them. So it's the pool who creates the "things".

Hmm... but you won't be able to "fetch" the $dbh from the thread.  It can 
only live in _that_ thread.  You cannot "pass" objects between 
threads.  But you _can_ send queries to that thread, fetch a jobid for that 
job and then obtain whatever was returned as a Perl datastructure.

(if anyone knows of a way to pass objects between threads, I'd really
like to know)

With Thread::Pool you would do something like this:

  use Thread::Pool;
  my $pool = Thread::Pool->new(
      {
          workers => 10,
          pre     => \&pre,
          do      => \&do,
      },
      @database_parameters );

  my @result = $pool->wait( @query_parameters );


  sub pre {
      my $dbh = DBI->connect( @_ );  # make database connection with @_
      # maybe "prepare" any statements
      return $dbh;
  }

  sub do {
      my $pool  = shift;
      my ($dbh) = $pool->pre;
      # do whatever you want to do in the database, dependent on @_
      # could be any standard list, data-structure, etc., but _not_ an object!
      return @results;               # whatever you want to hand back
  }


>btw, one thread should be able to pick more than one item at once. but in
>this particular case of DBI, I think there should be a different pool for
>each connection group. similar to what Doug has suggested in his original
>TIPool prototype in the overview doc.

Thread::Pool doesn't work that way.  You could have 1 database connection 
in one worker thread and 40 threads submitting jobs: they would be handled 
in the order they were submitted.  This effectively serializes access 
(which could be an approach for DBI drivers that do not support _any_ 
threading at all).

Or you could have 10 worker threads with 40 threads submitting jobs.  That 
would work faster if your database is threaded as well  ;-)
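
To make the two setups concrete (connect_db and run_query are made-up
names for the subs doing the actual DBI work):

  # 1 worker: all database access serialized through a single connection
  my $serialized = Thread::Pool->new(
      { workers => 1,  pre => \&connect_db, do => \&run_query } );

  # 10 workers: up to 10 queries in flight, if the database can take it
  my $threaded = Thread::Pool->new(
      { workers => 10, pre => \&connect_db, do => \&run_query } );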


Liz




Re: TIPool / multiple database connections

2002-07-15 Thread Elizabeth Mattijsen

At 12:18 AM 7/16/02 +, Stas Bekman wrote:
>...A few folks at p5p are creating a bunch of new modules around threads:: 
>and threads::shared::, just yesterday a new module: Thread::Pool was 
>released by Elizabeth Mattijsen. Which seems to be what's needed for 
>Apache::DBITPool.

Hmmm...  I guess you're right.  I hadn't thought of applying Thread::Pool 
in this situation, but it sure makes sense.  This would however imply that 
jobs would be submitted from different threads.  That _should_ work, but I 
just realised that I don't have a test-case for that.  Will work on one and 
let you know the result.


Liz




Re: www.modperl.com .. ?

2002-04-05 Thread Elizabeth Mattijsen

At 11:24 AM 4/5/02 -0600, [EMAIL PROTECTED] wrote:
>According to the subscription notice ...
>  www.modperl.com is a resource site  but I've tried many times ...
>is this site still valid?

Works fine from here...


Elizabeth Mattijsen




Re: Open3

2002-04-03 Thread Elizabeth Mattijsen

At 01:44 PM 4/3/02 -0800, Rasoul Hajikhani wrote:
>Hello folks,
>I am writing a web based interface to gpg and am using IPC::Open3 and
>IO::Select to manage STDIN, STDOUT and STDERR handles. But, I can not
>get stdin to work properly. Here is my code:
>I am using perl 5.053 and Apache/1.3.14 Ben-SSL/1.42 (Unix) PHP/4.0.3pl1
>mod_perl/1.24_01.
>Can anyone see what I am doing wrong?

Make sure your IPC::Open3 is up-to-date.  If it is the version that came 
with Perl 5.053, it probably has a bug in it.  I remember having to patch 
it waaay back (like 5 years ago) to get it to work properly...


Elizabeth Mattijsen




Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 12:43 AM 3/13/02 +0800, Stas Bekman wrote:
>Doug has plans for a much improved opcode tree sharing for mod_perl 2.0, 
>the details are kept as a secret so far :)

Can't wait to see that!


>This topic is covered (will be) in the upcoming mod_perl book, where we
>include the following reference materials which you may find helpful for
>understanding the shared memory concepts.

Ah... ok...  can't wait for that either...  ;-)


>Don't you love mod_perl for what it makes you learn :)

Well, yes and no...  ;-)


Elizabeth Mattijsen




Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 11:46 PM 3/12/02 +0800, Stas Bekman wrote:
>>I'm not sure whether my assessment of the problem is correct.  I would 
>>welcome any comments on this.
>Nope Elizabeth, your explanation is not so correct. ;)

Too bad...  ;-(


>Shared memory is not about sharing the pre-allocated memory pool (heap 
>memory). Once you re-use a bit of preallocated memory the sharing goes away.

I think the phrase is Copy-On-Write, right?  And since RAM is allocated in
chunks, let's assume 4K for the sake of the argument, changing a single
byte in such a chunk causes the entire chunk to be unshared.  In older
Linux kernels, I believe I have seen that when a byte gets changed in a
chunk of any child, that chunk becomes unshared for _all_ children.  Newer
kernels only unshare it for that particular child.  Again, if I'm not
mistaken, and someone please correct me if I'm wrong...

Since Perl is basically all data, you would need to find a way of
localizing all memory that is changing to as few memory chunks as
possible.  My idea was just that: by filling up all "used" memory before
spawning children, you would use up some memory, but that would be shared
between all children and thus not so bad.  But by doing this, you would
hopefully cause all changing data to be localized to newly allocated memory
by the children.  Wish someone with more Perl guts experience could tell me
if that really is an idea that could work or not...
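
Purely as a thought experiment, the crudest Perl-level approximation of
that idea would be something like this in the parent, just before the
children are spawned (whether this actually helps depends entirely on the
allocator's behaviour, so treat it strictly as a sketch):

  # keep a big allocation alive in the parent, so that perl's free pool
  # is exhausted: anything the children allocate afterwards has to come
  # from fresh pages instead of dirtying old, shared ones
  our $pad = 'x' x (32 * 1024 * 1024);   # ~32MB; the size is a pure guess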


>Shared memory is about 'text'/read-only memory pages which never get 
>modified and pages that can get modified but as long as they aren't 
>modified they are shared. Unfortunately (in this aspect, but fortunately 
>for many other aspects) Perl is not a strongly-typed (or whatever you call 
>it) language, therefore it's extremely hard to share memory, because in 
>Perl almost everything is data. Though as you could see Bill was able to 
>share 43M out of 50M which is damn good!

As a proof of concept I have run more than 100 200MB+ children on a 1 GB
RAM machine, with sharing so high that the shared-memory bytes field in
"top" cycled through its 32-bit range multiple times...  ;-) .  It was
_real_ fast (it had all of the data it needed as Perl hashes and lists) and
ran ok until something would start an avalanche effect and it would all go
down in a whirlwind of swapping.  So in the end, it didn't work reliably
enough  ;-(  But man, was it fast when it ran...  ;-)


Elizabeth Mattijsen




Re: loss of shared memory in parent httpd (2)

2002-03-12 Thread Elizabeth Mattijsen

Oops. Premature sending...

I have two ideas that might help:
- reduce the number of global variables used; lexicals cause less memory
pollution
- make sure that you have the most up-to-date (kernel) version of your
OS.  Newer Linux kernels seem to be a lot savvier at handling shared memory
than older kernels.

Again, I wish you strength in fixing this problem...


Elizabeth Mattijsen




Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 09:18 AM 3/12/02 -0500, Bill Marrs wrote:
>If anyone has any ideas what might cause the httpd parent (and new 
>children) to lose a big chunk of shared memory between them, please let me 
>know.

I've seen this happen many times.  One day it works fine, the next you're
in trouble.  And in my experience, it's not so much a matter of why this
avalanche effect happens, but more a matter of "why didn't it happen
before?".  You may not have realised that you were just below a
"threshold" and now you're over it.  And the change can be as small as the
size of a heavily used template that suddenly gets over an internal memory
allocation border, which in turn causes Perl to allocate more, which in
turn causes memory to become unshared.

I have been thinking about a perl/C routine that would internally "use" all 
of the memory that was already allocated by Perl.  Such a routine would 
need to be called when the initial start of Apache is complete so that any 
child that is spawned has a "saturated" memory pool, so that any new 
variables would need to use newly allocated memory, which would be 
unshared.  But at least all of that memory would be used for "new" 
variables and not have the tendency to pollute "old" memory segments.

I'm not sure whether my assessment of the problem is correct.  I would 
welcome any comments on this.

I have two ideas that might help:
-
- other than making sure that you have the most up-to-date (kernel) version 
of your OS.  Older Linux kernels seem to have this problem a lot more than 
newer kernels.

I wish you strength in fixing this problem...


Elizabeth Mattijsen




Re: [request] modperl mailing lists searchable archives wanted

2001-10-09 Thread Elizabeth Mattijsen

At 05:59 PM 10/9/01 +0800, Stas Bekman wrote:
>Please try to send links only for good archives with good search engines.
>Thanks a bunch!

Still in beta phase, and only containing Perl newsgroups, it nonetheless 
might be interesting to check out:

   http://news.search.nl/style/search.en/read/category/Programming_Languages
   http://news.search.nl/style/search.en/read/category/Programming_Languages/Perl/list/page1.html

Currently refreshed 4 times a day, with searching being refreshed once a day.

The site actually runs ModPerl with Matt Sergeant's LibXML and LibXSLT modules.




Elizabeth Mattijsen

Note: I am the main developer of this website, so I am prejudiced  ;-)




Re: Again, Modperl running scripts slower than non-modperl!?

2001-08-04 Thread Elizabeth Mattijsen

At 01:29 AM 8/5/01 -0500, John Buwa wrote:
>91 processes: 89 sleeping, 2 running, 0 zombie, 0 stopped
>CPU states:  0.0% user,  0.7% system,  0.0% nice, 99.2% idle
>Mem:   257408K av,  228384K used,   29024K free,  13744K shrd,  5380K buff
>Swap:  265528K av,  184780K used,   80748K free,   8908K cached
>  PID  USER   PRI  NI  SIZE   RSS  SHARE STAT %CPU %MEM  CTIME COMMAND
>25788  nobody   0   0  126M  125M   1076 S     0.0 49.8   0:13 httpd  <--- modperl server
>25787  nobody   0   0  196M   32M   1356 S     0.0 12.9   0:19 httpd  <--- modperl server
>25799  nobody   0   0 32592   30M      8 S     0.0 12.2   0:10 httpd  <--- non-modperl server

Not having read anything before this, but it seems that your machine is
going into swap because there is not enough RAM available.  That always
kills your performance.  Could you run your test on a different machine or
temporarily switch off the regular server?

Trying to run close to 200 Mbyte mod_perl Apaches on a 256 Mbyte machine is
not going to work.  Have you looked at MaxRequestsPerChild?

But even the non-modperl servers at 30 Mbyte size seem ridiculously large:
are you sure you need all the modules that you compiled into Apache?


Elizabeth Mattijsen




Re: Redirect with anchors.

2001-04-08 Thread Elizabeth Mattijsen

At 02:56 PM 4/8/01 +0200, Antti Linno wrote:
>$req_rec->header_out("Location" => "intranet.html?action=show#anchor_name");

I think you should provide the _complete_ URL, including the http://server
part.  Some browsers do not handle incomplete URLs like this in a redirect
correctly.  Please someone correct me if I'm wrong here...
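
So, for example (hostname made up):

  $req_rec->header_out(
      Location => "http://www.example.com/intranet.html?action=show#anchor_name"
  );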


>The html has such a tag, but the redirect shows no effect. Tried with a simple
>html link and it worked.
>Any ideas how to get to some anchor in the middle of the page?

Some browsers support the ?action#anchor syntax.  Some don't (particularly
not-too-recent versions of MSIE).  If you really want to do this reliably,
you should hide your parameter in the URL and use a RewriteRule or a
mod_perl handler to extract the parameter: e.g. instead of
"intranet.html?action=show#anchor_name", use a URL in the form
"/show/intranet.html#anchor", as sketched below.
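
A minimal sketch of the mod_perl-handler variant (mod_perl 1.x API; the
package name, and the assumption that $r->args can be set this way, are
mine, so verify before relying on it):

  package My::AnchorTrans;   # hypothetical PerlTransHandler
  use Apache::Constants qw(OK DECLINED);

  sub handler {
      my $r = shift;
      # map /show/intranet.html to /intranet.html?action=show; the
      # #anchor part never reaches the server, the browser keeps it
      if ( $r->uri =~ m{^/show/(.+)$} ) {
          $r->uri("/$1");
          $r->args("action=show");
          return OK;
      }
      return DECLINED;
  }
  1;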


Hope this helps...


Elizabeth Mattijsen




Re: Long KeepAlives sensible?

2001-04-06 Thread Elizabeth Mattijsen

At 08:16 PM 4/6/01 -0700, Stas Bekman wrote:
> > I realise that KeepAlives are discouraged in combination with mod_perl.
> > However, for a project I'm working on, I would like to try to have
> > KeepAlive On to allow for _very_ quick response times.  I can set the
>Theo Schlossnagle at his talk (ApacheCon) yesterday has shown a setup
>where one uses Keep-Alives with mod_perl, where mod_backhand is used at
>the front end, instead of mod_proxy/squid. Basically the mod_perl
>processes keep a constant connection to the front end server and
>mod_backhand uses those connections as a pool. Check out
>http://www.backhand.org/mod_backhand/.

Interesting approach...  would be a good option for scaling up my approach 
when needed without the pooling of connections...


>I'm planning to add a section on this topic to the guide, but since I'm
>busy working on finishing the book it might take a while before I get to
>actually do it. So if anyone can write about it beforehand, that would be
>really cool to have. Thanks!

I'm preparing a presentation about this project for the Amsterdam Perl
Mongers and/or YAPC::Europe in Amsterdam in August.  No doubt parts of that
will be usable for the guide... but that won't be right now, so if anyone
else wants to have a go at it...  ;-)


Elizabeth Mattijsen




Re: Long KeepAlives sensible?

2001-04-06 Thread Elizabeth Mattijsen

At 02:36 PM 4/6/01 +0200, Elizabeth Mattijsen wrote:
>1. To facilitate memory management, I would like to have the apache child 
>terminate whenever the keep-alive connection is broken or has timed 
>out.  There does not seem to be a handler that will handle the end of a 
>keep-alive connection yet.  Is that correct?  Is there a way around it?

I just realised this is a non-issue.  Just do an Apache->child_terminate and
the child will terminate at the end of the request, which _is_ at the end
of the KeepAlive...
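
In other words (mod_perl 1.x, here via the request object):

  use Apache::Constants qw(OK);

  sub handler {
      my $r = shift;
      # ... generate and send the response ...
      $r->child_terminate;   # this child exits after the current request,
                             # i.e. when the KeepAlive connection ends
      return OK;
  }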


Elizabeth Mattijsen




Re: Long KeepAlives sensible?

2001-04-06 Thread Elizabeth Mattijsen

At 02:36 PM 4/6/01 +0200, Elizabeth Mattijsen wrote:
>I realise that KeepAlives are discouraged in combination with mod_perl.
>However, for a project I'm working on, I would like to try to have 
>KeepAlive On to allow for _very_ quick response times.  I can set the the 
>way the
^^
something weird happened there. It should have read (with "ht" reversed):

>On to allow for _very_ quick response times.  I can set the Content-Lenght:
>header for my output, so there is no problem there.  And the way the things

Somewhere the line with Content-Lenght got filtered...


Elizabeth Mattijsen




Long KeepAlives sensible?

2001-04-06 Thread Elizabeth Mattijsen

I realise that KeepAlives are discouraged in combination with mod_perl.

However, for a project I'm working on, I would like to try to have 
KeepAlive On to allow for _very_ quick response times.  I can set the 
Content-Length: header for my output, so there is no problem there.  And 
the way the things will be set up, is that every user only uses 1 
persistent connection (just for the html/xml only, other stuff will be 
retrieved from other servers).  I'm thinking of KeepAliveTimeouts of up to 
300 seconds.  You could think of it as session caching on a connection.

The data served consists of potentially many millions of newsgroup and
mailing list messages (yes: we'll be doing Perl-related stuff also).  Some
data (which newsgroups and mailing lists, which starts of threads, etc.)
will be loaded at startup of the server and shared "globally" by all apache
children.  Data pertaining to a particular newsgroup will be loaded on
demand in the apache child handling the request.


I see the following potential problems with this approach:

1. To facilitate memory management, I would like to have the apache child 
terminate whenever the keep-alive connection is broken or has timed 
out.  There does not seem to be a handler that will handle the end of a 
keep-alive connection yet.  Is that correct?  Is there a way around it?

2. when a user initiates another request before the first request is
finished, a different apache child will be used to service the request,
thereby breaking the other persistent connection.  This may cause several
children to co-exist with essentially the same data internally, but not
shared at the OS memory level.


I guess my questions really are: does such an approach make sense?  Are
there solutions to the problems mentioned above?  Are there any pitfalls
that I haven't realised yet?


Any feedback would be greatly appreciated.


Elizabeth Mattijsen




Re: Very[OT]:Technical query re: scratchpad lookups for my() vars

2001-03-14 Thread Elizabeth Mattijsen

At 03:52 PM 3/14/01 -0800, Paul wrote:
>But nothing about the structural/algorithmic mechanics. :<

 From the perlsub docs:

Variables declared with my are not part of any package and are therefore 
never fully qualified with the package name. In particular, you're not 
allowed to try to make a package variable (or other global) lexical:

    my $pack::var;  # ERROR!  Illegal syntax
    my $_;          # also illegal (currently)

In fact, dynamic variables (also known as package or global variables) are
still accessible using the fully qualified :: notation even while a lexical
of the same name is also visible:

    package main;
    local $x = 10;
    my $x = 20;
    print "$x and $::x\n";

That will print out 20 and 10.

There is _no_ stash of lexicals during execution, only during
compilation.  I guess that's one of the reasons lexicals are faster.


Elizabeth Mattijsen




Re: Deep recursion on subroutine "Apache::Constants::AUTOLOAD"

2000-04-25 Thread Elizabeth Mattijsen

At 12:59 4/25/00 -0600, Martin Lichtin wrote:
>Doug MacEachern wrote:
>> > Anyone understand why
>> > perl -we 'use Apache::Constants; Apache::Constants::OK();'
>> > causes this problem?
>> what version of mod_perl are you using? '
>mod_perl 1.21,  perl 5.00503

This reminds me of a problem that we had under mod_perl: I'm not sure it
exists in later versions, nor whether it has anything to do with your problem.
Just in case it might help you, here is how this would go:

package MyPackage::SubModule;
sub new { my $self = {}; bless $self; $self }

package MyPackage;
sub new { my $self = {}; bless $self; $self }
sub SubModule { MyPackage::SubModule->new( @_ ) }
                ^^^^^^^^^^^^^^^^^^^^  (note: bareword class name)
For some reason with _some_ combinations of Perl and mod_perl, the
following code would recurse infinitely and cause the "Deep recursion"
error message:

$mypackage = new MyPackage;
$submodule = $mypackage->SubModule( @parameters );

However, if you would write SubModule like this:

sub SubModule { 'MyPackage::SubModule'->new( @_ ) }
                ^^^^^^^^^^^^^^^^^^^^^^  (note: quoted class name)
there was no problem.  This would indicate to me that there is some kind of
compile-time optimization in that version of Perl/mod_perl that incorrectly
assumes that the bareword "MyPackage::SubModule" is a subroutine call (it
names the SubModule sub just defined), where in fact a class name is
intended.  By putting the class name between single quotes, the
optimization is apparently by-passed and the problem disappeared.


Hope this helps.


Elizabeth Mattijsen
Integra Netherlands



Re: Why I think mod_ssl should be in front-end

2000-02-03 Thread Elizabeth Mattijsen

At 14:51 2/3/00 -0500, Vivek Khera wrote:
>>>>>> "TM" == Tom Mornini <[EMAIL PROTECTED]> writes:
>TM> A fairly new option, I believe, and an excellent point.
>Not really.  I saw these boards available at least 2 years ago, which
>is about half the age of the Web ;-)

iPivot was a startup company recently acquired by Intel, see:

http://www.intel.com/network/products/ecommerce_equipment.htm


Elizabeth Mattijsen

Tel: 020-6005700        Nieuwezijds Voorburgwal 68-70
Fax: 020-6001825        1012 SE  AMSTERDAM

For serious technical problems we can be reached outside office
hours: 06-29500176, or see our website.

--
Web Development | Web Hosting | Web Maintenance | Web Integration
--
 xxLINK, an Integra-Net company



Re: Using network appliance Filer with modperl

2000-02-03 Thread Elizabeth Mattijsen

Hi,

if I can step in here...  ;-)

At 15:11 2/3/00 +, Tim Bunce wrote:
>> As a front-end we have 'cheap' PC's running Linux. The disks in the PC's
>> are only used for the OS and temporary storage of logs, etc.
>What level of web traffic are you handling 'from' the netapp?
>E.g., how much traffic to the netapp is there when your web site
>is getting peak traffic?

As we put the maximum amount of RAM in our Linux boxes, in most cases we don't
notice anything in the NetApp traffic when a site gets hit badly.  For
example, we host one of the Dutch National newspapers (http://www.nrc.nl)
that way: because they come out with a daily edition around 4pm local time,
traffic varies from about 300 Kbit/sec during the day to about 2Mbit/sec
around the time the new update becomes available.  However, we can't see
anything special in the NetApp traffic graph at that time: it is all being
served from the front-end server RAM.  Since PC RAM is cheap, we can get a
lot of mileage out of our NetApp.

If we look at the total graph of NetApp traffic development over the past two
years, that graph has only risen about 25% from the original average
traffic.  However, our web traffic has quadrupled over that period, and the
number of front-end servers is now about 20 instead of the original 3.  And
the size of the NetApp has grown from 10 Gbyte to now about 45 Gbyte of
diskspace.

So I guess I would argue that maximum (relatively cheap) RAM in your
front-end servers is much more important than the maximum NetApp bandwidth...


Elizabeth Mattijsen

Tel: 020-6005700        Nieuwezijds Voorburgwal 68-70
Fax: 020-6001825        1012 SE  AMSTERDAM

For serious technical problems we can be reached outside office
hours: 06-29500176, or see our website.

--
Web Development | Web Hosting | Web Maintenance | Web Integration
--
 xxLINK, an Integra-Net company



Re: Using network appliance Filer with modperl

2000-02-01 Thread Elizabeth Mattijsen

At 11:16 1/31/00 -0800, siberian wrote:
>My question is : Has anyone experienced any 'gotchas' in putting perl code
>that modperl handlers use on a Network Attached file server like a network
>appliance box ( www.netapp.com )? I am assuming that there are no real
>issues but before i go blow a ton of cash on this thing I wanted to be
>sure that no one had found a problem.

We have been using such a setup for over 2 years now.  The only real issue
we've found is not so much with mod_perl itself, but with MySQL.  If you
put your databases on the NetApp, either have a separate central database
server, or make damn sure you do not use the same database from two
different front-end servers.  We've seen database corruption that way
(using Linux front-end servers with NFS 2).

With regard to the single point of failure: the only thing that failed so
far was one fan, which could be replaced without shutting the NetApp down.
Also make sure that you go for the deal in which you get spare parts for
just about everything, so that you can fix any hardware problems yourself
very quickly.

With regard to fsck on large file systems: we've heard a horror story
about that as well (with Xs4all here in Amsterdam).  I recall having read
that they fixed the problem of fsck taking very long on large
file-systems in the latest OnTap release.


Elizabeth Mattijsen

Tel: 020-6005700        Nieuwezijds Voorburgwal 68-70
Fax: 020-6001825        1012 SE  AMSTERDAM

For serious technical problems we can be reached outside office
hours: 06-29500176, or see our website.

--
Web Development | Web Hosting | Web Maintenance | Web Integration
--
 xxLINK, an Integra-Net company



Re: Dynamic page stats (was: Server Stats)

1999-10-06 Thread Elizabeth Mattijsen

At 10:29 10/6/99 -0700, Jim Serio wrote:
>How do those of you who run dynamic Web sites
>deal with stats. I'm particularily interested
>in those of you who use 1 or 2 scripts to
>generate the whole site and how to differentiate
>between what sections were viewed since a single
>script generated them all.

Use either mod_rewrite or a Translation handler to translate 
a URL in the form:

   http://www.site.com/clientname/documentnumber.htm

to:

   http://www.site.com/process-script?client=clientname&ID=documentnumber


You can then filter on the directory name to get the hits for
a particular client.
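
A minimal sketch of the Translation-handler variant (mod_perl 1.x API;
the package name, and the assumption that $r->args can be set this way,
are mine):

  package My::StatsTrans;
  use Apache::Constants qw(OK DECLINED);

  sub handler {
      my $r = shift;
      # /clientname/documentnumber.htm -> /process-script?client=...&ID=...
      if ( my ($client, $id) = $r->uri =~ m{^/([^/]+)/(\d+)\.htm$} ) {
          $r->uri('/process-script');
          $r->args("client=$client&ID=$id");
          return OK;
      }
      return DECLINED;   # let other URLs pass through untouched
  }
  1;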


Elizabeth Mattijsen
xxLINK Internet Services