> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of
> [EMAIL PROTECTED]
> Sent: Wednesday 21 March 2001 16:30
> To: [EMAIL PROTECTED]
> Subject: RE: [Perl-unix-users] memory usage of perl processes
> >
> >I guess you could use 'top' in a unix window.
> >
> >Then kick off 1 drone only and look at the memory usage. Then multiply it
> >by the number of processes you expect.
> >55 is a lot of processes!!!! On any unix system.
> >
> >Perhaps you could stagger the number of processes, so maybe spawn 30
> >drones and have each drone process 2 sites in serial, one after the other?
> >
> >I've seen some pretty large perl processes on very similar spec machines
> >to yours.
> >
> >Marty
>
> >> -----Original Message-----
> >>
> >> I am designing a system to process almost 4000 remote sites in a nightly
> >> sweep. This process is controlled from a database which maintains site
> >> status in realtime (or at least that's the goal). I am attempting to
> >> fork off around 100 "drones" to process 100 concurrent sites. Each drone
> >> will need a connection to the database. In doing some impromptu testing
> >> I've had the following results...
> >>
> >> Writing a queen that does nothing and sets nothing (no variables or
> >> filehandles are open) except fork off drones, and a drone that only
> >> connects to the database and nothing else, gave the following results
> >> on this machine config:
> >>
> >> RedHat Linux 6.2, PIII 600, 256MB RAM, 256MB Swap - Anything more than
> >> 55 drones and the system is entirely out of memory.
> >>
> >> Is Perl really using that much memory? There is approx 190MB of RAM
> >> free, and nearly ALL the swap space free when I kick off the Queen
> >> process.
> >>
> >> Do these results seem typical? Is there any way to optimize this?
> >>
> >> Thanks
> >>
> >> Chuck
>
> Marty,
>
> top is in fact what I have been using to watch it, but I have heard that
> you cannot fully trust its memory reporting because of shared memory
> usage and other things, but of course that may be fixed.
>
> The production box this system will live on is the same OS, but dual PIII
> 733, 1GB RAM, 2GB Swap. BUT! (there's always a but) it's not as though
> THIS system has full access to that box; another nightly system runs
> around the same time, so there will surely be overlap. I am looking to
> optimize this system as much as possible to avoid bringing the box down
> because it's out of memory. The reason for wanting so many concurrent
> drones is that the process currently runs for several hours (using 100
> plain FTPs launched from shell scripts) and we are adding a new site
> approx every 17 hours - no joke. So we don't want to make it take any
> longer than it does already; in fact we'd like to be able to scale up if
> need be as the number of sites increases.
>
> I guess my real question here is: Is this much memory usage just a fact
> of life when using perl and fork()ing? If so, I can try to ask for more
> RAM and size the production box accordingly. Obviously my testing is
> doing NOTHING compared to what will really be getting processed in the
> real system, so RAM usage is likely to be much larger than it is in these
> tests - which is my fear. I intend to use good practices, close any
> handles, and free as many variables as possible before forking, etc., but
> I have fears of bringing this box to its knees, so if anyone can offer
> some tips to optimize the system I am all ears!
>
> Thanks
>
> Chuck
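The staggering idea quoted above - capping the number of concurrent drones
and having each drone work through a batch of sites in series - can be
sketched roughly like this in Perl (a minimal sketch; the drone count,
site list, and process_site() are all hypothetical stand-ins, not Chuck's
actual code):

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch: cap concurrency at $MAX_DRONES and hand each
# drone a slice of the site list to process serially.
my $MAX_DRONES = 30;
my @sites      = map { "site$_" } 1 .. 120;   # stand-in for the real list

# Deal the sites round-robin into $MAX_DRONES roughly equal batches.
my @batches;
push @{ $batches[ $_ % $MAX_DRONES ] }, $sites[$_] for 0 .. $#sites;

for my $batch (@batches) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                  # child: the drone
        process_site($_) for @$batch; # each drone works serially
        exit 0;
    }
}
1 while wait != -1;                   # queen reaps all drones

sub process_site {
    my ($site) = @_;
    # placeholder for the real FTP/database work
}
```

Freeing large variables and closing handles before the fork loop matters
here, since every drone inherits a copy of the queen's data.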
Are you using any perl benchmarking?
I noticed problems with memory leaks in code written under perl 5.005
which used benchmarking. This seems to have been fixed with perl 5.6.
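For reference, "benchmarking" here means the core Benchmark module; a
typical use looks like this (a generic illustration only - the subroutine
names and iteration count are made up, not the leaking code in question):

```perl
use strict;
use Benchmark qw(timethese);

# Time two hypothetical ways of building a list, 10_000 iterations each.
timethese(10_000, {
    push_loop => sub { my @a; push @a, $_ for 1 .. 50 },
    map_list  => sub { my @a = map { $_ } 1 .. 50 },
});
```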
I was suggesting using top to monitor the memory usage of just the perl
processes (pressing M in top sorts by memory usage), which should put your
perl processes at the top of the list.
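If you don't fully trust top's per-process numbers, you can also total
VmRSS straight from /proc (a Linux-specific sketch; note that RSS counts
shared pages, so summing it across many forked drones overstates their
true combined footprint):

```perl
#!/usr/bin/perl -w
use strict;

# Linux-specific sketch: sum the resident set size (VmRSS) of every
# process whose command name matches a pattern, read from /proc.
# Caveat: RSS includes shared pages, so the total is an upper bound.
sub total_rss_kb {
    my ($pattern) = @_;
    my $total = 0;
    opendir my $proc, '/proc' or die "can't read /proc: $!";
    for my $pid (grep { /^\d+$/ } readdir $proc) {
        open my $st, '<', "/proc/$pid/status" or next;  # may have exited
        my ($name, $rss);
        while (<$st>) {
            $name = $1 if /^Name:\s+(\S+)/;
            $rss  = $1 if /^VmRSS:\s+(\d+)\s+kB/;
        }
        $total += $rss if defined $name && defined $rss && $name =~ $pattern;
    }
    return $total;
}

printf "perl processes are using %d KB resident\n", total_rss_kb(qr/perl/);
```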
I use a PIII 550 test box with the following output from top:
4:40pm up 50 days, 4:03, 4 users, load average: 2.08, 2.07, 2.09
219 processes: 105 sleeping, 3 running, 111 zombie, 0 stopped
CPU states: 8.0% user, 51.4% system, 40.5% nice, 0.0% idle
Mem: 257788K av, 231000K used, 26788K free, 61512K shrd, 6488K buff
Swap: 136040K av, 25220K used, 110820K free 18916K cached
  PID USER     PRI  NI  SIZE  RSS SHARE STAT LIB %CPU %MEM   TIME COMMAND
 9580 root       0   0 14656  14M  1632 S      0  0.0  5.6   0:13 X
18000 news       4   4 13080  12M  1560 S N    0  0.0  5.0   0:02 ftpfeed.pl
18004 news       4   4 12644  12M  1368 S N    0  0.0  4.9   0:02 present.pl
18569 news       4   4 12492  12M  1368 S N    0  0.0  4.8   0:02 present.pl
18651 news       4   4 12108  11M  1560 S N    0  0.0  4.6   0:02 ftpfeed.pl
19119 news       4   4 12056  11M  1560 S N    0  0.0  4.6   0:02 ftpfeed.pl
18654 news       4   4 11608  11M  1368 S N    0  0.0  4.5   0:02 present.pl
19116 news       4   4 11548  11M  1368 S N    0  0.0  4.4   0:01 present.pl
18002 news       4   4 10756  10M  1364 S N    0  0.0  4.1   0:01 SNMPTrap.pl
18282 news       4   4 10756  10M  1364 S N    0  0.0  4.1   0:01 SNMPTrap.pl
10151 news       4   4 10752  10M  1364 S N    0  0.0  4.1   0:02 SNMPTrap.pl
18653 news       4   4 10752  10M  1364 S N    0  0.0  4.1   0:01 SNMPTrap.pl
20562 news       4   4 10752  10M  1364 S N    0  0.0  4.1   0:01 SNMPTrap.pl
I wouldn't say any of the above processes are especially big, but they do
use a lot of CPAN modules.
I don't know how this compares to your system, but they do use a decent
portion of memory.
Marty
_______________________________________________
Perl-Unix-Users mailing list. To unsubscribe go to
http://listserv.ActiveState.com/mailman/subscribe/perl-unix-users