Hi Zentara,
I know little about Perl's threads, so I'm not certain I can make my program run
successfully under thread mode.
In fact, my program is not complicated. It accepts data lines like the following
from about 200 clients:

1_uee002##_01_100g836j:2048000:get

and uses these elements to build a hash:

    my %records;
    my ($mid, $size, $type) = split /:/, $line;
    my $timestamp = time();
    $records{$mid}{$timestamp} += $size;

When a request comes in, I want to fork a child process to handle it.

    my $child = fork();
    die "can't fork: $!" unless defined $child;
    if ($child == 0)
    {
        do_something_to_hash();
        exit 0;   # make sure the child doesn't fall through into parent code
    }

Here, I want to modify the global hash %records in each child. These
modifications include inserting, modifying, and deleting items.
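This is exactly where plain fork() falls short: the child gets its own copy of
the hash, so nothing it stores is ever visible to the parent. A minimal sketch
(the hash key is just the sample mid from above) demonstrates the problem:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The child gets its own copy-on-write copy of %records, so its
# modifications never reach the parent process.
my %records;

my $child = fork();
die "can't fork: $!" unless defined $child;

if ($child == 0) {
    # runs in the child only: this changes the child's private copy
    $records{'1_uee002##_01_100g836j'}{time()} = 2048000;
    exit 0;
}

waitpid($child, 0);
print scalar(keys %records), "\n";   # prints 0: the parent never sees the update
```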

Right now I use a single process to handle everything, since I haven't found a
way to share the variable across multiple processes. I've also noticed that
after the server has run for one whole day, it uses about 60 MB of memory.

Following Zentara's and Chas's advice, I know there are at least three ways to
do that:

1) threads;
2) IPC::Shareable;
3) tie the hash to a DBM file and write it to the local filesystem.
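For option 1, a minimal sketch might look like the following. It assumes a
thread-enabled perl, and it swaps the fork-per-request model for
thread-per-request; nested levels of a shared hash must themselves be shared,
hence the `&share({})` idiom from threads::shared:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

# The whole hash is marked shared; every thread sees the same %records.
my %records :shared;

sub do_something_to_hash {
    my ($line) = @_;
    my ($mid, $size, $type) = split /:/, $line;
    my $ts = time();
    lock(%records);   # serialize updates across threads
    # inner hash levels must be explicitly shared before use
    $records{$mid} = &share({}) unless exists $records{$mid};
    $records{$mid}{$ts} = ( $records{$mid}{$ts} // 0 ) + $size;
}

# simulate three concurrent requests carrying the sample line
my @workers = map {
    threads->create(\&do_something_to_hash,
                    '1_uee002##_01_100g836j:2048000:get')
} 1 .. 3;
$_->join for @workers;

my $total = 0;
$total += $_ for values %{ $records{'1_uee002##_01_100g836j'} };
print "total=$total\n";   # prints total=6144000
```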

I'll try those approaches separately. Thanks to all.
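For option 3, a hypothetical sketch using SDBM_File (a core module; DB_File or
GDBM_File work the same way) could look like this. The file path and the
"mid:timestamp" key scheme are my own assumptions: DBM values are flat strings,
so the timestamp level is folded into the key instead of nesting hashes, and
with forked children each write would still need flock() around it:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;

# Tie the hash to an on-disk DBM file so data survives across processes.
my $dbfile = '/tmp/records_demo';
tie my %records, 'SDBM_File', $dbfile, O_RDWR | O_CREAT, 0644
    or die "can't tie: $!";

my $line = '1_uee002##_01_100g836j:2048000:get';
my ($mid, $size, $type) = split /:/, $line;
my $key  = $mid . ':' . time();   # flat key: "mid:timestamp"

$records{$key} = ( $records{$key} || 0 ) + $size;
my $stored = $records{$key};
print "$stored\n";

untie %records;
unlink "$dbfile.pag", "$dbfile.dir";   # remove the demo's on-disk files
```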



-----Original Message-----
>From: zentara <[EMAIL PROTECTED]>
>Sent: Jan 31, 2006 10:32 AM
>To: beginners@perl.org
>Subject: Re: how to share variable across multi-processes
>
>On Tue, 31 Jan 2006 11:38:14 +0800 (GMT+08:00), [EMAIL PROTECTED]
>(Jeff Pang) wrote:
>
>>I have seen this doc:
>>http://search.cpan.org/~bsugars/IPC-Shareable-0.60/lib/IPC/Shareable.pm
>>I'm interested in using the IPC::Shareable module. Does this method suit
>>my situation? Many socket requests come into my socket program all the
>>time, and when each request comes in, I fork a child process to handle
>>it. In the child process, I tie and lock the global variable %hash and
>>make modifications to it.
>
>Hi, see if it works for you.  Be sure to read the README section for
>"known problems".  There is also ShareLite.
>
>#!/usr/bin/perl -w
>use strict;
>use IPC::ShareLite;
>#--
>#-- server.pl
>#--
>my $share = new IPC::ShareLite(-key     => 1234,
>                               -create  => 'yes',
>                               -destroy => 'yes' ) or die $!;
>
>$share->store("Stored by sm server");
>
>while(1){sleep(10); print "Created sm 1234\n"}
>
>###############################################################
>
>#!/usr/bin/perl -w
>use strict;
>use IPC::ShareLite;
>#--
>#-- client.pl
>#--
>my $share = new IPC::ShareLite(-key => 1234, 
>                               -create => 'no',
>                               -destroy => 'no') or die $!;
>
>print "Get this from server: ",$share->fetch,"\n";
>############################################################
>
>>Since the actions of 'fork' and 'tie' happen so frequently, is there any
>>performance drawback to the program? Thanks again.
>
>Well, in reality, probably no one reading this list knows for sure.
>Set up your script and run it, and see if it seems to bog down.
>
>IPC thru shared memory is the fastest available, but it can cause some
>odd underlying problems (which you may, or may not see). The problem
>comes from the strict buffer sizes when using shared memory. In Perl
>we are used to saying "store this", and we know Perl handles it
>auto-magically. 
>But when using shared memory, if the data is bigger than the memory
>segment assigned to store it, you may get bad results, ranging from your
>data being truncated, to it overrunning the data in
>the adjacent segment.  You can also get extra hex characters appended
>to your data, if your data is shorter than the segment size. Now I don't
>know how the various modules handle this, but it is a big problem.
>Which is why I usually just go with threads and shared variables,
>although I usually don't care too much about speed, and I seldom
>have to deal with "forking-on-demand". Threads are better suited when
>you know the max number of threads to be used, then you can declare
>all the shared variables.
>
>So if you can be sure that your data won't exceed the predefined shared
>memory segment sizes, it will probably work well for you. You also could
>work out a scheme to save long data across multiple segments.
>
>
>
>
>-- 
>I'm not really a human, but I play one on earth.
>http://zentara.net/japh.html
>
>-- 
>To unsubscribe, e-mail: [EMAIL PROTECTED]
>For additional commands, e-mail: [EMAIL PROTECTED]
><http://learn.perl.org/> <http://learn.perl.org/first-response>
>
>


--
http://home.earthlink.net/~pangj/
