On 1/31/06, zentara <[EMAIL PROTECTED]> wrote:
>
> >Since the actions of 'fork' and 'tie' happen so frequently, is there
> >any performance drawback to the program? Thanks again.
>
> Well, in reality, probably no one reading this list knows for sure.
> Set up your script and run it, and see if it seems to bog down.
>
> IPC thru shared memory is the fastest available, but it can cause some
> odd underlying problems (which you may or may not see). The problem
> comes from the strict buffer sizes when using shared memory. In Perl
> we are used to saying "store this", and we know Perl handles it
> auto-magically.
> But when using shared memory, if the data is bigger than the memory
> segment assigned to store it, you may get bad results, ranging from your
> data being truncated, to it overrunning the data in
> the adjacent segment.  You can also get extra hex characters appended
> to your data, if your data is shorter than the segment size. Now I don't
> know how the various modules handle this, but it is a big problem.
> Which is why I usually just go with threads and shared variables,
> although I usually don't care too much about speed, and I seldom
> have to deal with "forking-on-demand". Threads are better suited when
> you know the maximum number of threads to be used; then you can
> declare all the shared variables up front.
>
> So if you can be sure that your data won't exceed the predefined shared
> memory segment sizes, it will probably work well for you. You also could
> work out a scheme to save long data across multiple segments.
>

Can you explain this a little more? I haven't done much with
IPC::Shareable or IPC::ShareLite, but it looks like I'm about to get
into them for a project, and I have two questions.

First, it's my understanding that Shareable and ShareLite are complete
implementations built on SysV shared memory, not simple wrappers for
the system calls; i.e., they allocate segments for themselves and then
dynamically parcel that space out on demand. The modules should do the
heavy lifting of managing the pointers and return the correct value to
the caller. If you have to manage everything yourself anyway, why not
just use Inline::C to pass the pointers around by hand instead of
paying the overhead of another module load?
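
To be clear about what I mean by "heavy lifting," here's a toy tie
class sketching the kind of bookkeeping I'd expect the modules to do
internally -- a hypothetical illustration, not IPC::Shareable's actual
code: it stores scalars in a fixed-size buffer the way a shared-memory
wrapper must, and hands the caller back a clean value with the padding
stripped.

```perl
package Tie::FixedBuf;
# Toy tie class: a fixed-size "segment" with STORE/FETCH bookkeeping.
# Hypothetical sketch only -- real modules serialize into SysV segments.
use strict;
use warnings;

sub TIESCALAR {
    my ($class, $size) = @_;
    return bless { size => $size, buf => "\0" x $size, len => 0 }, $class;
}

sub STORE {
    my ($self, $val) = @_;
    die "value exceeds segment size" if length($val) > $self->{size};
    # pad to the full segment size, as shmwrite() would
    $self->{buf} = pack("a$self->{size}", $val);
    # record the real length so FETCH can strip the NUL padding
    $self->{len} = length($val);
    return $val;
}

sub FETCH {
    my ($self) = @_;
    return substr($self->{buf}, 0, $self->{len});
}

package main;
tie my $slot, 'Tie::FixedBuf', 16;
$slot = "hello";
print "fetched: '$slot'\n";   # padding stripped: 'hello'
```

If the real modules do roughly this, the caller never sees the
segment-size plumbing at all -- which is what makes me wonder about the
truncation you describe.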

Second, under what circumstances would you get bad data? And are the
read-past and write-past errors you're describing in shared memory
itself, or in the modules' internal data structures? Any modern OS
should return SEGV on any attempt to write past the end of the page.
An attempt to read beyond the end of the page should generate SEGV
too, although most of the time it doesn't; it just returns fewer bytes
than expected, except on recent OpenBSD. Your data still shouldn't be
truncated, though, because if it were too long, the attempted overflow
should have produced a SEGV when the write attempt was made. Or does
IPC::Shareable trap SEGV internally and then return unpredictable
values to the caller? Allowing "extra bytes," i.e. whatever was lying
around in the buffer before, to be passed back to a caller strikes me
as a pretty serious, potentially exploitable flaw, too. Or maybe I'm
misunderstanding what you wrote?
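
FWIW, a quick check with Perl's raw SysV builtins (no module involved)
seems to reproduce the truncate-and-pad behavior zentara describes --
a minimal sketch, assuming a Linux-ish system with SysV IPC available:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

my $SIZE = 8;   # deliberately tiny segment
my $id = shmget(IPC_PRIVATE, $SIZE, IPC_CREAT | 0600);
defined $id or die "shmget: $!";

# Per perldoc -f shmwrite: if STRING is too long, only SIZE bytes
# are used -- the rest is silently dropped, not a SEGV.
shmwrite($id, "this is much longer than eight bytes", 0, $SIZE)
    or die "shmwrite: $!";
my $buf = '';
shmread($id, $buf, 0, $SIZE) or die "shmread: $!";
print "long write came back as: '$buf'\n";   # only the first 8 bytes

# If STRING is too short, it is padded with NULs to fill SIZE bytes,
# and the reader gets those back unless something strips them --
# presumably the "extra hex characters" mentioned above.
shmwrite($id, "hi", 0, $SIZE) or die "shmwrite: $!";
shmread($id, $buf, 0, $SIZE) or die "shmread: $!";
printf "short write came back as: %s\n", unpack("H*", $buf);

shmctl($id, IPC_RMID, 0);
```

So at the builtin level it's the documented truncate/pad behavior
doing the damage, not an out-of-bounds access; whether the modules
compensate for it is exactly what I'm asking.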

Thanks,

-- jay
--------------------------------------------------
This email and attachment(s): [  ] blogable; [ x ] ask first; [  ]
private and confidential

daggerquill [at] gmail [dot] com
http://www.tuaw.com  http://www.dpguru.com  http://www.engatiki.org

values of β will give rise to dom!
