Ants,
Thanks for the comments. It seems you hit the point. Right now, I can
load 16MB of data into a 32MB segment. But if I try to load more than
32MB, the error comes up. I think it is a semaphore problem, because
the debug log shows that the shmseg is allocated successfully.
When I then tried to untie the variable, it failed due to a
semaphore problem (even though I had specified destroy => 1). I have to
remove the segment via ipcrm. What I still don't quite understand is
what "The Thingy Referenced" means. Would you let me know which man
page you are talking about? Thanks again!
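(When an untie fails like this, the segment and its semaphore set can be inspected and removed by hand with the standard SysV IPC tools — a sketch, assuming a Linux-style system; the IDs in the comments are placeholders to be read off the `ipcs` output:)

```shell
# List the SysV IPC resources currently allocated on the machine.
ipcs -m    # shared memory segments (key, shmid, owner, perms, bytes, nattch)
ipcs -s    # semaphore arrays (key, semid, owner, perms, nsems)

# Remove a stale segment or semaphore set by the ID shown above:
# ipcrm -m <shmid>
# ipcrm -s <semid>
```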

-Martin

>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<

On 10/8/99, 3:06:58 AM, "Anthony Gardner" <[EMAIL PROTECTED]> wrote 
regarding Re: Urgent--Any limit on Hash table in Shared Memory?:


> The only problem I had when creating a hash with loads of data using
> IPC::Shareable was that I constructed the hash incorrectly.

> To begin with, I was using, quoting from the man page, "The Thingy
> Referenced Is Initially False". I was then running out of sems and mem
> chunks. I then changed my attack to "The Thingy Referenced Is
> Initially True". This solved my problem.

> If you've already sussed this then I have no more info for you and
> I've wasted your time with this mail.

> -Ants.


> >From: <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Subject: Urgent--Any limit on Hash table in Shared Memory?
> >Date: Thu, 07 Oct 1999 19:50:50 GMT
> >
> >I used the IPC::Shareable module to construct a nested hash table in
> >shared memory. It worked fine during "on-demand" testing. When I moved
> >from "on-demand" to "preload", an error came up saying "No space
> >left on device". The machine has 0.5GB of memory and most of it is
> >still available. Each entry in the hash table is less than 1K. The
> >logfile I printed out during httpd startup indicates that the error
> >shows up each time the 128th entry is constructed. So I wonder whether
> >there is any limit on a shared hash table and whether there is a
> >way around this problem.
> >
> >Right now, each buffer_size is set to SHMMAX (32M); a total of 4096
> >segments is allowed.
> >Another quick question: is there a way to get info on how many
> >shared memory segments have been allocated and how much physical
> >memory is still available?
> >
> >Any suggestion is appreciated!
> >
> >-Martin
> >

> ______________________________________________________
> Get Your Private, Free Email at http://www.hotmail.com
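(For Martin's questions above about how many segments have been allocated and how much memory is still available, the standard tools already report both — a sketch, assuming a Linux system with util-linux and procps installed:)

```shell
ipcs -u                        # usage summary: segments allocated, pages in use
ipcs -l                        # the limits: max segment size (SHMMAX), max segments
free -m                        # physical memory still available, in MB
cat /proc/sys/kernel/shmmax    # SHMMAX in bytes, tunable via sysctl
```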

