Re: File::Cache problem
At 11:56 AM 02/07/01 +0400, BeerBong wrote:

> And when cache size is exceeded, all mod_perl processes hang.

I had this happen to me a few days back on a test server. I thought I'd made a mistake by doing an "rm -rf /tmp/File::Cache" while the server was running (and while the File::Cache object was persistent).

> And another question (I'm using Linux Debian Potato)... Is there a way
> to determine the params of the currently executing request?

You could use a USR2 handler in your code, if that's what you are asking. This is what my spinning httpd said the other day:

  [DIE] USR2 at /data/_g/lii/perl_lib/lib/5.00503/File/Spec/Unix.pm line 57
    File::Spec::Unix::catdir('File::Spec', '') called at /data/_g/lii/perl_lib/lib/5.00503/File/Spec/Functions.pm line 41
    File::Spec::Functions::__ANON__('') called at /data/_g/lii/perl_lib/lib/site_perl/5.005/File/Cache.pm line 862
    File::Cache::_GET_PARENT_DIRECTORY('/') called at /data/_g/lii/perl_lib/lib/site_perl/5.005/File/Cache.pm line 962

But I haven't seen it happen since then.

Bill Moseley
mailto:[EMAIL PROTECTED]
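[Editor's note: a USR2 handler like the one Bill describes can be sketched as below. This is a minimal standalone illustration, not Bill's actual code; in mod_perl you would install the handler once at server startup (e.g. in startup.pl) and then run "kill -USR2 <pid>" against the spinning child.]

```perl
#!/usr/bin/perl
# Sketch: a USR2 handler that dumps a stack backtrace showing what the
# process was doing when it received the signal.
use strict;
use Carp ();

$SIG{USR2} = sub {
    # confess() dies with a full backtrace -- this is what produces
    # output like the "[DIE] USR2 at ..." trace quoted above.
    Carp::confess("caught USR2");
};

sub inner { kill 'USR2', $$ }   # stand-in for the hung code path

# Self-test outside Apache: signal ourselves and catch the backtrace.
eval { inner() };
print $@;
```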
Re: File::Cache problem
On Wed, Feb 07, 2001 at 04:02:39PM +0400, BeerBong wrote:

> Cache size after 24 hours of working:
>   via 'du -s'                                                  - 53M
>   via 'perl -MFile::Cache -e"print File::Cache::SIZE('PRTL')"' - 10M

That looks like a serious bug. Can you double-check on that? Where are you running the "du -s"?

The SIZE( ) method returns the total size of the objects in the cache, not including the overhead of the directories themselves. As the documentation explains, it isn't a perfect mechanism, but it should be a lot closer than what you experienced.

> What is the best way to limit the cache? Maybe running
> File::Cache::REDUCE_SIZE(10485760) from cron.daily would be a better
> solution? In that case max_size wouldn't need to be specified in the
> object initialization params...

I personally prefer using a cron job to periodically limit the size of the cache. I don't like using "max_size" because of the overhead involved on every set. I'd rather just fine-tune the number of times a cron script is executed (i.e., once an hour vs. once a day).

> Another kind of problem appeared recently. In the logs:
>
> [Wed Feb 7 15:42:57 2001] [error] Couldn't rename /tmp/File::Cache/.description.tmp13984 to /tmp/File::Cache/.description at /usr/web/inc/WebCache.pm line 15
> [Wed Feb 7 15:44:37 2001] [error] Couldn't rename /tmp/File::Cache/apache/PRTL/b/5/a/b5a261aa13cf0ddfd41beb7d64960248.tmp13441 to /tmp/File::Cache/apache/PRTL/b/5/a/b5a261aa13cf0ddfd41beb7d64960248 at /usr/web/inc/WebCache.pm line 87
>   ...propagated at /usr/web/inc/Apache/Portal.pm line 109.
>
> What can I do?

Unfortunately the error message doesn't tell me enough. If you can, please modify lines 1578 and 1579 of Cache.pm to read as follows:

  rename ($temp_filename, $filename) or
    croak("Couldn't rename $temp_filename to $filename: $!");

The next time it hangs, the error message will have more information. I'll fix that in the next version of File::Cache.

> And another question (I'm using Linux Debian Potato)...
> Is there a way to determine the params (URI, script, etc.) of the
> request a hanging mod_perl process is currently serving? I browsed
> /proc/[PID of mod_perl]/, but I think this info is about the first
> request. Am I wrong?

Are you looking in /proc/[pid]/environ for the request parameters? I'm surprised this information was available even for the first request. I don't believe environ is updated dynamically (I think it is filled out once on the exec of the process). It certainly won't provide the information on subsequent requests. In any case, the format and behavior of /proc is so dependent on the particular version of the Linux kernel that I wouldn't want to rely on it for anything more than occasional debugging.

Two options for you: one would be to log every request via warn. It will fill your logs very quickly, but hopefully the server will hang soon enough that you can get the debugging information out. Another is to compile Apache and mod_perl with debugging symbols and attach to a hung process with "gdb". I haven't tried this in a long time, though. Please see the current thread about using gdb.

Best of luck!

-DeWitt
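[Editor's note: DeWitt's "log every request via warn" option could look roughly like the handler below, for mod_perl 1.x. The package name My::RequestLogger and the PerlInitHandler hookup are illustrative assumptions, not code from the thread.]

```perl
# Sketch: log every request URI so that the last such line in the
# error_log identifies the request a hung child was serving.
package My::RequestLogger;
use strict;

sub handler {
    my $r = shift;    # the Apache request object
    # warn() output goes to Apache's error_log with a timestamp.
    warn sprintf "[pid %d] request: %s\n", $$, $r->uri;
    return 0;         # Apache::Constants::OK
}

1;

# In httpd.conf (assumed setup):
#   PerlInitHandler My::RequestLogger
```

After a hang, "grep 'pid <PID>' error_log | tail -1" shows the last request that child accepted.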
Re[2]: File::Cache problem
Hello DeWitt,

Wednesday, February 07, 2001, 6:51:24 PM, you wrote:

DC On Wed, Feb 07, 2001 at 04:02:39PM +0400, BeerBong wrote:
DC > Cache size after 24 hours of working:
DC >   via 'du -s'                                                  - 53M
DC >   via 'perl -MFile::Cache -e"print File::Cache::SIZE('PRTL')"' - 10M
DC That looks like a serious bug. Can you double check on that? Where
DC are you running the "du -s"?

maul:/tmp/File::Cache/apache# ls
PRTL  PRTLDev
maul:/tmp/File::Cache/apache# du -s *
27924   PRTL
536     PRTLDev
maul:/tmp/File::Cache/apache# perl -MFile::Cache -e"print File::Cache::SIZE('PRTL')"
4178662
maul:/tmp/File::Cache/apache# perl -MFile::Cache -e"File::Cache::REDUCE_SIZE(100,'PRTL')"
maul:/tmp/File::Cache/apache# perl -MFile::Cache -e"print File::Cache::SIZE('PRTL')"
4178662
maul:/tmp/File::Cache/apache#

DC The SIZE( ) method returns the total size of the objects in the cache,
DC not including the overhead of directories themselves. As the
DC documentation explains, it isn't a perfect mechanism, but it should be
DC a lot closer than you experienced.

I understand that, but that is not the problem. Why is the size of the cache not reduced? There are no warnings -- just nothing happens.

DC > What is the best way to limit the cache? Maybe running
DC > File::Cache::REDUCE_SIZE(10485760) from cron.daily would be a better
DC > solution? In that case max_size wouldn't need to be specified in the
DC > object initialization params...
DC I personally prefer using a cron job to periodically limit the size of
DC the cache. I don't like using "max_size" because of the overhead
DC involved on every set. I'd rather just fine tune the number of times
DC a cron script is executed (i.e., once an hour vs. once a day).

Another kind of problem appeared recently.
> In the logs:
>
> [Wed Feb 7 15:42:57 2001] [error] Couldn't rename /tmp/File::Cache/.description.tmp13984 to /tmp/File::Cache/.description at /usr/web/inc/WebCache.pm line 15
> [Wed Feb 7 15:44:37 2001] [error] Couldn't rename /tmp/File::Cache/apache/PRTL/b/5/a/b5a261aa13cf0ddfd41beb7d64960248.tmp13441 to /tmp/File::Cache/apache/PRTL/b/5/a/b5a261aa13cf0ddfd41beb7d64960248 at /usr/web/inc/WebCache.pm line 87
>   ...propagated at /usr/web/inc/Apache/Portal.pm line 109.
>
> What can I do?

DC Unfortunately the error message doesn't tell me enough. If you can,
DC please modify lines 1578 and 1579 of Cache.pm to read as follows:
DC
DC   rename ($temp_filename, $filename) or
DC     croak("Couldn't rename $temp_filename to $filename: $!");
DC
DC The next time it hangs, the error message will have more information.
DC I'll fix that in the next version of File::Cache.

Done... Waiting for errors now.

Sergey Polyakov aka "BeerBong"
Chief of WebZavod
http://www.webzavod.ru
Tel. +7 (8462) 43-93-85 | +7 (8462) 43-93-86
mailto:[EMAIL PROTECTED]
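[Editor's note: the cron-job approach DeWitt recommends could be wired up with a crontab entry along these lines. This is a sketch: the hourly schedule is arbitrary; the 10 MB limit and the 'PRTL' namespace come from the thread, and REDUCE_SIZE takes the target size followed by the namespace, matching the transcript above.]

```
# m h dom mon dow  command
# Prune the PRTL cache down to 10 MB once an hour.
0 * * * *  perl -MFile::Cache -e 'File::Cache::REDUCE_SIZE(10485760, "PRTL")'
```

With this in place, max_size can be dropped from the constructor params, avoiding the per-set overhead DeWitt mentions.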
File::Cache problem
Hello modperl,

There is an Apache Perl module which uses File::Cache. The File::Cache object is initialized in every request with these params:

  namespace    PRTL
  max_size     10485760
  expires_in   86400
  cache_depth  3

Cache size after 24 hours of working:
  via 'du -s'                                                  - 53M
  via 'perl -MFile::Cache -e"print File::Cache::SIZE('PRTL')"' - 10M

The count of cache nodes is several thousand. And when the cache size is exceeded, all mod_perl processes hang. In the logs:

  [Wed Feb 7 08:50:50 2001] [error] Undefined subroutine File::Cache::thaw called at /usr/local/lib/site_perl/File/Cache.pm line 601.

What is the best way to limit the cache? Maybe running File::Cache::REDUCE_SIZE(10485760) from cron.daily would be a better solution? In that case max_size wouldn't need to be specified in the object initialization params...

And another question (I'm using Linux Debian Potato)... Is there a way to determine the params (URI, script, etc.) of the request a hanging mod_perl process is currently serving? I browsed /proc/[PID of mod_perl]/, but I think this info is about the first request. Am I wrong?

Sergey Polyakov aka "BeerBong"
Chief of WebZavod
http://www.webzavod.ru
Tel. +7 (8462) 43-93-85 | +7 (8462) 43-93-86
mailto:[EMAIL PROTECTED]
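[Editor's note: the initialization described above would look roughly like the sketch below, based on the File::Cache API of that era (File::Cache was later superseded by the Cache::Cache distribution). The key and value at the end are hypothetical, for illustration only.]

```perl
use strict;
use File::Cache;

# Same params as in the message above.
my $cache = File::Cache->new({
    namespace   => 'PRTL',
    max_size    => 10485760,   # 10 MB
    expires_in  => 86400,      # one day, in seconds
    cache_depth => 3,          # nested subdirectory levels under /tmp/File::Cache
});

# Illustrative usage (hypothetical key and value):
my $html = '<html>...</html>';
$cache->set('page_key', $html);
my $cached = $cache->get('page_key');
```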