It seems to be more than just the FS:
http://www.suse.de/~aj/linux_lfs.html
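If the limit really is a 2 GB off_t in the Squid binary rather than in the filesystem, rebuilding Squid with large file support along the lines of that page may help. This is only a rough sketch, not verified against your Squid release (check ./configure --help first in case it has a dedicated large-files option):

    # rebuild with 64-bit off_t (standard glibc LFS macros)
    CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure <your usual options>
    make && make install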
I would try mkfile or dd to create huge files to test FS limits.
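For example, something like this on the log filesystem will show whether the FS itself can write past 2 GB (the filename is just a placeholder, and it assumes your dd is built with large file support, which recent fileutils are):

    dd if=/dev/zero of=/usr/local/squid/var/logs/lfs-test bs=1M count=2200
    ls -l /usr/local/squid/var/logs/lfs-test
    rm /usr/local/squid/var/logs/lfs-test

If dd gets past 2 GB cleanly, the filesystem is fine and the limit is in Squid itself.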
How about disabling the logs while running the load tests? That would avoid the problem, though it may skew your results, since logging itself costs some performance.
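For instance, you could point the access log at /dev/null for the duration of the test; the exact directive name depends on your Squid version (2.5 uses cache_access_log):

    # squid.conf: drop access logging during the load test
    cache_access_log /dev/null

then run squid -k reconfigure to pick it up.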
Regards, Hendrik
[EMAIL PROTECTED] wrote:
Ahh!
That would make sense.. We're actually using ext3 with RAID 1+0 for the filesystem that carries the logs.. Does reiserfs have the same barrier?
Thanks! jim
Yes, with BIND on ext2 and a misconfigured logrotate :-)
What filesystem are you using? Probably the squid log is hitting an FS-barrier.
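On the logrotate side, a stanza along these lines keeps access.log well under 2 GB; the paths are assumptions taken from the error message below, and it presumes logfile_rotate 0 in squid.conf so that squid -k rotate simply reopens the file after logrotate renames it:

    /usr/local/squid/var/logs/access.log {
        daily
        rotate 7
        compress
        missingok
        postrotate
            /usr/local/squid/sbin/squid -k rotate
        endscript
    }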
[EMAIL PROTECTED] wrote:
Hey everyone!
Quick question.. Twice in a row now, while running load tests, Squid dies
as soon as the access.log file gets to 2 gigs with the following error message:
FATAL: logfileWrite: /usr/local/squid/var/logs/access.log: (0) Success.
Has anyone ever seen this before?
I'm running on an HP DL380 with RHEL 2.1 AS Update 2. There's definitely
more than enough disk space.
Any thoughts?
Thanks! jim
