Mark Bennett <[email protected]> wrote:
> A big file created in /tmp can have fatal consequences.
>
> While testing disk performance I have encountered what appears to be a
> vulnerability/limitation/unexpected result.
>
> The test was done on oi_147, but is likely to be the same on all nv
> releases, and possibly in Solaris as well.
>
> Test Scenario:
>
> Make a big file in /tmp:
>
>     /usr/gnu/bin/dd if=/dev/zero of=/tmp/1 bs=1k count=500000
>
> This size is fine, producing a 512 MB file.
>
> Increasing the count to 5000000 results in an unresponsive system that
> never recovers.
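[For reference: on Solaris-derived systems the standard guard against this
is the tmpfs "size=" mount option, which caps how much anonymous memory
/tmp may consume. A minimal sketch for /etc/vfstab; the 512m cap is an
arbitrary example value, not something taken from this thread:

    #device   device   mount  FS     fsck  mount    mount
    #to mount to fsck  point  type   pass  at boot  options
    swap      -        /tmp   tmpfs  -     yes      size=512m

With such a cap in place, the second dd run should fail with "No space
left on device" once /tmp fills up, instead of driving the whole machine
into paging.]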
This is a general Solaris problem that has been present for a long time.
The problem is that paging back in from /tmp is painfully slow, and in
addition, writing to /tmp pages out the text pages of running programs.

Even a real-time program like cdrecord cannot guarantee real-time
responses if you e.g. put a CD/DVD/BluRay disk image into /tmp (in case
this image is larger than the available RAM) and then try to write this
image to an optical medium.

Jörg

-- 
 EMail: [email protected]            (home) Jörg Schilling D-13353 Berlin
        [email protected]            (uni)
        [email protected] (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
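[The defensive measure a real-time program can take against having its own
pages evicted is to wire them down. A minimal C sketch, assuming the
process runs with sufficient privilege (root, or PRIV_PROC_LOCK_MEMORY on
Solaris); this illustrates the technique only and is not cdrecord's actual
code:

    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            /*
             * Lock all current and future pages (text, data, stack)
             * into RAM, so that heavy tmpfs write activity cannot
             * page them out behind the program's back.
             */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                    perror("mlockall");
                    exit(1);
            }
            /* ... the time-critical work would run here ... */
            return (0);
    }

Note that this only protects the program itself: the image data sitting in
/tmp still has to be paged in through the slow path described above, which
is why even a locked-down real-time writer cannot meet its deadlines in
this scenario.]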
