Mark Bennett <mark.benn...@public.co.nz> wrote:

> A big file created in /tmp can have fatal consequences.
>
> While testing disk performance I have encountered what appears to be a 
> vulnerability/limitation/unexpected result.
>
> The test was done on oi_147, but the behavior is likely the same on all nv 
> releases, and possibly on Solaris as well. 
>
> Test Scenario:
>
> Make a big file in /tmp
>
>     /usr/gnu/bin/dd if=/dev/zero of=/tmp/1 bs=1k count=500000
>
> This size is fine, producing a 512 MB file.
>
> Increasing the count to 5000000 results in an unresponsive system that never 
> recovers.

This is a general Solaris problem that has been present for a long time.

The problem is twofold: paging back in from /tmp is painfully slow, and in 
addition, writing to /tmp pages out the text pages of running programs.
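One way to bound the damage (my suggestion, not something the report above 
tried) is to cap the tmpfs mount so that a runaway file in /tmp hits ENOSPC 
instead of consuming all anonymous memory. On Solaris-derived systems the 
size mount option in /etc/vfstab does this:

    # /etc/vfstab entry capping /tmp at 512 MB (the size is a
    # hypothetical example; choose a value well below physical RAM)
    swap    -       /tmp    tmpfs   -       yes     size=512m

With such a cap, the dd above fails with "No space left on device" once the 
limit is reached, rather than driving the whole machine into paging.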

Even a real-time program like cdrecord cannot guarantee real-time response 
if you, for example, put a CD/DVD/Blu-ray disc image into /tmp (when the 
image is larger than the available RAM) and then try to write that image to 
an optical medium.
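A practical workaround for the cdrecord case (again a suggestion, with a 
hypothetical image path and a placeholder device address) is to stage large 
images on disk-backed storage such as /var/tmp instead of the swap-backed 
/tmp:

    # find the drive address; the output varies per system
    cdrecord -scanbus
    # burn from a disk-backed path so tmpfs never holds the image;
    # dev=1,0,0 is a placeholder taken from the -scanbus output
    cdrecord -v speed=4 dev=1,0,0 /var/tmp/image.iso

This keeps the image out of anonymous memory entirely, so writing it cannot 
page out the text pages of the writer itself.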

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       j...@cs.tu-berlin.de                (uni)  
       joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily