BTW, the high water mark approach is not perfect either; here is some background 
from Novell support on water mark tuning...
best,
z

http://www.novell.com/coolsolutions/tools/16991.html

Based on my own belief that there had to be a "better way" and the number of 
issues I'd seen reported in the Support Forums, I spent a lot of time 
researching how different memory settings affect the memory management and 
stability of the server. Based on that research I've made memory tuning 
recommendations to a large number of forum posters who were having memory 
tuning issues, and most of them have found their servers to be significantly 
more stable since applying the changes I recommended.

What follows are the formulas I developed for recommending memory tuning 
changes to a server. The formulas take a number of the values available from 
SEG.NLM (available from: http://www.novell.com/coolsolutions/tools/14445.html). 
To get the required values, load SEG.NLM, then from the main screen do '/', 
then 'Info', then 'Write SEGSTATS.TXT'. The SEGSTATS.TXT file will be created 
in SYS:SYSTEM.

SEG monitors the server and records a number of key memory statistics; my 
formulas take those statistics and recommend manual memory tuning parameters.

Note that because these are manual settings, auto tuning is disabled; if the 
server's memory usage changes significantly, the server will need to be retuned 
to reflect that change.

Also, after making the changes to use manual rather than auto tuning, the 
server may still recommend that the FCMS and "-u" memory settings be changed. 
These recommendations can be ignored. Following them will have the same effect 
as auto tuning, except you're doing it rather than the server doing it 
automatically - the same problems will still occur.
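
As an aside, the "show 900GB of a 1000GB pool" idea from the quoted thread below 
can be roughly approximated today by parking a reservation on an empty dataset. 
Here's a minimal sketch of the idea in Python; the pool name "tank", the dataset 
name "tank/spare" and the 10% headroom figure are just my assumptions, and it 
presumes a zpool/zfs build that understands parsable (-p) output:

#!/usr/bin/env python
# Rough sketch only -- "tank", "tank/spare" and the 10% headroom are assumptions.
# Carve out a reservation on an empty dataset so the other datasets in the pool
# can never push it past ~90% full, i.e. a manually maintained high water mark.

import subprocess

POOL = "tank"        # assumed pool name
HEADROOM = 0.10      # fraction of the pool to keep free

def pool_size_bytes(pool):
    # 'zpool list -H -p -o size' prints the pool size in bytes with no header
    # (needs a zpool that supports parsable -p output).
    out = subprocess.check_output(["zpool", "list", "-H", "-p", "-o", "size", pool])
    return int(out.strip())

def reserve_headroom(pool, fraction):
    reserve = int(pool_size_bytes(pool) * fraction)
    # An empty dataset with a reservation consumes the space without holding any
    # data, so every other dataset in the pool sees that much less free space.
    subprocess.check_call(["zfs", "create", "-o",
                           "reservation=%d" % reserve, "%s/spare" % pool])
    return reserve

if __name__ == "__main__":
    print("reserved %d bytes on %s/spare" % (reserve_headroom(POOL, HEADROOM), POOL))

If the pool later grows, or the space is genuinely needed, just adjust the 
reservation or destroy the spare dataset - no retuning magic involved.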

  ----- Original Message ----- 
  From: Tim 
  To: Nicholas Lee 
  Cc: zfs-discuss@opensolaris.org ; Sam 
  Sent: Wednesday, January 07, 2009 12:02 AM
  Subject: Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

  On Tue, Jan 6, 2009 at 10:25 PM, Nicholas Lee <emptysa...@gmail.com> wrote:

    Since zfs is so smart in other areas, is there a particular reason why a 
high water mark is not calculated and the available space not reset to this?


    I'd far rather have a 1000GB zpool that said it only had 900GB but did not 
corrupt itself as it ran out of space.


    Nicholas


  WHAT??!?  Put artificial limits in place to prevent users from killing 
themselves?  How did that go, Jeff?

  "I suggest that you retire to the safety of the rubber room while the rest of 
us enjoy these zfs features. By the same measures, you would advocate that 
people should never be allowed to go outside due to the wide open spaces.  
Perhaps people will wander outside their homes and forget how to make it back.  
Or perhaps there will be gravity failure and some of the people outside will be 
lost in space."

  It's NEVER a good idea to put a default limitation in place to protect a 
*regular user*.  If they can't RTFM from front cover to back, they don't deserve 
to use a computer.

  --Tim



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
