Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-14 Thread Roch - PAE

Right on. And you might want to capture this in a blog for
reference. The permalink will be quite useful.

We did have a use case for zil synchronicity, which was a
big user-controlled transaction:

turn zil off
do tons of things to the filesystem
big sync
turn zil back on
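
On current bits the closest approximation is the global (and unsupported)
zil_disable tunable; a minimal sketch, assuming a scratch system where losing
in-flight synchronous data is acceptable, and noting that the setting is
system-wide and may only take effect for file systems mounted afterwards:

    # sketch only: zil_disable is global and unsupported; behaviour varies by build
    echo zil_disable/W0t1 | mdb -kw    # turn zil off (may need a remount to apply)
                                       # ... do tons of things to the filesystem ...
    sync                               # big sync: push pending writes to the pool
    echo zil_disable/W0t0 | mdb -kw    # turn zil back on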


[ ] Rename or remove zil_disable
[x] Implement zil synchronicity.
[ ] I see no problem the way it is currently.


As for a DB, if the log and data are on different pools (our
current best practice) then I guess that DB corruption is
still possible with zil_disable. In the case of a DB on a
single pool but on different filesystems, be sure to use the
same setting for both.

Notification of the completion of a transaction may also leave
the bounds of the host system. Never use zil_disable there.

This last issue applies to an NFS server. I have a blog entry
coming up on that.

-r

Anton B. Rang writes:
   Also, (Richard can address this better than I) you may want to disable
   the ZIL or have your array ignore the write cache flushes that ZFS issues.
  
  The latter is quite a reasonable thing to do, since the array has
  battery-backed cache. 
  
  The ZIL should almost *never* be disabled. The only reason I can
  think of is to determine whether a performance issue is caused by the
  ZIL.
  
  Disabling the ZIL does not only disable the intent log; it also causes
  ZFS to renege on the contract that fsync(), O_SYNC, and friends ensure
  that data is safely stored. A mail server, for instance, relies on
  this contract to ensure that a message is on disk before acknowledging
  its reception; if the ZIL is disabled, incoming messages can be lost
  in the event of a system crash. A database relies on this contract to
  ensure that its log is on disk before modifying its tables; if the ZIL
  is disabled, the database may be damaged and unrecoverable in the event
  of a system crash. 
  
  The ZIL is a necessary part of ZFS. Just because the ZFS file
  structure will be consistent after a system crash even with the ZIL
  disabled does not mean that disabling it is safe! 
   
   



Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-14 Thread Bill Sommerfeld
On Thu, 2006-12-14 at 11:33 +0100, Roch - PAE wrote:
 We did have a use case for zil synchronicity which was a 
 big user controlled transaction :
 
   turn zil off
   do tons of thing to the filesystem.
   big sync
   turn zil back on

Yep.  The bulk of the heavy lifting on systems I run with ZFS is
conceptually of this form -- nightly builds of the Solaris ON
consolidation.  Some of the tools used within the build may call fsync()
-- and this may be appropriate when they're operating on their own, but
within the context of the build, the fsync() is wasted effort that may
cause CPUs to go idle.
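
A rough way to see which tools are doing it -- a sketch only, and assuming
fsync(3C) enters the kernel as the fdsync syscall on the build machine:

    # count fsync activity by program name while the build runs (sketch)
    dtrace -n 'syscall::fdsync:entry { @[execname] = count(); }'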

Similarly, the bulk of the synchronous I/O done during the import of SMF
manifests early in boot after an install or upgrade is wasted effort.

- Bill



Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-14 Thread Casper . Dik

Bill Sommerfeld wrote:
 Similarly, the bulk of the synchronous I/O done during the import of SMF
 manifests early in boot after an install or upgrade are wasted effort..

 I've done hundreds of installs.  Empirically, my observation is that
 the SMF manifest import scales well with processors.  In other words,
 I don't notice it being I/O bound.  I suppose I could move my dtrace
 boot analysis scripts to the first boot and verify...

My observation is not the same; I see it scaling with CPU speed.

It's not synchronous I/O; it's synchronous door calls to svc.configd,
hundreds of them.
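
One rough way to confirm that on a first boot is to count the door calls per
process while the manifests are being imported -- a sketch only, and which
process shows up doing the calls will depend on the release:

    # count door_call syscalls by program name during manifest import (sketch)
    dtrace -n 'syscall::door_call:entry { @[execname] = count(); }'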

Casper


[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-13 Thread Kory Wheatley
The LUNs will be on separate SPA controllers, not all on the same controller,
so that's why I thought that if we split our data across different disks and
ZFS storage pools we would get better I/O performance.  Correct?
 
 


Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-13 Thread Richard Elling

Kory Wheatley wrote:
 The LUNs will be on separate SPA controllers, not all on
 the same controller, so that's why I thought that if we split
 our data across different disks and ZFS storage pools we would
 get better I/O performance.  Correct?


The way to think about it is that, in general, for best
performance, you want all parts of the system operating
concurrently and the load spread randomly.  This leads to
designs with one zpool with multiple LUNs.
 -- richard


[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Anton B. Rang
Are you looking purely for performance, or for the added reliability that ZFS 
can give you?

If the latter, then you would want to configure across multiple LUNs in either 
a mirrored or RAID configuration. This does require sacrificing some storage in 
exchange for the peace of mind that any “silent data corruption” in the array 
or storage fabric will be not only detected but repaired by ZFS.
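
As a rough illustration of that kind of layout (the pool name and device names
here are hypothetical -- substitute whatever LUNs the array presents), mirroring
pairs of LUNs, or grouping them into a raidz set, would look something like:

    # mirror pairs of array LUNs so ZFS can repair any corruption it detects
    # (at the cost of half the raw capacity)
    zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0

    # or a single-parity raidz set, which gives back more usable capacity
    zpool create tank raidz c2t0d0 c2t1d0 c3t0d0 c3t1d0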

From a performance point of view, what will work best depends greatly on your 
application I/O pattern, how you would map the application’s data to the 
available ZFS pools if you had more than one, how many channels are used to 
attach the disk array, etc.  A single pool can be a good choice from an 
ease-of-use perspective, but multiple pools may perform better under certain 
types of load (for instance, there’s one intent log per pool, so if the intent 
log writes become a bottleneck then multiple pools can help). This also 
depends on how the LUNs are configured within the EMC array.

If you can put together a test system, and run your application as a benchmark, 
you can get an answer. Without that, I don’t think anyone can predict which 
will work best in your particular situation.
 
 


Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Neil Perrin

 Are you looking purely for performance, or for the added reliability that ZFS
 can give you?

 If the latter, then you would want to configure across multiple LUNs in either
 a mirrored or RAID configuration. This does require sacrificing some storage in
 exchange for the peace of mind that any “silent data corruption” in the array
 or storage fabric will be not only detected but repaired by ZFS.


 From a performance point of view, what will work best depends greatly on your
 application I/O pattern, how you would map the application’s data to the
 available ZFS pools if you had more than one, how many channels are used to
 attach the disk array, etc.  A single pool can be a good choice from an
 ease-of-use perspective, but multiple pools may perform better under certain
 types of load (for instance, there’s one intent log per pool, so if the intent
 log writes become a bottleneck then multiple pools can help).


Bad example, as there's actually one intent log per file system!


 This also depends on how the LUNs are configured within the EMC array.

 If you can put together a test system, and run your application as a benchmark,
 you can get an answer. Without that, I don’t think anyone can predict which
 will work best in your particular situation.



[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Kory Wheatley
We're looking for pure performance.

What the LUNs will contain is student user account files that they will
access, and department share files: MS Word documents, Excel files, and PDFs.
There will be no applications on the ZFS storage pools.  Does this help with
what strategy might be best?
 
 


[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Kory Wheatley
Also there will be no NFS services on this system.
 
 


[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Anton B. Rang
 We're looking for pure performance.

 What the LUNs will contain is student user account files
 that they will access, and department share files:
 MS Word documents, Excel files, and PDFs.  There will be
 no applications on the ZFS storage pools.  Does this help
 with what strategy might be best?

I think so.

I would suggest striping a single pool across all available LUNs, then. (I'm
presuming that you would be prepared to recover from ZFS-detected errors by
reloading from backup.) There doesn't seem to be any compelling reason to split
your storage into multiple pools, and by using a single pool, you don't have to
worry about reallocating storage if one pool fills up while another has free
space.
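
To make that concrete, a minimal sketch of a single striped pool with separate
file systems for the two kinds of data; the pool name, device names, and quota
value are all hypothetical -- use whatever LUNs the EMC array presents:

    # one pool, dynamically striped across all of the array LUNs
    zpool create tank c2t0d0 c2t1d0 c3t0d0 c3t1d0

    # separate file systems for student home directories and department shares
    zfs create tank/students
    zfs create tank/shares
    zfs set quota=500g tank/students   # optional: keep one area from filling the pool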
 
 