Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-08 Thread Roch - PAE
Manoj Joseph writes:
  Matt B wrote:
 Any thoughts on the best practice points I am raising? It disturbs me
 that it would make a statement like "don't use slices for
 production."
  
  ZFS turns on the write cache on the disk if you give it the entire disk to
  manage. It is good for performance. So, you should use whole disks
  whenever possible.
  

Just a small clarification: the extra performance that comes from
having the write cache on applies mostly to disks that do not have
other means of command concurrency (NCQ, CTQ). With NCQ/CTQ, the
write cache setting should not matter much to ZFS performance.

-r

  Slices work too, but write cache for the disk will not be turned on by zfs.
  
  Cheers
  Manoj



Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Robert Milkowski
Hello Manoj,

Thursday, March 8, 2007, 7:10:57 AM, you wrote:

MJ Ayaz Anjum wrote:

 2. With zfs mounted on one cluster node, I created a file and kept
 updating it every second. Then I removed the FC cable; the writes still
 continued to the file system. After 10 seconds I put the FC cable back
 and my writes continued; no failover of zfs happened.

 It seems that all I/O is going to some cache. Any suggestions on what's
 going wrong here and what the solution is?

MJ I don't know for sure. But my guess is, if you do a fsync after the 
MJ writes and wait for the fsync to complete, then you might get some 
MJ action. fsync should fail. zfs could panic the node. If it does, you 
MJ will see a failover.

Exactly.
Files must be opened with O_DSYNC, or fdsync should be used.
If you don't, writes are expected to be buffered and only written to
disk later, which in your case is bound to fail.

If you want to guarantee that when your application writes something
it is on stable storage, then use the proper semantics shown above.
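
For example, a minimal sketch of what I mean (untested, and the file
name and payload are made up for illustration): either open with
O_DSYNC so each write() returns only after the data is on stable
storage, or call fsync()/fdatasync() after buffered writes and check
its return value.

/* Minimal sketch: synchronous write semantics (Solaris/POSIX).
 * Illustrative only; path and payload are made up.
 * Build with something like: cc -o syncwrite syncwrite.c */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        const char buf[] = "heartbeat\n";

        /* O_DSYNC: write() returns only once the data is on stable storage. */
        int fd = open("/tank/heartbeat.log",
            O_WRONLY | O_CREAT | O_APPEND | O_DSYNC, 0644);
        if (fd == -1) {
                perror("open");
                return (1);
        }

        if (write(fd, buf, sizeof (buf) - 1) == -1) {
                perror("write");   /* with the FC path gone, this should fail */
                return (1);
        }

        /* Without O_DSYNC you would instead call fsync(fd) (or fdatasync(fd))
         * after the buffered writes and check its return value; that is
         * where a dead path gets reported back to the application. */
        if (fsync(fd) == -1) {
                perror("fsync");
                return (1);
        }

        (void) close(fd);
        return (0);
}

Either way the error surfaces in the application, instead of the data
silently sitting in the cache while the path is down.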


-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-08 Thread Robert Milkowski
Hello Matt,

Wednesday, March 7, 2007, 7:31:14 PM, you wrote:

MB So it sounds like the consensus is that I should not worry about using
MB slices with ZFS, and the swap best practice doesn't really apply to my
MB situation of a 4-disk X4200.

MB So in summary (please confirm), this is what we are saying is a
MB safe bet for use in a highly available production environment?

MB With 4x73GB disks yielding 70GB usable each:

MB 5GB for root, which is UFS and mirrored 4 ways using SVM.
MB 8GB for swap, which is raw and mirrored across the first two disks
MB (optional: or skip LiveUpgrade and 4-way mirror this swap partition).
MB 8GB for LiveUpgrade, which is mirrored across the third and fourth disks.
MB This leaves 57GB of free space on each of the 4 disks, in slices.
MB One zfs pool will be created containing the 4 slices:
MB the first two slices will be used in a zmirror yielding 57GB,
MB the last two slices will be used in a zmirror yielding 57GB,
MB then a zstripe (raid0) will be laid over the two zmirrors,
MB yielding 114GB of usable space while able to survive two drive failures
MB (one in each mirror) without data loss.

Alternatively, if you care about how much storage is available, then:

1. 8GB on two disks for / in a mirrored config (SVM)
2. 8GB on another two disks for swap in a mirrored config (SVM)
3. the rest of the disks for zfs

   a. raidz2 over 4 slices: capacity of 2x a slice, poor random read
      performance
   b. raid-10 over 4 slices: capacity of 2x a slice, good read performance,
      less reliability than a.


You lose the ability to do Live Upgrade, but you gain some storage.

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Selim Daoud

Robert,
this applies only if you have full control over the application, for sure.
But how do you do it if you don't own the application? Can you
mount zfs with a forcedirectio flag?
Selim


On 3/8/07, Robert Milkowski [EMAIL PROTECTED] wrote:

Hello Manoj,

Thursday, March 8, 2007, 7:10:57 AM, you wrote:

MJ Ayaz Anjum wrote:

 2. With zfs mounted on one cluster node, I created a file and kept
 updating it every second. Then I removed the FC cable; the writes still
 continued to the file system. After 10 seconds I put the FC cable back
 and my writes continued; no failover of zfs happened.

 It seems that all I/O is going to some cache. Any suggestions on what's
 going wrong here and what the solution is?

MJ I don't know for sure. But my guess is, if you do a fsync after the
MJ writes and wait for the fsync to complete, then you might get some
MJ action. fsync should fail. zfs could panic the node. If it does, you
MJ will see a failover.

Exactly.
Files must be opened with O_DSYNC, or fdsync should be used.
If you don't, writes are expected to be buffered and only written to
disk later, which in your case is bound to fail.

If you want to guarantee that when your application writes something
it is on stable storage, then use the proper semantics shown above.


--
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re[4]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Robert Milkowski
Hello Selim,

Thursday, March 8, 2007, 8:08:50 PM, you wrote:

SD Robert,
SD this applies only if you have full control over the application, for sure.
SD But how do you do it if you don't own the application? Can you
SD mount zfs with a forcedirectio flag?

No

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Roch Bourbonnais


On 8 Mar 07 at 20:08, Selim Daoud wrote:


Robert,
this applies only if you have full control over the application, for sure.
But how do you do it if you don't own the application? Can you
mount zfs with a forcedirectio flag?
Selim



UFS directio and O_DSYNC are different things.
Would a forcesync flag be something of interest to the community?
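
To make the difference concrete, a rough Solaris sketch (untested,
paths made up): directio() is only page-cache advice to the file
system (UFS/NFS) and adds no durability, while O_DSYNC changes what
write() itself guarantees.

/* Illustrative only: directio(3C) advice vs. O_DSYNC durability. */
#include <sys/types.h>
#include <sys/fcntl.h>          /* directio(), DIRECTIO_ON */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        /* Bypass the page cache on UFS; writes are still not durable.
         * On zfs this call is expected to simply fail (e.g. ENOTTY). */
        int ufd = open("/ufs-fs/somefile", O_WRONLY | O_CREAT, 0644);
        if (ufd != -1 && directio(ufd, DIRECTIO_ON) == -1)
                perror("directio");

        /* Durable writes on any file system: each write() returns only
         * after the data has reached stable storage. */
        int sfd = open("/tank/somefile", O_WRONLY | O_CREAT | O_DSYNC, 0644);
        if (sfd == -1)
                perror("open O_DSYNC");

        if (ufd != -1)
                (void) close(ufd);
        if (sfd != -1)
                (void) close(sfd);
        return (0);
}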

-r


RE: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Bruce Shaw
Would a forcesync flag be something of interest to the community?

Yes.


Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-08 Thread Selim Daoud

it's an absolute necessity

On 3/8/07, Roch Bourbonnais [EMAIL PROTECTED] wrote:


On 8 Mar 07 at 20:08, Selim Daoud wrote:

 Robert,
 this applies only if you have full control over the application, for sure.
 But how do you do it if you don't own the application? Can you
 mount zfs with a forcedirectio flag?
 Selim


UFS directio and O_DSYNC are different things.
Would a forcesync flag be something of interest to the community?

-r




Re: [zfs-discuss] update on zfs boot support

2007-03-08 Thread Ian Collins
Lin Ling wrote:

 Ian Collins wrote:

 Thanks for the heads up.

 I'm building a new file server at the moment and I'd like to make sure I
 can migrate to ZFS boot when it arrives.

 My current plan is to create a pool on 4 500GB drives and throw in a
 small boot drive.

 Will I be able to drop the boot drive and move / over to the pool when
 ZFS boot ships?
   


 Yes, you should be able to, given that you already have a UFS boot
 drive running root.

Thanks.

As I intend to set up my pool as a striped mirror, it looks from the
other postings like this will not be suitable for the boot device.

So an SVM mirror on a couple of small drives may still be the best bet
for a small server.

Ian
