Write cache (was: Re: [zfs-discuss] How to best layout our filesystems)

2006-08-01 Thread Jesus Cea

Neil Perrin wrote:
 I suppose if you know
 the disk only contains zfs slices then write caching could be
 manually enabled using format -e -> cache -> write_cache -> enable

When will we have write cache control over ATA/SATA drives? :-).

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] http://www.argo.es/~jcea/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
   _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
Love is to place your happiness in the happiness of another - Leibniz


Re: [zfs-discuss] How to best layout our filesystems

2006-07-28 Thread George Wilson

Robert,

The patches will be available sometime late September. This may be a 
week or so before s10u3 actually releases.


Thanks,
George

Robert Milkowski wrote:

Hello eric,

Thursday, July 27, 2006, 4:34:16 AM, you wrote:

ek> Robert Milkowski wrote:


Hello George,

Wednesday, July 26, 2006, 7:27:04 AM, you wrote:


GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide hot spares support.
GW> Once this build is available I would highly recommend that you guys take
GW> it for a spin (works great on Thumper).

I guess patches will be released first (or later).
Can you give actual BUG IDs especially those related to performance?


 


ek> For U3, these are the performance fixes:
ek> 6424554 full block re-writes need not read data in
ek> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
ek> parallel IOs when fsyncing
ek> 6447377 ZFS prefetch is inconsistant
ek> 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

ek> you could perhaps include these two as well:
ek> 4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
ek> 6416482 filebench oltp workload hangs in zfs

ok, thank you.
Do you know if patches for S10 will be released before U3?

ek> There won't be anything in U3 that isn't already in nevada...

I know that :)





Re[2]: [zfs-discuss] How to best layout our filesystems

2006-07-27 Thread Robert Milkowski
Hello eric,

Thursday, July 27, 2006, 4:34:16 AM, you wrote:

ek> Robert Milkowski wrote:

Hello George,

Wednesday, July 26, 2006, 7:27:04 AM, you wrote:


GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide hot spares support.
GW> Once this build is available I would highly recommend that you guys take
GW> it for a spin (works great on Thumper).

I guess patches will be released first (or later).
Can you give actual BUG IDs especially those related to performance?


  

ek> For U3, these are the performance fixes:
ek> 6424554 full block re-writes need not read data in
ek> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
ek> parallel IOs when fsyncing
ek> 6447377 ZFS prefetch is inconsistant
ek> 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

ek> you could perhaps include these two as well:
ek> 4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
ek> 6416482 filebench oltp workload hangs in zfs

ok, thank you.
Do you know if patches for S10 will be released before U3?

ek> There won't be anything in U3 that isn't already in nevada...

I know that :)


-- 
Best regards,
 Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



RE: [zfs-discuss] How to best layout our filesystems

2006-07-27 Thread Bennett, Steve
Eric said:
 For U3, these are the performance fixes:
 6424554 full block re-writes need not read data in
 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
 parallel IOs when fsyncing
 6447377 ZFS prefetch is inconsistant
 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')
 
 you could perhaps include these two as well:
 4034947 anon_swap_adjust() should call kmem_reap() if 
 availrmem is low.
 6416482 filebench oltp workload hangs in zfs
 
 There won't be anything in U3 that isn't already in nevada...
Hi Eric,

Do S10U2 users have to wait for U3 to get these fixes, or are they going
to be released as patches before then?
I'm presuming that U3 is scheduled for early 2007...

Steve.


Re: [zfs-discuss] How to best layout our filesystems

2006-07-27 Thread Gary Combs




For S10U3, RR is 11/13/06 and GA is 11/27/06.

Gary

Bennett, Steve wrote:

  Eric said:
  
  
For U3, these are the performance fixes:
6424554 full block re-writes need not read data in
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
parallel IOs when fsyncing
6447377 ZFS prefetch is inconsistant
6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

you could perhaps include these two as well:
4034947 anon_swap_adjust() should call kmem_reap() if 
availrmem is low.
6416482 filebench oltp workload hangs in zfs

There won't be anything in U3 that isn't already in nevada...

  
  Hi Eric,

Do S10U2 users have to wait for U3 to get these fixes, or are they going
to be released as patches before then?
I'm presuming that U3 is scheduled for early 2007...

Steve.


-- 
Gary Combs
Product Architect
Sun Microsystems, Inc.
3295 NW 211th Terrace
Hillsboro, OR 97124 US
Phone x32604 / +1 503 715 3517
Fax 503-715-3517
Email [EMAIL PROTECTED]

"The box said 'Windows 2000 Server or better', so I installed Solaris"




Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Brian Hechinger
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
 
 If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
 it label and use the disks, it will automatically turn on the write
 cache for you.

What if you can't give ZFS whole disks?  I run snv_38 on the Optiplex
GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
carry with me.  In both cases the machines only have one disk, so I need
to split it up for UFS for the OS and ZFS for my data.  How do I turn on
write cache for partial disks?
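For concreteness, my understanding is that the whole-disk vs. slice distinction
Eric describes is just in the device name handed to zpool create (device names
below are made up):

   zpool create tank c1t1d0     # whole disk: ZFS writes its own label and enables the write cache
   zpool create tank c1t1d0s7   # one slice: ZFS uses only that partition and leaves the cache alone

My setup is stuck with the second form.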

-brian


Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Neil Perrin



Brian Hechinger wrote on 07/26/06 06:49:

On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:


If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
it label and use the disks, it will automatically turn on the write
cache for you.



What if you can't give ZFS whole disks?  I run snv_38 on the Optiplex
GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
carry with me.  In both cases the machines only have one disk, so I need
to split it up for UFS for the OS and ZFS for my data.  How do I turn on
write cache for partial disks?

-brian


You can't enable write caching for just part of the disk.
We don't enable it for slices because UFS (and other
file systems) doesn't do write cache flushing and so
could get corruption on power failure. I suppose if you know
the disk only contains zfs slices then write caching could be
manually enabled using format -e -> cache -> write_cache -> enable
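
A rough sketch of that interactive sequence from memory (disk name and prompt
text approximate, and again only safe when every slice on the disk belongs to ZFS):

   # format -e c1t1d0
   format> cache
   cache> write_cache
   write_cache> display
   Write Cache is disabled
   write_cache> enable
   write_cache> quit
   cache> quit
   format> quit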

Neil


Write cache (was: Re: [zfs-discuss] How to best layout our filesystems)

2006-07-26 Thread Jesus Cea

Neil Perrin wrote:
 I suppose if you know
 the disk only contains zfs slices then write caching could be
 manually enabled using format -e -> cache -> write_cache -> enable

When will we have write cache control over ATA/SATA drives? :-).

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] http://www.argo.es/~jcea/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
   _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
Love is to place your happiness in the happiness of another - Leibniz


Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Brian Hechinger
On Wed, Jul 26, 2006 at 08:38:16AM -0600, Neil Perrin wrote:
 
 
 GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
 carry with me.  In both cases the machines only have one disk, so I need
 to split it up for UFS for the OS and ZFS for my data.  How do I turn on
 write cache for partial disks?
 
 -brian
 
 You can't enable write caching for just part of the disk.
 We don't enable it for slices because UFS (and other
 file systems) doesn't do write cache flushing and so
 could get corruption on power failure. I suppose if you know
 the disk only contains zfs slices then write caching could be
 manually enabled using format -e -> cache -> write_cache -> enable

Eh, I guess I'll skip it then.  ;)

-brian


Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread eric kustarz

Robert Milkowski wrote:


Hello George,

Wednesday, July 26, 2006, 7:27:04 AM, you wrote:


GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide hot spares support.
GW> Once this build is available I would highly recommend that you guys take
GW> it for a spin (works great on Thumper).

I guess patches will be released first (or later).
Can you give actual BUG IDs especially those related to performance?


 


For U3, these are the performance fixes:
6424554 full block re-writes need not read data in
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
parallel IOs when fsyncing

6447377 ZFS prefetch is inconsistant
6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

you could perhaps include these two as well:
4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
6416482 filebench oltp workload hangs in zfs

There won't be anything in U3 that isn't already in nevada...
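
As a usage note on 6373978: the recursive snapshot form takes a single snapshot
name and applies it to a filesystem and all of its descendants in one
transaction, e.g. (pool name made up):

   zfs snapshot -r tank@2006-07-26

which snapshots tank and every filesystem beneath it atomically.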

happy performing,
eric



[zfs-discuss] How to best layout our filesystems

2006-07-25 Thread Karen Chau
Our application Canary has approx 750 clients uploading to the server
every 10 mins; that's approx 108,000 gzip tarballs per day writing to
the /upload directory.  The parser untars each tarball, which consists of
8 ASCII files, into the /archives directory.  /app is our application and
tools (apache, tomcat, etc) directory.  We also have batch jobs that run
throughout the day; I would say we read 2 to 3 times more than we write.
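
(To put numbers on that: 750 clients x 6 uploads/hour x 24 hours is where the
108,000/day figure comes from, i.e. roughly 1.25 tarballs arriving per second,
each expanding into 8 files in /archives, so on the order of 10 file creations
per second before counting the batch-job reads.)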

Since we have an alternate server, downtime or data loss is somewhat
acceptable.  How can we best lay out our filesystems to get the most
performance?

directory info
--
/app  - 30G
/upload   - 10G
/archives - 35G

HW info
---
System Configuration:  Sun Microsystems  sun4v Sun Fire T200
System clock frequency: 200 MHz
Memory size: 8184 Megabytes
CPU: 32 x 1000 MHz  SUNW,UltraSPARC-T1
Disks: 4x68G
  Vendor:   FUJITSU
  Product:  MAV2073RCSUN72G
  Revision: 0301


We plan on using 1 disk for the OS and the other 3 disks for the canary
filesystems, /app, /upload, and /archives.  Should I create 3 pools, i.e.
   zpool create canary_app c1t1d0
   zpool create canary_upload c1t2d0
   zpool create canary_archives c1t3d0

--OR--
create 1 pool using dynamic stripe, ie
   zpool create canary c1t1d0 c1t2d0 c1t3d0

--OR--
create a single-parity raid-z pool, ie.
   zpool create canary raidz c1t1d0 c1t2d0 c1t3d0

Which option gives us the best performance?  If there's another method
that's not mentioned, please let me know.
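
If we go with a single pool (striped or raidz), I assume the three directories
would become separate ZFS filesystems within it rather than separate pools,
something like (names illustrative):

   zpool create canary raidz c1t1d0 c1t2d0 c1t3d0
   zfs create canary/app
   zfs create canary/upload
   zfs create canary/archives
   zfs set mountpoint=/app canary/app
   zfs set mountpoint=/upload canary/upload
   zfs set mountpoint=/archives canary/archives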

Also, should we enable read/write cache on the OS disk as well as the other
disks?

Is build 9 in S10U2 RR?  If not, please point me to the OS image on
nana.eng.


Thanks,
karen






Re: [zfs-discuss] How to best layout our filesystems

2006-07-25 Thread Torrey McMahon
Given the amount of I/O, wouldn't it make sense to get more drives
involved, or something that has cache on the front end, or both? If you're
really pushing the amount of I/O you're alluding to (hard to tell
without all the details), then you're probably going to hit a limitation
on the drive IOPS. (Even with the cache on.)


Karen Chau wrote:

Our application Canary has approx 750 clients uploading to the server
every 10 mins, that's approx 108,000 gzip tarballs per day writing to
the /upload directory.  The parser untars the tarball which consists of
8 ascii files into the /archives directory.  /app is our application and
tools (apache, tomcat, etc) directory.  We also have batch jobs that run
throughout the day, I would say we read 2 to 3 times more than we write.

  




Re: [zfs-discuss] How to best layout our filesystems

2006-07-25 Thread Sean Meighan




Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them. I am
trying to get a thumper to run this data set. This could take up to 3-4
months. Today we are watching 750 Sun Ray servers and 30,000 employees.
Let's see:
1) Solaris 10
2) ZFS version 6
3) T2000 32x1000 with the poorer-performing drives that come with the
Niagara

We need a short term solution. Niagara engineering has given us two
more of the internal drives so we can max out the Niagara with 4
internal drives. This is the hardware we need to use this week. When
we get a new box and more drives, we will reconfigure.

Our graphs have 5000 data points per month, 140 data points per day. We
can stand to lose data.

My suggestion was one drive as the system volume and the remaining
three drives as one big zfs volume, probably raidz.
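
For rough sizing, raidz across three of the 68G drives should leave about
2 x 68 = 136 GB usable (one drive's worth goes to parity), which comfortably
covers the roughly 30 + 10 + 35 = 75 GB Karen listed for /app, /upload and
/archives.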

thanks
sean


Torrey McMahon wrote:
Given the amount of I/O wouldn't it make sense to get more drives
involved or something that has cache on the front end or both? If you're
really pushing the amount of I/O you're alluding too - Hard to tell
without all the details - then you're probably going to hit a limitation
on the drive IOPS. (Even with the cache on.)

Karen Chau wrote:
Our application Canary has approx 750 clients uploading to the server
every 10 mins, that's approx 108,000 gzip tarballs per day writing to
the /upload directory. The parser untars the tarball which consists of
8 ascii files into the /archives directory. /app is our application and
tools (apache, tomcat, etc) directory. We also have batch jobs that run
throughout the day, I would say we read 2 to 3 times more than we write.

-- 
Sean Meighan
Mgr ITSM Engineering
Sun Microsystems, Inc.
US
Phone x32329 / +1 408 850-9537
Mobile 303-520-2024
Fax 408 850-9537
Email [EMAIL PROTECTED]


NOTICE: This email message is for the sole use of the intended
recipient(s) and may contain confidential and privileged information.
Any unauthorized review, use, disclosure or distribution is prohibited.
If you are not the intended recipient, please contact the
sender by reply email and destroy all copies of the original message.


