Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Volker A. Brandt
Hello Kyle!


 All of these mounts are failing at bootup with messages about
 non-existent mountpoints. My guess is that it's because when /etc/vfstab
 is running, the ZFS '/export/OSImages' isn't mounted yet?

Yes, that is absolutely correct.  For details, look at the start method
of svc:/system/filesystem/local:default, which lives in the script
/lib/svc/method/fs-local.  There you can see that ZFS is processed
after the vfstab.

 Any ideas?

The only way I could find was to set the mountpoint of the file system
to legacy, and add it to /etc/vfstab.  Here's an example:

  # ZFS legacy mounts:
  SHELOB/var - /var zfs - yes -
  SHELOB/opt - /opt zfs - yes -
  SHELOB/home - /home zfs - yes -
  #
  # -- loopback mount -- begin
  # loopback mount for /usr/local:
  /opt/local - /usr/local lofs - yes ro,nodevices
  /home/cvs - /opt/local/cvs lofs - yes rw,nodevices
  # -- loopback mount -- end

Before I added /home to vfstab, the loopback for /opt/local/cvs would
fail.
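
In case it helps, here is roughly what the ZFS side of that change looks
like; this is only a sketch, using the dataset names from the example above:

  # switch the datasets to legacy mounting so that vfstab controls them
  zfs set mountpoint=legacy SHELOB/var
  zfs set mountpoint=legacy SHELOB/opt
  zfs set mountpoint=legacy SHELOB/home
  # after adding the vfstab lines, do a manual test mount before rebooting:
  mount -F zfs SHELOB/home /home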


HTH -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt


Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Selim Daoud
The question is: does the I/O pausing behaviour you noticed penalize
your application?  What are the consequences at the application level?

For instance, we have seen applications doing some kind of data capture
from an external device (video, for example) that require a constant
throughput to disk (a steady data feed) and otherwise risk losing data.  In
that case QFS might be a better option (not free, though).
If your application is not suffering, then you should be able to live
with these apparent I/O hangs.

s-


On Thu, Mar 27, 2008 at 3:35 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
 My application processes thousands of files sequentially, reading
  input files, and outputting new files.  I am using Solaris 10U4.
  While running the application in a verbose mode, I see that it runs
  very fast but pauses about every 7 seconds for a second or two.  This
  is while reading 50MB/second and writing 73MB/second (ARC cache miss
  rate of 87%).  The pause does not occur if the application spends more
  time doing real work.  However, it would be nice if the pause went
  away.

  I have tried turning down the ARC size (from 14GB to 10GB) but the
  behavior did not noticeably improve.  The storage device is trained to
  ignore cache flush requests.  According to the Evil Tuning Guide, the
  pause I am seeing is due to a cache flush after the uberblock updates.

  It does not seem like a wise choice to disable ZFS cache flushing
  entirely.  Is there a better way other than adding a small delay into
  my application?

  Bob
  ==
  Bob Friesenhahn
  [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
  GraphicsMagick Maintainer,http://www.GraphicsMagick.org/





-- 
--
Blog: http://fakoli.blogspot.com/


[zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Brandon Wilson
Hi all, here's a couple questions.

Has anyone run Oracle databases off of a UFS-formatted ZVOL? If so, how does it 
compare in speed to UFS with directio?

I'm trying my best to get rid of UFS, but ZFS isn't up to par with the speed of 
UFS directio for an MDBMS. So I'm trying to come up with some creative ways to 
get the ease of use of the ZFS filesystem manager together with the speed and 
functionality of UFS directio. (Aren't we all? hehe)

Also, are UFS formatted ZVOLs officially supported by Sun?
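
For reference, a UFS-on-zvol setup would look roughly like the following.
This is only a sketch; the pool, volume, and mount point names are made up:

  # create a zvol and put UFS on it
  zfs create -V 64g tank/oradata_vol
  newfs /dev/zvol/rdsk/tank/oradata_vol
  # mount it with directio, the combination to compare against
  mkdir -p /oradata
  mount -F ufs -o forcedirectio /dev/zvol/dsk/tank/oradata_vol /oradata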

Thanks,
Brandon Wilson
[EMAIL PROTECTED]
 
 


Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Kyle McDonald
Volker A. Brandt wrote:
 Hello Kyle!


   
 All of these mounts are failing at bootup with messages about
 non-existent mountpoints. My guess is that it's because when /etc/vfstab
 is running, the ZFS '/export/OSImages' isn't mounted yet?
 

 Yes, that is absolutely correct.  For details, look at the start method
 of svc:/system/filesystem/local:default, which lives in the script
 /lib/svc/method/fs-local.  There you can see that ZFS is processed
 after the vfstab.

   
Ok. So my theory was right. :)
 Any ideas?
 

 The only way I could find was to set the mountpoint of the file system
 to legacy, and add it to /etc/vfstab.  Here's an example:

   
I tried this last night too, after sending the message, and I made it 
work.  It seems clunky, though.

I wonder if there is a technical reason why it has to be done in this order?

More importantly, I wonder if ZFS Boot will re-order this since the 
other FS's will all be ZFS.
(Actually I wonder what will be left in /etc/vfstab?)
   # ZFS legacy mounts:
   SHELOB/var - /var zfs - yes -
   SHELOB/opt - /opt zfs - yes -
   SHELOB/home - /home zfs - yes -
   #
   # -- loopback mount -- begin
   # loopback mount for /usr/local:
   /opt/local - /usr/local lofs - yes ro,nodevices
   /home/cvs - /opt/local/cvs lofs - yes rw,nodevices
   # -- loopback mount -- end

 Before I added /home to vfstab, the loopback for /opt/local/cvs would
 fail.


   
I'm guessing that /opt/local/cvs is *not* visible as /usr/local/cvs ???

  -Kyle

 HTH -- Volker
   



Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Bob Friesenhahn
On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
 When you experience the pause at the application level,
 do you see an increase in writes to disk? This might the
 regular syncing of the transaction group to disk.

If I use 'zpool iostat' with a one second interval what I see is two 
or three samples with no write I/O at all followed by a huge write of 
100 to 312MB/second.  Writes claimed to be at a lower rate are split 
across two sample intervals.
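
(For reference, the sampling described above is just something like

  zpool iostat tank 1

with 'tank' standing in for the actual pool name.)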

It seems that writes are being cached and then issued all at once. 
This behavior assumes that the file may be written multiple times so a 
delayed write is more efficient.

If I run a script like

while true
do
sync
done

then the write data rate is much more consistent (at about 
66MB/second) and the program does not stall.  Of course this is not 
very efficient.

Are the 'zpool iostat' statistics accurate?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Richard Elling
Selim Daoud wrote:
 The question is: does the I/O pausing behaviour you noticed penalize
 your application?  What are the consequences at the application level?

 For instance, we have seen applications doing some kind of data capture
 from an external device (video, for example) that require a constant
 throughput to disk (a steady data feed) and otherwise risk losing data.  In
 that case QFS might be a better option (not free, though).
 If your application is not suffering, then you should be able to live
 with these apparent I/O hangs.

   

I would look at txg_time first... for lots of streaming writes on a machine
with limited memory, tuning it can smooth out the sawtooth.

Also, QFS is open source: http://blogs.sun.com/samqfs
 -- richard
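
For anyone who wants to experiment with that knob: on builds of this
vintage it is a variable in the zfs kernel module, so an /etc/system entry
along the following lines should work.  Treat the exact name and value as
an assumption and check them against your build before relying on them;
a reboot is needed for /etc/system changes to take effect.

  * /etc/system: shorten the txg sync interval (default is 5 seconds)
  set zfs:txg_time = 1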



Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Neelakanth Nadgir
Bob Friesenhahn wrote:
 On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
 When you experience the pause at the application level,
 do you see an increase in writes to disk? This might the
 regular syncing of the transaction group to disk.
 
 If I use 'zpool iostat' with a one second interval what I see is two 
 or three samples with no write I/O at all followed by a huge write of 
 100 to 312MB/second.  Writes claimed to be at a lower rate are split 
 across two sample intervals.
 
 It seems that writes are being cached and then issued all at once. 
 This behavior assumes that the file may be written multiple times so a 
 delayed write is more efficient.
 

This does sound like the regular syncing.

 If I run a script like
 
 while true
 do
 sync
 done
 
 then the write data rate is much more consistent (at about 
 66MB/second) and the program does not stall.  Of course this is not 
 very efficient.
 

This causes the sync to happen much more frequently but, as you say, it is
suboptimal.  I haven't had the time to go through the bug report, but
CR 6429205 ("each zpool needs to monitor its throughput
and throttle heavy writers")
will probably help.

 Are the 'zpool iostat' statistics accurate?
 

Yes. You could also look at regular iostat
and correlate it.
-neel
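
A minimal sketch of that correlation (the pool name is an example; the
iostat flags are the usual extended-statistics ones):

  # per-pool view, 1-second samples
  zpool iostat tank 1
  # per-device view to correlate against (extended stats, skip idle devices)
  iostat -xnz 1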



Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Bob Friesenhahn
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:

 This causes the sync to happen much faster, but as you say, suboptimal.
 Haven't had the time to go through the bug report, but probably
 CR 6429205 each zpool needs to monitor its throughput
 and throttle heavy writers
 will help.

I hope that this feature is implemented soon, and works well. :-)

I tested with my application outputting to a UFS filesystem on a 
single 15K RPM SAS disk and saw that it writes about 50MB/second, 
without the bursty behavior of ZFS.  When writing to a ZFS filesystem on 
a RAID array, 'zpool iostat' reports an average (over 10 seconds) 
write rate of 54MB/second.  Given that the throughput is not much 
higher on the RAID array, I assume that the bottleneck is in my 
application.

 Are the 'zpool iostat' statistics accurate?

 Yes. You could also look at regular iostat
 and correlate it.

Iostat shows that my RAID array disks are loafing with only 9MB/second 
writes to each but with 82 writes/second.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



[zfs-discuss] ClearCase support for ZFS?

2008-03-27 Thread Nissim Ben-Haim
Hi,

Does anybody know the latest status of ClearCase support for ZFS?
I noticed this from IBM:
http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21155708

I would like to make sure someone has installed and tested it before 
recommending it to a customer.

Regards,
Nissim Ben-Haim
Solution Architect



Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Volker A. Brandt
  The only way I could find was to set the mountpoint of the file system
  to legacy, and add it to /etc/vfstab.  Here's an example:

 I tried this last night also, after sending the message and I made it
 work. Seems clunky though.

Yes, I also would have liked something more streamlined.  But since adding
entries to vfstab worked, I did not pursue it further.

 I wonder if there is a technical reason why it has to be done in this order?

I can only guess that anything else would have been too complex.
The whole sequence seems to have room for improvement.  For example
in svc:/system/filesystem/root:default there are some checks to mount
optimized libc and hwcap libraries, and /usr is mounted, but not the
root fs (which I would have expected going by the FMRI name).

 More importantly, I wonder if ZFS Boot will re-order this since the
 other FS's will all be ZFS.

My guess is that the whole thing will be rewritten.

 (Actually I wonder what will be left in /etc/vfstab?)

Good question.  I would think that the file will still be around; it'll
have all the non-ZFS mount points, but the root fs will be mounted by
ZFS.

# ZFS legacy mounts:
SHELOB/var - /var zfs - yes -
SHELOB/opt - /opt zfs - yes -
SHELOB/home - /home zfs - yes -
#
# -- loopback mount -- begin
# loopback mount for /usr/local:
/opt/local - /usr/local lofs - yes ro,nodevices
/home/cvs - /opt/local/cvs lofs - yes rw,nodevices
# -- loopback mount -- end
 
  Before I added /home to vfstab, the loopback for /opt/local/cvs would
  fail.

 I'm guessing that /opt/local/cvs is *not* visible as /usr/local/cvs ???

Oh, but it is:

  shelob:/usr/local/cvs,3764# pwd
  /usr/local/cvs
  shelob:/usr/local/cvs,3765# ls
  CVSROOT  bbc  pkg  rjhb vab
  shelob:/usr/local/cvs,3766# df .
  Filesystem            kbytes     used      avail  capacity  Mounted on
  SHELOB/home        916586496 74753216  426294318       15%  /home
  shelob:/usr/local/cvs,3767# mount | egrep 'local|cvs|home'
  /home on SHELOB/home read/write/setuid/devices/exec/xattr/atime/dev=4010004 on Tue Mar 25 18:45:08 2008
  /usr/local on /opt/local read only/setuid/nodevices/dev=4010003 on Tue Mar 25 18:45:08 2008
  /opt/local/cvs on /home/cvs read/write/setuid/nodevices/dev=4010004 on Tue Mar 25 18:45:08 2008

:-)


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt


Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Brandon Wilson
Well, I don't have any hard numbers 'yet'. But sometime in the next couple of 
weeks, when the Hyperion Essbase install team gets Essbase up and running on a 
Sun M4000, I plan on taking advantage of the situation to do some stress and 
performance testing on ZFS and the MDBMS: stuff like UFS+directio, ZFS, a 
UFS-formatted zvol, etc. It may be a while, but I'll post all the data once I 
get it. However, from what I've read, I'm sure UFS+directio will win the 
performance testing. But I'm curious to see this testing for myself.

Brandon Wilson
[EMAIL PROTECTED]
 
 


Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Richard Elling
Brandon Wilson wrote:
 Well I don't have any hard numbers 'yet'. But sometime in the next couple 
 weeks when the Hyperion Essbase install team get essbase up and running on a 
 sun m4000, I plan on taking advantage of the situation to do some stress and 
 performance testing on zfs and MDBMS. Stuff like ufs+directio, zfs, ufs 
 formatted zvol, etc. It may be a while, but I'll post all the data once I get 
 it. However, from what I've read, I'm sure ufs+directio will win on the 
 performance testing. But I'm curious to this testing for myself.
   

In my mind, managing a Zvol is about as easy as managing a
file.  Why not just use the Zvol directly?
 -- richard
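
For what it's worth, using the zvol directly would look roughly like this;
the pool and volume names are made up, and pointing the database at the raw
device is an Oracle configuration step outside the scope of this sketch:

  # create a volume and hand the raw device to the database
  zfs create -V 64g tank/oradata01
  ls -l /dev/zvol/rdsk/tank/oradata01   # character (raw) device path
  ls -l /dev/zvol/dsk/tank/oradata01    # block device path, if preferred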



[zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack

For the last few builds of Nevada, if I come back to my workstation after
a long idle period such as overnight and try any command that touches
the ZFS filesystem, it hangs for approximately 60 seconds.

This includes ls, zpool status, etc.

Does anyone have a hint as to how I would diagnose this?
Or is it time for extreme measures such as zfs send to another server,
destroy, and rebuild a new zpool?

Config and stat:


Running Nevada build 85 and given;
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2ONLINE   0 0 0
c2d0ONLINE   0 0 0
c3d0ONLINE   0 0 0
c4d0ONLINE   0 0 0
c5d0ONLINE   0 0 0
c6d0ONLINE   0 0 0
c7d0ONLINE   0 0 0
c8d0ONLINE   0 0 0

errors: No known data errors


Also given:  I have been doing live upgrade every other build since
approx Nevada build 46.  I am running on a Sun Ultra 40 modified
to include 8 disks.  (second backplane and SATA quad cable)

It appears that the zfs filesystems are running version 1 and Nevada 
build 85 is running version 3.

zbit:~# zfs upgrade
This system is currently running ZFS filesystem version 3.

The following filesystems are out of date, and can be upgraded.  After being
upgraded, these filesystems (and any 'zfs send' streams generated from
subsequent snapshots) will no longer be accessible by older software 
versions.

VER  FILESYSTEM
---  
 1   tank
 1   tank/arc



Any hints at how to isolate and fix this would be appreciated.

Thanks,

Neal





Re: [zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Tomas Ögren
On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:

 Also given:  I have been doing live upgrade every other build since
 approx Nevada build 46.  I am running on a Sun Ultra 40 modified
 to include 8 disks.  (second backplane and SATA quad cable)
 
 It appears that the zfs filesystems are running version 1 and Nevada 
 build 85
 is running version 3.
 
 zbit:~# zfs upgrade
 This system is currently running ZFS filesystem version 3.

Umm. nevada 78 is at version 10.. so I don't think you've managed to
upgrade stuff 100% ;)

This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
For more information on a particular version, including supported
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N


/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack
Tomas Ögren wrote:
 On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:

   
 Also given:  I have been doing live upgrade every other build since
 approx Nevada build 46.  I am running on a Sun Ultra 40 modified
 to include 8 disks.  (second backplane and SATA quad cable)

 It appears that the zfs filesystems are running version 1 and Nevada 
 build 85
 is running version 3.

 zbit:~# zfs upgrade
 This system is currently running ZFS filesystem version 3.
 

 Umm. nevada 78 is at version 10.. so I don't think you've managed to
 upgrade stuff 100% ;)

 This system is currently running ZFS pool version 10.
   

ZFS filesystem version is at 3
My zpool is at version 10

zbit:~# zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.


 The following versions are supported:

 VER  DESCRIPTION
 ---  
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history
  5   Compression using the gzip algorithm
  6   bootfs pool property
  7   Separate intent log devices
  8   Delegated administration
  9   refquota and refreservation properties
  10  Cache devices
 For more information on a particular version, including supported
 releases, see:

 http://www.opensolaris.org/os/community/zfs/version/N


 /Tomas
   



Re: [zfs-discuss] Periodic flush

2008-03-27 Thread eric kustarz

On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
 On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:

 This causes the sync to happen much faster, but as you say,  
 suboptimal.
 Haven't had the time to go through the bug report, but probably
 CR 6429205 each zpool needs to monitor its throughput
 and throttle heavy writers
 will help.

 I hope that this feature is implemented soon, and works well. :-)

Actually, this has gone back into snv_87 (and no, we don't know which  
s10uX it will go into yet).

eric



[zfs-discuss] kernel memory and zfs

2008-03-27 Thread Matt Cohen
We have a 32 GB RAM server running about 14 zones. There are multiple 
databases, application servers, web servers, and ftp servers running in the 
various zones.

I understand that using ZFS will increase kernel memory usage; however, I am a 
bit concerned at this point.

[EMAIL PROTECTED]:~/zonecfg # mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    4108442             16048   49%
Anon                      3769634             14725   45%
Exec and libs                9098                35    0%
Page cache                  29612               115    0%
Free (cachelist)            99437               388    1%
Free (freelist)            369040              1441    4%

Total                     8385263             32754
Physical                  8176401             31939

Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find 
out how much of that kernel memory is due to ZFS?

It just seems an excessively high amount of our memory is going to the kernel, 
even with ZFS being used on the server.
 
 


Re: [zfs-discuss] Periodic flush

2008-03-27 Thread abs
you may want to try disabling the disk write cache on the single disk.
also, for the RAID, disable 'host cache flush' if such an option exists; that 
solved the problem for me.

let me know.
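
For the per-disk write cache, the usual path on Solaris is format's expert
mode; the device name below is an example, and the cache menu only shows up
for disks/drivers that support it, so treat this as a pointer rather than a
recipe:

  # format -e exposes a 'cache' menu on supported disks
  format -e c1t0d0
  #   format> cache
  #   cache> write_cache
  #   write_cache> disable
  #   write_cache> display    (verify the new setting)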


Bob Friesenhahn [EMAIL PROTECTED] wrote:
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:

 This causes the sync to happen much faster, but as you say, suboptimal.
 Haven't had the time to go through the bug report, but probably
 CR 6429205 each zpool needs to monitor its throughput
 and throttle heavy writers
 will help.

I hope that this feature is implemented soon, and works well. :-)

I tested with my application outputting to a UFS filesystem on a 
single 15K RPM SAS disk and saw that it writes about 50MB/second and 
without the bursty behavior of ZFS.  When writing to ZFS filesystem on 
a RAID array, zpool I/O stat reports an average (over 10 seconds) 
write rate of 54MB/second.  Given that the throughput is not much 
higher on the RAID array, I assume that the bottleneck is in my 
application.

 Are the 'zpool iostat' statistics accurate?

 Yes. You could also look at regular iostat
 and correlate it.

Iostat shows that my RAID array disks are loafing with only 9MB/second 
writes to each but with 82 writes/second.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



   


Re: [zfs-discuss] kernel memory and zfs

2008-03-27 Thread Richard Elling
Matt Cohen wrote:
 We have a 32 GB RAM server running about 14 zones. There are multiple 
 databases, application servers, web servers, and ftp servers running in the 
 various zones.

 I understand that using ZFS will increase kernel memory usage, however I am a 
 bit concerned at this point.

 [EMAIL PROTECTED]:~/zonecfg #mdb -k

 Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt 
 ip indmux ptm nfs ]

 ::memstat
 Page Summary Pages MB %Tot
 
 Kernel 4108442 16048 49%
 Anon 3769634 14725 45%
 Exec and libs 9098 35 0%
 Page cache 29612 115 0%
 Free (cachelist) 99437 388 1%
 Free (freelist) 369040 1441 4%

 Total 8385263 32754
 Physical 8176401 31939

 Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find 
 out how much of that kernel memory is due to ZFS?
   

The size of the ARC (cache) is available from kstat in the zfs
module (kstat -m zfs).  Neel wrote a nifty tool to track it over
time called arcstat.  See
http://www.solarisinternals.com/wiki/index.php/Arcstat
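
If you just want the current numbers without installing anything, the ARC
counters are also exposed as kstats; a minimal sketch (statistic names as
they appear in the zfs module, adjust if your release differs):

  # current ARC size in bytes, plus target and limit for context
  kstat -p zfs::arcstats:size zfs::arcstats:c zfs::arcstats:c_max
  # rough view of which kernel caches hold the memory; the zio_* and
  # arc_* caches are ZFS's
  echo "::kmastat" | mdb -k | egrep -i 'zio|arc'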

Remember that this is a cache and subject to eviction when
memory pressure grows.  The Solaris Internals books have
more details on how the Solaris virtual memory system works
and is recommended reading.
 -- richard


 It just seems an excessively high amount of our memory is going to the 
 kernel, even with ZFS being used on the server.
  
  
   



[zfs-discuss] nfs and smb performance

2008-03-27 Thread abs
hello all, 
i have two xraids connected via fibre to a poweredge 2950.  the 2 xraids are 
configured with 2 raid5 volumes each, giving me a total of 4 raid5 volumes.  
these are striped across in zfs.  the read and write speeds local to the 
machine are as expected but i have noticed some performance hits in the read 
and write speed over nfs and samba.

here is the observation:

each filesystem is shared via nfs as well as samba.  
i am able to mount via nfs and samba on a Mac OS 10.5.2 client.
i am able to only mount via nfs on a Mac OS 10.4.11 client. (there seems to be 
an authentication/encryption issue between the 10.4.11 client and the solaris 
box in this scenario; i know this is a bug on the client side)

when writing a file via nfs from the 10.5.2 client the speeds are 60 ~ 70 
MB/sec.
when writing a file via samba from the 10.5.2 client the speeds are 30 ~ 50 
MB/sec

when writing a file via nfs from the 10.4.11 client the speeds are 20 ~ 30 
MB/sec.

when writing a file via samba from a Windows XP client the speeds are 30 ~ 40 
MB/sec.

i know that there is an implementation difference between nfs and samba on both 
Mac OS 10.4.11 and 10.5.2 clients, but that still does not explain the Windows 
scenario.


i was wondering if anyone else was experiencing similar issues and if there is 
some tuning i can do or am i just missing something.  thanx in advance.
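
One way to make the comparison more repeatable is to time the same raw
sequential transfer over each mount from the same client; a minimal sketch,
with the mount point and sizes as placeholders:

  # write ~1 GB over the mounted share and time it
  time dd if=/dev/zero of=/mnt/xraid_share/ddtest bs=1048576 count=1024
  # read it back (remount first to avoid client-side caching)
  time dd if=/mnt/xraid_share/ddtest of=/dev/null bs=1048576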

cheers, 
abs






   


Re: [zfs-discuss] kernel memory and zfs

2008-03-27 Thread Thomas Maier-Komor
Richard Elling wrote:
 
 The size of the ARC (cache) is available from kstat in the zfs
 module (kstat -m zfs).  Neel wrote a nifty tool to track it over
 time called arcstat.  See
 http://www.solarisinternals.com/wiki/index.php/Arcstat
 
 Remember that this is a cache and subject to eviction when
 memory pressure grows.  The Solaris Internals books have
 more details on how the Solaris virtual memory system works
 and is recommended reading.
  -- richard
 
 

The ARC size is also displayed in sysstat, which additionally shows a lot
more information in a 'top'-like fashion. Get it here:
http://www.maier-komor.de/sysstat.html

- Thomas


Re: [zfs-discuss] nfs and smb performance

2008-03-27 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello abs

Would you be able to repeat the same tests with the in-kernel CIFS (SMB)
server in ZFS instead of Samba?
It would be interesting to see how the kernel CIFS performance compares
with Samba.

Peter
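
For reference, switching a dataset from Samba to the in-kernel CIFS server
looks roughly like this.  It is only a sketch: it assumes the SMB server
packages are installed on that build, and the dataset name is an example.

  # enable the kernel SMB service and share the dataset from ZFS
  svcadm enable -r smb/server
  zfs set sharesmb=on tank/export
  # optionally give the share an explicit name
  zfs set sharesmb=name=export tank/export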

abs wrote:
hello all, 
i have two xraids connect via fibre to a poweredge2950. the 2 xraids
are configured with 2 raid5 volumes each, giving me a total of 4 raid5
volumes. these are striped across in zfs. the read and write speeds
local to the machine are as expected but i have noticed some
performance hits in the read and write speed over nfs and samba.
  
here is the observation:
  
each filesystem is shared via nfs as well as samba. 
i am able to mount via nfs and samba on a Mac OS 10.5.2 client.
i am able to only mount via nfs on a Mac OS 10.4.11 client. (there
seems to be authentication/encryption issue between the 10.4.11 client
and solaris box in this scenario. i know this is a bug on the client
side)
  
when writing a file via nfs from the 10.5.2 client the speeds are 60 ~
70 MB/sec.
when writing a file via samba from the 10.5.2 client the speeds are 30
~ 50 MB/sec
  
when writing a file via nfs from the 10.4.11 client the speeds are 20 ~
30 MB/sec.
  
when writing a file via samba from a Windows XP client the speeds are
30 ~ 40 MB.
  
i know that there is an implementational difference in nfs and samba on
both Mac OS 10.4.11 and 10.5.2 clients but that still does not explain
the Windows scenario.
  
  
i was wondering if anyone else was experiencing similar issues and if
there is some tuning i can do or am i just missing something. thanx in
advance.
  
cheers, 
abs
  
  
  
  
  
   
  


-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] nfs and smb performance

2008-03-27 Thread Dale Ghent


Have you turned on the 'Ignore cache flush commands' option on the  
Xraids? You should ensure this is on when using ZFS on them.

/dale

On Mar 27, 2008, at 6:16 PM, abs wrote:
 hello all,
 i have two xraids connect via fibre to a poweredge2950.  the 2  
 xraids are configured with 2 raid5 volumes each, giving me a total  
 of 4 raid5 volumes.  these are striped across in zfs.  the read and  
 write speeds local to the machine are as expected but i have noticed  
 some performance hits in the read and write speed over nfs and samba.

 here is the observation:

 each filesystem is shared via nfs as well as samba.
 i am able to mount via nfs and samba on a Mac OS 10.5.2 client.
 i am able to only mount via nfs on a Mac OS 10.4.11 client. (there  
 seems to be authentication/encryption issue between the 10.4.11  
 client and solaris box in this scenario. i know this is a bug on the  
 client side)

 when writing a file via nfs from the 10.5.2 client the speeds are 60  
 ~ 70 MB/sec.
 when writing a file via samba from the 10.5.2 client the speeds are  
 30 ~ 50 MB/sec

 when writing a file via nfs from the 10.4.11 client the speeds are  
 20 ~ 30 MB/sec.

 when writing a file via samba from a Windows XP client the speeds  
 are 30 ~ 40 MB.

 i know that there is an implementational difference in nfs and samba  
 on both Mac OS 10.4.11 and 10.5.2 clients but that still does not  
 explain the Windows scenario.


 i was wondering if anyone else was experiencing similar issues and  
 if there is some tuning i can do or am i just missing something.   
 thanx in advance.

 cheers,
 abs






