Re: [zfs-discuss] Auto backup and auto restore of ZFS via Firewire drive

2007-12-17 Thread Jim Klimov
It's good he didn't mail you, now we all know some under-the-hood details via 
Googling ;)

Thanks to both of you for this :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Robert Milkowski
Hello James,

Sunday, December 16, 2007, 9:54:18 PM, you wrote:

JCM hi Frank,

JCM there is an interesting pattern here (at least, to my
JCM untrained eyes) - your %b starts off quite low:


JCM Frank Penczek wrote:
JCM 
 ---
 dd'ing to NFS mount:
 [EMAIL PROTECTED]://tmp dd if=./file.tmp of=/home/fpz/file.tmp
 200000+0 records in
 200000+0 records out
 102400000 bytes (102 MB) copied, 11.3959 seconds, 9.0 MB/s
 
 # iostat -xnz 1
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 2.8   17.3  149.4   127.6   0.0   1.3     0.0    66.0   0  12 c2t8d0
 2.8   17.3  149.4   127.6   0.0   1.3     0.0    65.9   0  13 c2t9d0
 2.8   17.3  149.3   127.6   0.0   1.3     0.0    66.1   0  13 c2t10d0
 2.8   17.3  149.3   127.6   0.0   1.3     0.0    66.4   0  13 c2t11d0
 2.8   17.3  149.5   127.6   0.0   1.3     0.0    66.5   0  13 c2t12d0
 0.3    1.0    5.4   133.9   0.0   0.0     0.1    27.2   0   1 c1t1d0
 0.5    0.3   26.8    16.5   0.0   0.0     0.1    11.1   0   0 c1t0d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0    1.0    0.0     8.0   0.0   0.0     0.0     8.9   0   1 c1t1d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   10.0    0.0     7.0   0.0   0.0     0.0     0.5   0   0 c2t8d0
 0.0   10.0    0.0     7.5   0.0   0.0     0.0     0.5   0   1 c2t9d0
 0.0   10.0    0.0     6.0   0.0   0.0     0.0     0.7   0   1 c2t10d0
 0.0   10.0    0.0     7.0   0.0   0.0     0.0     0.3   0   0 c2t11d0
 0.0   10.0    0.0     7.5   0.0   0.0     0.0     0.3   0   0 c2t12d0


JCM then it jumps - roughly, quadrupling

 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   67.6    0.0  1298.6   0.0   9.8     0.2   145.2   1  71 c2t8d0
 0.0   64.8    0.0  1139.4   0.0   9.2     0.0   141.8   0  69 c2t9d0
 0.0   59.2    0.0   898.9   0.0   8.6     0.0   144.9   0  68 c2t10d0
 0.0   67.6    0.0  1379.4   0.0   9.5     0.0   140.0   0  68 c2t11d0
 0.0   70.4    0.0  1257.3   0.0  11.4     0.0   162.1   0  73 c2t12d0

JCM then it maxes out and stays that way

 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   43.8    0.0  3068.5   0.0  34.9     0.0   796.0   0 100 c2t8d0
 0.0   55.6    0.0  3891.9   0.0  34.7     0.0   624.9   0 100 c2t9d0
 0.0   58.8    0.0  4211.9   0.0  33.4     0.0   568.2   0 100 c2t10d0
 0.0   49.2    0.0  3388.6   0.0  34.5     0.0   702.3   0 100 c2t11d0
 0.0   57.7    0.0  3805.3   0.0  34.3     0.0   594.0   0 100 c2t12d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   60.0    0.0  4279.6   0.0  35.0     0.0   583.2   0 100 c2t8d0
 0.0   48.0    0.0  3423.7   0.0  35.0     0.0   729.1   0 100 c2t9d0
 0.0   41.0    0.0  2910.3   0.0  35.0     0.0   853.6   0 100 c2t10d0
 0.0   50.0    0.0  3552.2   0.0  35.0     0.0   699.9   0 100 c2t11d0
 0.0   48.0    0.0  3423.7   0.0  35.0     0.0   729.1   0 100 c2t12d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   48.0    0.0  3424.6   0.0  35.0     0.0   728.9   0 100 c2t8d0
 0.0   60.0    0.0  4280.8   0.0  35.0     0.0   583.1   0 100 c2t9d0
 0.0   55.0    0.0  3938.2   0.0  35.0     0.0   636.1   0 100 c2t10d0
 0.0   56.0    0.0  4024.3   0.0  35.0     0.0   624.7   0 100 c2t11d0
 0.0   48.0    0.0  3424.6   0.0  35.0     0.0   728.9   0 100 c2t12d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   52.0    0.0  3723.5   0.0  35.0     0.0   672.9   0 100 c2t8d0
 0.0   43.0    0.0  3081.5   0.0  35.0     0.0   813.8   0 100 c2t9d0
 0.0   46.0    0.0  3296.0   0.0  35.0     0.0   760.7   0 100 c2t10d0
 0.0   48.0    0.0  3424.0   0.0  35.0     0.0   729.0   0 100 c2t11d0
 0.0   62.0    0.0  4408.1   0.0  35.0     0.0   564.4   0 100 c2t12d0
 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   60.0    0.0  4279.8   0.0  35.0     0.0   583.2   0 100 c2t8d0
 0.0   57.0    0.0  4065.8   0.0  35.0     0.0   613.9   0 100 c2t9d0
 0.0   59.0    0.0  4194.3   0.0  35.0     0.0   593.1   0 100 c2t10d0
 0.0   56.0    0.0  4023.3   0.0  35.0     0.0   624.9   0 100 c2t11d0
 0.0   48.0    0.0  3424.3   0.0  35.0     0.0   729.1   0 100 c2t12d0


JCM drops back a fraction

 extended device statistics
 r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
 0.0   65.7    0.0  1385.0   0.0  14.5     0.0   220.8   0  90 c2t8d0
 0.9   68.4   39.8  1623.6   0.0  13.0     0.0   187.8   0  87 c2t9d0
 0.9   74.9   39.3  2054.6   0.0  16.7     0.0   219.6   0  94 c2t10d0
 0.9   70.3   39.3  1662.9   0.0  15.4     0.0   216.1   0  95 c2t11d0
 

[zfs-discuss] ZFS Roadmap - thoughts on expanding raidz / restriping / defrag

2007-12-17 Thread Ross
Hey folks,

Does anybody know if any of these are on the roadmap for ZFS, or have any idea 
how long it's likely to be before we see them (we're in no rush - late 2008 
would be fine with us, but it would be nice to know they're being worked on)?

I've seen many people ask for the ability to expand a raid-z pool by adding 
devices.  I'm wondering if it would be useful to work on a defrag / restriping 
tool to work hand in hand with this.

I'm assuming that when the functionality is available, adding a disk to a 
raid-z set will mean the existing data stays put, and new data is written 
across a wider stripe.  That's great for performance for new data, but not so 
good for the existing files.  Another problem is that you can't guarantee how 
much space will be added.  That will have to be calculated based on how much 
data you already have.

i.e.: if you have a simple raid-z of five 500GB drives, you would expect adding 
another drive to add 500GB of space.  However, if your pool is half full, you 
can only make use of 250GB of that space; the other 250GB is going to be wasted.

What I would propose to solve this is to implement a defrag / restripe utility 
as part of the raid-z upgrade process, making it a three step process:

 - New drive added to raid-z pool
 - Defrag tool begins restriping and defragmenting old data 
 - Once restripe complete, pool reports the additional free space

There are some limitations to this.  You would maybe want to advise that 
expanding a raid-z pool should only be done with a reasonable amount of free 
disk space, and that it may take some time.  It may also be beneficial to add 
the ability to add multiple disks in one go.

However, if it works it would seem to add several benefits:
 - Raid-z pools can be expanded
 - ZFS gains a defrag tool
 - ZFS gains a restriping tool
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread James C. McPherson
Robert Milkowski wrote:
 Hello James,
 
 Sunday, December 16, 2007, 9:54:18 PM, you wrote:
 
 JCM hi Frank,
 
 JCM there is an interesting pattern here (at least, to my
 JCM untrained eyes) - your %b starts off quite low:

 JCM All of which, to me, look like you're filling a buffer
 JCM or two.
 
 JCM I don't recall the config of your zpool, but if the
 JCM devices are disks that are direct or san-attached, I
 JCM would be wondering about their outstanding queue depths.
 
 JCM I think it's time to break out some D to find out where
 JCM in the stack the bottleneck(s) really are.
 Maybe he could try to limit the number of queued requests per disk in ZFS to
 something smaller than the default of 35 (maybe even down to 1?)

Hi Robert,
yup, that's on my list of things for Frank to try. I've
asked for a bit more config information though so we can
get a bit of clarity on that front first.
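
For reference, the per-disk queue limit Robert refers to is presumably the
zfs_vdev_max_pending tunable, whose default was 35 at the time.  A minimal
sketch of how one might lower it for a test run; treat the name and values as
assumptions to verify against your build before using:

  # temporary, on the live system (takes effect immediately):
  echo 'zfs_vdev_max_pending/W 0t10' | mdb -kw

  # persistent, via /etc/system (takes effect at next boot):
  echo 'set zfs:zfs_vdev_max_pending = 10' >> /etc/system

Dropping it toward 1 serializes I/O per device, which helps show whether deep
queues are behind the very high asvc_t values in the iostat output above.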



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Roadmap - thoughts on expanding raidz / restriping / defrag

2007-12-17 Thread Jeff Bonwick
In short, yes.  The enabling technology for all of this is something
we call bp rewrite -- that is, the ability to rewrite an existing
block pointer (bp) to a new location.  Since ZFS is COW, this would
be trivial in the absence of snapshots -- just touch all the data.
But because a block may appear in many snapshots, there's more to it.
It's not impossible, just a bit tricky... and we're working on it.

Once we have bp rewrite, many cool features will become available as
trivial applications of it: on-line defrag, restripe, recompress, etc.

Jeff

On Mon, Dec 17, 2007 at 02:29:14AM -0800, Ross wrote:
 Hey folks,
 
 Does anybody know if any of these are on the roadmap for ZFS, or have any 
 idea how long it's likely to be before we see them (we're in no rush - late 
 2008 would be fine with us, but it would be nice to know they're being worked 
 on)?
 
 I've seen many people ask for the ability to expand a raid-z pool by adding 
 devices.  I'm wondering if it would be useful to work on a defrag / 
 restriping tool to work hand in hand with this.
 
 I'm assuming that when the functionality is available, adding a disk to a 
 raid-z set will mean the existing data stays put, and new data is written 
 across a wider stripe.  That's great for performance for new data, but not so 
 good for the existing files.  Another problem is that you can't guarantee how 
 much space will be added.  That will have to be calculated based on how much 
 data you already have.
 
 i.e.: if you have a simple raid-z of five 500GB drives, you would expect 
 adding another drive to add 500GB of space.  However, if your pool is half 
 full, you can only make use of 250GB of that space; the other 250GB is going 
 to be wasted.
 
 What I would propose to solve this is to implement a defrag / restripe 
 utility as part of the raid-z upgrade process, making it a three step process:
 
  - New drive added to raid-z pool
  - Defrag tool begins restriping and defragmenting old data 
  - Once restripe complete, pool reports the additional free space
 
 There are some limitations to this.  You would maybe want to advise that 
 expanding a raid-z pool should only be done with a reasonable amount of free 
 disk space, and that it may take some time.  It may also be beneficial to add 
 the ability to add multiple disks in one go.
 
 However, if it works it would seem to add several benefits:
  - Raid-z pools can be expanded
  - ZFS gains a defrag tool
  - ZFS gains a restriping tool
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Roch - PAE


dd uses a default block size of 512B.  Does this map to your
expected usage?  When I quickly tested the CPU cost of small
reads from cache, I did see that ZFS was more costly than UFS
up to a crossover between 8K and 16K.  We might need a more
comprehensive study of that (data in/out of cache, different
recordsize & alignment constraints).  But for small
syscalls, I think we might need some work in ZFS to make it
CPU efficient.

So first, does a small sequential write to a large file
match an interesting use case?
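
As a quick cross-check on the block-size question, it may be worth repeating
the same transfer with larger write sizes and comparing throughput; a small
sketch reusing the paths from the original test (the output file names are
just examples):

  dd if=./file.tmp of=/home/fpz/file-8k.tmp bs=8k
  dd if=./file.tmp of=/home/fpz/file-128k.tmp bs=128k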


-r

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Roadmap - thoughts on expanding raidz / restriping / defrag

2007-12-17 Thread Robert Milkowski
Hello Jeff,

Monday, December 17, 2007, 10:42:18 AM, you wrote:

JB In short, yes.  The enabling technology for all of this is something
JB we call bp rewrite -- that is, the ability to rewrite an existing
JB block pointer (bp) to a new location.  Since ZFS is COW, this would
JB be trivial in the absence of snapshots -- just touch all the data.
JB But because a block may appear in many snapshots, there's more to it.
JB It's not impossible, just a bit tricky... and we're working on it.

JB Once we have bp rewrite, many cool features will become available as
JB trivial applications of it: on-line defrag, restripe, recompress, etc.



Cool.
Do you have some estimates on time frames? Last time it was said to be
late this year...

-- 
Best regards,
 Robert Milkowski                    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Query related to ZFS Extended Attributes

2007-12-17 Thread sudarshan sridhar
I would like to know the method of getting/setting Extended Attributes (EAs) 
for files and directories. 
  Also, I would like to know whether there is any difference in getting/setting 
EAs from a UFS filesystem. I mean, can we use the same system calls, open, 
openat etc., to extract EA information?
   
  If there are any new APIs related to extracting EAs on the ZFS file system, 
please share.
  Also, are there any header files or libraries that need to be included in a 
program that gets/sets EAs for ZFS?
   
  thanks & regards,
  sridhar.

   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Query related to ZFS Extended Attributes

2007-12-17 Thread Darren J Moffat
sudarshan sridhar wrote:
 I would like to know the method of getting/setting Extended Attributes 
 (EAs) for files and directories.

From the application layer: openat(2).

 Also, I would like to know whether there is any difference in getting/setting 
 EAs from a UFS filesystem. I mean, can we use the same system calls, open, 
 openat etc., to extract EA information?

Correct, the same system calls.

 If there are any new APIs related to extracting EAs on the ZFS file system, 
 please share.

Not that I am aware of.

 Also, are there any header files or libraries that need to be included in a 
 program that gets/sets EAs for ZFS?

Only those indicated on the openat(2) man page.
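
For quick experimentation from the shell, runat(1) operates in the same
extended attribute namespace that openat(2) exposes programmatically; a small
hedged sketch (file and attribute names are made up):

  # create an attribute on file.txt, then list and read it back
  touch file.txt
  runat file.txt cp /etc/motd myattr
  runat file.txt ls -l
  runat file.txt cat myattr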

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Clearing partition/label info

2007-12-17 Thread Al Slater
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

What is the quickest way of clearing the label information on a disk
that has been previously used in a zpool?

regards

- --
Al Slater

Technical Director
SCL

Phone : +44 (0)1273 07
Fax   : +44 (0)1273 01
email : [EMAIL PROTECTED]

Stanton Consultancy Ltd
Pavilion House, 6-7 Old Steine, Brighton, East Sussex, BN1 1EJ
Registered in England Company number: 1957652 VAT number: GB 760 2433 55
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHZoluz4fTOFL/EDYRAnr5AJ4ie+xFNCi6gA5HLZ8IqI1wHItEEwCgj0ru
EwSc9B16io3kBz2wS0LGoEQ=
=eaZc
-END PGP SIGNATURE-

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Frank Penczek
Hi,

On Dec 17, 2007 10:37 AM, Roch - PAE [EMAIL PROTECTED] wrote:


 dd uses a default block size of 512B.  Does this map to your
 expected usage?  When I quickly tested the CPU cost of small
 reads from cache, I did see that ZFS was more costly than UFS
 up to a crossover between 8K and 16K.  We might need a more
 comprehensive study of that (data in/out of cache, different
 recordsize & alignment constraints).  But for small
 syscalls, I think we might need some work in ZFS to make it
 CPU efficient.

 So first, does a small sequential write to a large file
 match an interesting use case?

The pool holds home directories so small sequential writes to one
large file present one of a few interesting use cases.
The performance is equally disappointing for many (small) files
like compiling projects in svn repositories.

Cheers,
  Frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing partition/label info

2007-12-17 Thread Victor Engle
Hi Al,

That depends on whether you want to go back to a VTOC/SMI label or
keep the EFI label created by ZFS. To keep the EFI label just
repartition and use the partitions as desired. If you want to go back
to a VTOC/SMI label you have to run format -e and then relabel the
disk and select SMI.

Be sure to run 'zpool destroy poolname' before relabeling a LUN used for ZFS.

To automatically recreate the default VTOC label you could incorporate
the following into a script and iterate over a list of disks.

1. Create a label.dat file with the following line in it...

label  0  y

2. Then execute the following format command...

format -e -m -f /tmp/label.dat cxtxdx

That should apply a default VTOC SMI label.

For x86 you may need to run the following before the format command...

/usr/sbin/fdisk -B cxtxdxp0
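
Putting the pieces together, a minimal sketch of the kind of wrapper script
described above; the disk names are placeholders, so double-check before
pointing it at real devices:

  #!/bin/sh
  # relabel each disk named on the command line with a default SMI/VTOC label
  echo 'label 0 y' > /tmp/label.dat
  for disk in "$@"; do
      # x86 only: lay down a default Solaris fdisk partition first
      /usr/sbin/fdisk -B /dev/rdsk/${disk}p0
      # apply the default label non-interactively
      format -e -m -f /tmp/label.dat ${disk}
  done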

Regards,
Vic


On Dec 17, 2007 9:36 AM, Al Slater [EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 What is the quickest way of clearing the label information on a disk
 that has been previously used in a zpool?

 regards

 - --
 Al Slater

 Technical Director
 SCL

 Phone : +44 (0)1273 07
 Fax   : +44 (0)1273 01
 email : [EMAIL PROTECTED]

 Stanton Consultancy Ltd
 Pavilion House, 6-7 Old Steine, Brighton, East Sussex, BN1 1EJ
 Registered in England Company number: 1957652 VAT number: GB 760 2433 55
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.7 (MingW32)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iD8DBQFHZoluz4fTOFL/EDYRAnr5AJ4ie+xFNCi6gA5HLZ8IqI1wHItEEwCgj0ru
 EwSc9B16io3kBz2wS0LGoEQ=
 =eaZc
 -END PGP SIGNATURE-

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Roch - PAE
Frank Penczek writes:
  Hi,
  
  On Dec 17, 2007 10:37 AM, Roch - PAE [EMAIL PROTECTED] wrote:
  
  
    dd uses a default block size of 512B.  Does this map to your
    expected usage?  When I quickly tested the CPU cost of small
    reads from cache, I did see that ZFS was more costly than UFS
    up to a crossover between 8K and 16K.  We might need a more
    comprehensive study of that (data in/out of cache, different
    recordsize & alignment constraints).  But for small
    syscalls, I think we might need some work in ZFS to make it
    CPU efficient.
   
    So first, does a small sequential write to a large file
    match an interesting use case?
  
  The pool holds home directories so small sequential writes to one
  large file present one of a few interesting use cases.

Can you be more specific here?

Do you have a body of applications that would do small
sequential writes, or one in particular?  Another
interesting piece of information is whether we expect those to be
allocating writes or overwrites (beware that some apps move the old
file out, then run allocating writes, then unlink the original
file).
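
If it helps to pin that down, both cases are easy to reproduce with dd; a
small sketch (the path is only an example):

  # allocating writes: create/extend a new file
  dd if=/dev/zero of=/tank/home/testfile bs=8k count=10000

  # overwrites in place: rewrite the same blocks without truncating
  dd if=/dev/zero of=/tank/home/testfile bs=8k count=10000 conv=notrunc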



  The performance is equally disappointing for many (small) files
  like compiling projects in svn repositories.
  

???

-r


  Cheers,
Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] HA-NFS AND HA-ZFS

2007-12-17 Thread Matthew C Aycock
We are currently running Sun Cluster 3.2 on Solaris 10u3. We are using UFS/VxVM 
4.1 as our shared file systems. However, I would like to migrate to HA-NFS on 
ZFS. Since there is no conversion process from UFS to ZFS other than copying, I 
would like to migrate on my own time. To do this I am planning to add a new 
zpool HAStoragePlus resource to my existing HA-NFS resource group. This way I 
can migrate data from my existing UFS to ZFS on my own time and the clients 
will not know the difference.

I made sure that the zpool was available on both nodes of the cluster. I then 
created a new HAStoragePlus resource for the zpool. I updated my NFS resource 
to depend on both HAStoragePlus resources. I added the two test file systems to 
the current dfstab.nfs-rs file.
I manually ran the shares and I was able to mount the new ZFS file system. 
However, once the monitor ran it re-shared (I guess), and now the ZFS-based 
filesystems are not available.

I read that you are not supposed to add the ZFS-based file systems to the 
FileSystemMountPoints property. Any ideas?
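
For what it's worth, my understanding is that a ZFS pool is handed to
HAStoragePlus through the Zpools property rather than FileSystemMountPoints; a
rough sketch of the Sun Cluster 3.2 commands, with resource-group, resource and
pool names made up:

  # HAStoragePlus resource that imports/exports the whole zpool with the group
  clresource create -g nfs-rg -t SUNW.HAStoragePlus \
      -p Zpools=tank hasp-zfs-rs

  # make the NFS resource depend on it as well as the existing UFS HASP resource
  clresource set -p Resource_dependencies=hasp-ufs-rs,hasp-zfs-rs nfs-rs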
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Rob Logan
  r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
  0.0   48.0    0.0  3424.6   0.0  35.0     0.0   728.9   0 100 c2t8d0

  That service time is just terrible!

Yea, that service time is unreasonable. Almost a second for each
command? And 35 more commands queued? (reorder = faster)

I had a server with similar service times, so I repaired
a replacement blade, and when I went to slide it in I noticed a
loud noise coming from the blade below it... I notified the Windows
person who owned it (it had been broken for some time),
and it was turned off... it was much better after that.

Vibration... check vibration.

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Frank Penczek
Hi,

On Dec 17, 2007 4:18 PM, Roch - PAE [EMAIL PROTECTED] wrote:
  
   The pool holds home directories so small sequential writes to one
   large file present one of a few interesting use cases.

 Can you be more specific here?

 Do you have a body of applications that would do small
 sequential writes, or one in particular?  Another
 interesting piece of information is whether we expect those to be
 allocating writes or overwrites (beware that some apps move the old
 file out, then run allocating writes, then unlink the original
 file).

Sorry, I'll try to be more specific.
The zpool contains home directories that are exported to client machines.
It is hard to predict what exactly users are doing, but one thing users do for
certain is check out software projects from our subversion server.  The
projects typically contain many source code files (thousands), and a build
process accesses all of them in the worst case.  That is what I meant by
"many (small) files like compiling projects" in my previous post.  The
performance for this case is ... hopefully improvable.

Now for sequential writes:
We don't have a specific application issuing sequential writes, but I can
think of at least a few cases where these writes may occur, e.g. dumps of
substantial amounts of measurement data or growing log files of applications.
In either case these would be mainly allocating writes.

Does this provide the information you're interested in?


Cheers,
  Frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-17 Thread Tom Buskey
Where do you get an 8 port SATA card that works with Solaris for around $100?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-17 Thread Will Murnane
http://www.wiredzone.com/xq/asp/ic.10016527/qx/itemdesc.htm

On Dec 17, 2007 2:01 PM, Tom Buskey [EMAIL PROTECTED] wrote:
 Where do you get an 8 port SATA card that works with Solaris for around $100?



 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing partition/label info

2007-12-17 Thread Nathan Kroenert
format -e

then from there, re-label the disk using an SMI label instead of EFI.
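
Roughly, from memory (the exact prompts may differ between releases):

  format -e
  (select the disk from the menu)
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0
  Ready to label disk, continue? y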

Cheers

Al Slater wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Hi,
 
 What is the quickest way of clearing the label information on a disk
 that has been previously used in a zpool?
 
 regards
 
 - --
 Al Slater
 
 Technical Director
 SCL
 
 Phone : +44 (0)1273 07
 Fax   : +44 (0)1273 01
 email : [EMAIL PROTECTED]
 
 Stanton Consultancy Ltd
 Pavilion House, 6-7 Old Steine, Brighton, East Sussex, BN1 1EJ
 Registered in England Company number: 1957652 VAT number: GB 760 2433 55
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.7 (MingW32)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
 
 iD8DBQFHZoluz4fTOFL/EDYRAnr5AJ4ie+xFNCi6gA5HLZ8IqI1wHItEEwCgj0ru
 EwSc9B16io3kBz2wS0LGoEQ=
 =eaZc
 -END PGP SIGNATURE-
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-17 Thread Jorgen Lundman


Shawn Ferry wrote:
 
 It is part of the shutdown process, you just need to stop crashing :)
 

That looks like a good idea on paper, but what other unforeseen 
side-effects will we get from not crashing?!


Apart from the one crash with quotacheck, it is currently running quite 
well.  It updates the quotas as you would expect, I can create new 
accounts at any time, and they appear for all clients.  I have done 
multiple rsyncs from the NetApp to exercise the x4500 as much as 
possible, although each one takes 12 hours.  Perhaps I should do more 
local & intensive tests as well.

At least there is one solution for us; now it is a matter of balancing 
the various numbers and coming to a decision.

Lund

-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs error listings

2007-12-17 Thread asa
Hello all, looking to get the master list of all the error codes/ 
messages which I could get back from doing bad things in zfs.

I am wrapping the zfs command in Python and want to be able to
correctly pick up on errors which are returned from certain operations.

I did a source code search on opensolaris.org for the text of some of  
the errors I know about, with no luck.  Are these scattered about or  
is there some errors.c file I don't know about?

Thanks in advance.

Asa
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs error listings

2007-12-17 Thread Eric Schrock
From a programming perspective, you can get the EZFS_* errno values from
libzfs.h.  However, these don't necessarily have a 1:1 correspondence
with error messages, which may be more informative, include more text,
etc.  If you search for callers of zfs_error() (which sets the 'action'
and errno when something fails) and zfs_error_aux() (which optionally
sets an extended 'reason' for when something fails) you'll get an idea
of all the error messages.
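
As a quick way to enumerate those from a checked-out ON source tree, something
like the following might do; the paths are assumptions based on the usual
usr/src layout:

  # EZFS_* error codes, from the public header
  grep -n 'EZFS_' usr/src/lib/libzfs/common/libzfs.h

  # call sites that set the 'action' and the extended 'reason' text
  grep -n 'zfs_error' usr/src/lib/libzfs/common/*.c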

- Eric

On Mon, Dec 17, 2007 at 06:12:38PM -0800, asa wrote:
 Hello all, looking to get the master list of all the error codes/ 
 messages which I could get back from doing bad things in zfs.
 
 I am wrapping the zfs command in Python and want to be able to
 correctly pick up on errors which are returned from certain operations.
 
 I did a source code search on opensolaris.org for the text of some of  
 the errors I know about, with no luck.  Are these scattered about or  
 is there some errors.c file I don't know about?
 
 Thanks in advance.
 
 Asa
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, FishWorks    http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-17 Thread Rob Windsor
Shawn Ferry wrote:

 It would be tempting to add the bootadm update-archive to the boot
 process, as I would rather have it come up half-assed, than not come  
 up at all.

 It is part of the shutdown process, you just need to stop crashing :)

I put a cron entry that does it manually every night.
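
For what it's worth, a hedged example of what such a crontab entry might look
like (the time of day is arbitrary):

  # root crontab: rebuild the boot archive every night at 03:00
  0 3 * * * /sbin/bootadm update-archive >/dev/null 2>&1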

It only took one crash after some FS work for me to come up with that 
solution.  :)

Rob++
-- 
|Internet: [EMAIL PROTECTED] __o
|Life: [EMAIL PROTECTED]_`\,_
|   (_)/ (_)
|They couldn't hit an elephant at this distance.
|  -- Major General John Sedgwick
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /usr/bin and /usr/xpg4/bin differences

2007-12-17 Thread Rob Windsor
KASTURI VENKATA SESHA SASIDHAR wrote:
 Hello,
 I am working on OpenSolaris bugs and need to change the code of 
 df in the above two folders.
 
 I would like to know why there are two df's with different options in the 
 respective folders: 
 /usr/bin/df is different from /usr/xpg4/bin/df!!
 
 Why is it so? What does this xpg4 represent?

man XPG4  ;)

Rob++
-- 
|Internet: [EMAIL PROTECTED] __o
|Life: [EMAIL PROTECTED]_`\,_
|   (_)/ (_)
|They couldn't hit an elephant at this distance.
|  -- Major General John Sedgwick
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pools: S10 6/06 - SXDE 9/07

2007-12-17 Thread Richard Elling
Jay Calaus wrote:
 Hello,

 # cat /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
   Use is subject to license terms.
  Assembled 09 June 2006

 I want to install Solaris Express Developer Edition 9/07.

 Will the SXDE 9/07 release recognize the ZFS pools created by S10 6/06 
 without any complications, or do I have to do something special to make 
 it work? 

It will work just fine. Enjoy!
 -- richard
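
One optional follow-up once you are on the newer release, sketched here with a
placeholder pool name: zpool upgrade can move the pool to a newer on-disk
version, but doing so would keep it from being imported back on S10 6/06.

  # list any pools still at an older on-disk version
  zpool upgrade

  # show the versions this release supports
  zpool upgrade -v

  # only if you are sure you will not go back to S10 6/06:
  zpool upgrade tank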


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss