[zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross
I'm looking to create a new pool for storing CIFS files.  I know that I need to 
set casesensitivity=mixed, but it appears I can only set this option when using 
the zfs create command; I get told it's not a valid pool property if I try to 
use it with zpool create.

Is there no way to create a pool such that the default filesystem will have 
casesensitivity=mixed?  I'd rather not have to remember to apply this setting 
for every filesystem I create within this pool.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Darren J Moffat
Ross wrote:
 I'm looking to create a new pool for storing CIFS files.  I know that I need 
 to set casesensitivity=mixed, but appears I can only set this option when 
 using the zfs create command, I get told it's not a valid pool property if 
 I try to use it with zpool create.
 
 Is there no way to create a pool such that the default filesystem will have 
 casesensitivity=mixed?  I'd rather not have to remember to apply this setting 
 for every filesystem I create within this pool.

# zpool create -O casesensitivity=mixed foo c1d1 c2d2
# zfs get casesensitivity foo
NAME  PROPERTY VALUESOURCE
foo   casesensitivity  mixed-

Note the use of -O rather than -o.  -O is for setting dataset properties 
on the top level dataset, -o is for setting pool properties.
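
The two flags can also be combined in one command, e.g. (a made-up pool on 
made-up devices, purely to show the shape of it):

# zpool create -o autoreplace=on -O casesensitivity=mixed -O compression=on \
    tank mirror c1d1 c2d2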

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Darren J Moffat
Ross wrote:
 Good god.  Talk about non intuitive.  Thanks Darren!

Why isn't that intuitive ?  It is even documented in the man page.

  zpool create [-fn] [-o property=value] ... [-O file-system-
  property=value] ... [-m mountpoint] [-R root] pool vdev ...


 Is it possible for me to suggest a quick change to the zpool error message in 
 solaris?  Should I file that as an RFE?  I'm just wondering if the error 
 message could be changed to something like:
 property 'casesensitivity' is not a valid pool property.  Did you mean to 
 use -O?
 
 It's just a simple change, but it makes it obvious that it can be done, 
 instead of giving the impression that it's not possible.

Feel free to log the RFE in defect.opensolaris.org.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] Cannot Mirror RPOOL, Can't Label Disk to SMI

2009-02-04 Thread Handojo
Dear Candy,

This is the log of zpool status, along with partition of c3d0 and c4d0

1. It appears to me that once I destroy the partition of c4d0 and recreate it 
again, I get different slices in c4d0. I forgot which fdisk partition type I 
chose (it was either Solaris, Solaris2, or Unix System), and it gives the default 
layout like that.

I think I must manually create a slice, but if I look at c3d0, which slice should 
I mirror? root? backup?

2. And, for the mirror to work, does it have to reside on the same cylinders on the 
other disk?

3. For mirroring an IDE device (since this SATA drive is presented as IDE, 
which is visible in format), is it correct that it has to be a 
slice that is mirrored, and that it can't be the whole disk like what I did in 
Fedora 10 using software RAID at installation time?

Thank you for everyone's guidance

===
hando...@opensolaris:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c3d0s0ONLINE   0 0 0

errors: No known data errors
hando...@opensolaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 DEFAULT cyl 60797 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
   1. c4d0 SATA2 cyl 60797 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
Specify disk (enter its number): 1
selecting c4d0
Controller working list found
[disk formatted, defect list found]


FORMAT MENU:
disk   - select a disk
type   - select (define) a disk type
partition  - select (define) a partition table
current- describe the current disk
format - format and analyze the disk
fdisk  - run the fdisk program
repair - repair a defective sector
show   - translate a disk address
label  - write label to the disk
analyze- surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save   - save new disk/partition definitions
volname- set 8-character volume name
!cmd - execute cmd, then return
quit
format> partition


PARTITION MENU:
0  - change `0' partition
1  - change `1' partition
2  - change `2' partition
3  - change `3' partition
4  - change `4' partition
5  - change `5' partition
6  - change `6' partition
7  - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!cmd - execute cmd, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 60797 + 2 (reserved cylinders)

Part  TagFlag Cylinders SizeBlocks
  0 unassignedwm   00 (0/0/0) 0
  1 unassignedwm   00 (0/0/0) 0
  2 backupwu   0 - 60796  465.73GB(60797/0/0) 976703805
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6 unassignedwm   00 (0/0/0) 0
  7 unassignedwm   00 (0/0/0) 0
  8   bootwu   0 - 07.84MB(1/0/0) 16065
  9 alternateswm   1 - 2   15.69MB(2/0/0) 32130

format> disk


AVAILABLE DISK SELECTIONS:
   0. c3d0 DEFAULT cyl 60797 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
   1. c4d0 SATA2 cyl 60797 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
Specify disk (enter its number)[1]: 0
selecting c3d0
NO Alt slice
No defect list found
[disk formatted, no defect list found]
/dev/dsk/c3d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
format> partition


PARTITION MENU:
0  - change `0' partition
1  - change `1' partition
2  - change `2' partition
3  - change `3' partition
4  - change `4' partition
5  - change `5' partition
6  - change `6' partition
7  - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!cmd - execute cmd, then 

Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross Smith
It's not intuitive because when you know that -o sets options, an
error message saying that it's not a valid property makes you think
that it's not possible to do what you're trying to do.

Documented and intuitive are very different things.  I do appreciate
that the details are there in the manuals, but for items like this
where it's very easy to pick the wrong one, it helps if the commands
can work with you.

The difference between -o and -O is pretty subtle; I just think that
an extra sentence in the error message could save a lot of frustration
when people get mixed up.

Ross



On Wed, Feb 4, 2009 at 11:14 AM, Darren J Moffat
darr...@opensolaris.org wrote:
 Ross wrote:

 Good god.  Talk about non intuitive.  Thanks Darren!

 Why isn't that intuitive ?  It is even documented in the man page.

 zpool create [-fn] [-o property=value] ... [-O file-system-
 property=value] ... [-m mountpoint] [-R root] pool vdev ...


 Is it possible for me to suggest a quick change to the zpool error message
 in solaris?  Should I file that as an RFE?  I'm just wondering if the error
 message could be changed to something like:
 property 'casesensitivity' is not a valid pool property.  Did you mean to
 use -O?

 It's just a simple change, but it makes it obvious that it can be done,
 instead of giving the impression that it's not possible.

 Feel free to log the RFE in defect.opensolaris.org.

 --
 Darren J Moffat

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Data loss bug - sidelined??

2009-02-04 Thread Ross
In August last year I posted this bug, a brief summary of which would be that 
ZFS still accepts writes to a faulted pool, causing data loss, and potentially 
silent data loss:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932

There have been no updates to the bug since September, and nobody seems to be 
assigned to it.  Can somebody let me know what's happening with this, please?

And yes, this is definitely accepting writes that it shouldn't.  For over an 
hour I was able to carry out writes to a pool where *all* of its drives had 
been *physically removed* from the server.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Michael McKnight
Hello everyone,

I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for 
offsite storage.  In many cases, the snapshots greatly exceed the 8GB I can 
stuff onto a single DVD-DL.

In order to make this work, I have used the split utility to break the images 
into smaller, fixed-size chunks that will fit onto a DVD.  For example:

#split -b8100m ./mypictures.zfssnap  mypictures.zfssnap.split.

This gives me a set of files like this:

7.9G mypictures.zfssnap.split.aa
7.9G mypictures.zfssnap.split.ab
7.9G mypictures.zfssnap.split.ac
7.9G mypictures.zfssnap.split.ad
7.9G mypictures.zfssnap.split.ae
7.9G mypictures.zfssnap.split.af
6.1G mypictures.zfssnap.split.ag

I use the following command to convert them back into a single file:
#cat mypictures.zfssnap.split.a[a-g] > testjoin

But when I compare the checksum of the original snapshot to that of the 
rejoined snapshot, I get a different result:

#cksum 2008.12.31-2358--pictures.zfssnap
308335278   57499302592   mypictures.zfssnap
#cksum testjoin 
278036498   57499302592   testjoin

And when I try to restore the filesystem, I get the following failure:
#zfs recv pool_01/test < ./testjoin
cannot receive new filesystem stream: invalid stream (checksum mismatch)

Which makes sense given the different checksums reported by the cksum command 
above.

The question I have is, what can I do?  My guess is that there is some 
ascii/binary conversion issue that the split and cat commands are 
introducing into the restored file, but I'm at a loss as to exactly what is 
happening and how to get around it.

If anyone out there has a solution to my problem, or a better suggestion on how 
to accomplish the original goal, please let me know.

Thanks to all in advance for any help you may be able to offer.

-Michael
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread David Dyer-Bennet

On Wed, February 4, 2009 05:14, Darren J Moffat wrote:
 Ross wrote:
 Good god.  Talk about non intuitive.  Thanks Darren!

 Why isn't that intuitive ?  It is even documented in the man page.

   zpool create [-fn] [-o property=value] ... [-O file-system-
   property=value] ... [-m mountpoint] [-R root] pool vdev ...

Well, I hadn't previously thought about the fact that the zpool create
command creates both a pool *and a filesystem*.  So the fact that
casesensitivity is a filesystem property meant to me that it couldn't be
used on the zpool command.  I wouldn't have been looking at the man page
to find the -O switch, because it's obvious from the structure that it
won't be there -- zpool manipulates pools, zfs manipulates filesystems.

So I'm not completely sure "non-intuitive" is how I'd describe it; but
it's off in a weird corner I hadn't even noticed existed.

So I've learned something; I wonder if I'll remember it long enough to do
any good?

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding a SDcard as zfs cache (L2ARC?) on a laptop?

2009-02-04 Thread Cindy . Swearingen
Jean-Paul,

Our goofy disk formatting is tripping you up...

Put the disk space of c8t0d0 in c8t0d0s0 and try the
zpool add syntax again. If you need help with the
format syntax, let me know.

This command syntax should have complained:

pfexec zpool add rpool cache /dev/rdsk/c8t0d0

See the zpool syntax below for pointers.

Cindy

# zpool add rpool cache c0t1d0s0
# zpool iostat -v rpool
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       11.4G  22.4G      0      0  5.86K     73
  c0t0d0s0  11.4G  22.4G      0      0  5.86K     73
cache           -      -      -      -      -      -
  c0t1d0s0  47.8M  33.9G      0     53  18.1K  6.38M
----------  -----  -----  -----  -----  -----  -----


Jean-Paul Rivet wrote:
Anyway, you could try simply creating standard FDISK/Solaris/vtoc 
partitioning on the SD card, with all the free space contained in one 
slice, and give that slice to ZFS.
 
 
 This is what I've done so far.
 
 fdisk - 
 
  Total disk size is 1943 cylinders
  Cylinder size is 4096 (512 byte) blocks
 
Cylinders
   Partition   StatusType  Start   End   Length%
   =   ==  =   ===   ==   ===
   1 Solaris2  1  19421942100
 
 partition p
 Volume:  cache
 Current partition table (original):
 Total disk cylinders available: 1940 + 2 (reserved cylinders)
 
 Part  TagFlag CylindersSizeBlocks
   0 unassignedwm   0   0 (0/0/0)  0
   1 unassignedwm   0   0 (0/0/0)  0
   2 unassignedwu   1 - 19393.79GB(1939/0/0) 7942144
   3 unassignedwm   0   0 (0/0/0)  0
   4 unassignedwm   0   0 (0/0/0)  0
   5 unassignedwm   0   0 (0/0/0)  0
   6 unassignedwm   0   0 (0/0/0)  0
   7 unassignedwm   0   0 (0/0/0)  0
   8   bootwu   0 -02.00MB(1/0/0)   4096
   9 unassignedwm   0   0 (0/0/0)  0
 
 partition 
 
 I then tried numerous variations of the command and kept getting errors like 
 this one:
 
 $ pfexec zpool add rpool cache /dev/dsk/c8t0d0s2
 cannot add to 'rpool': invalid argument for this pool operation
 $
 
 Eventually I tried this one:
 
 $ pfexec zpool add rpool cache /dev/rdsk/c8t0d0
 
 I think its working but aren't sure because it still hasn't finished after 
 1hr15. The HD seems to be getting hit fairly hard and iostat shows:
 
 zpool iostat -v rpool
capacity operationsbandwidth
 pool used  avail   read  write   read  write
 --  -  -  -  -  -  -
 rpool   49.1G  14.4G 11 56   747K  3.88M
   c4t0d0s0  49.1G  14.4G 11 56   747K  3.88M
 --  -  -  -  -  -  -
 
 I'll let it run through the night and see what happens.
 
 Cheers, JP
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to casesensitivity=mixed

2009-02-04 Thread Ross
Well, after a quick test today I can confirm that this isn't possible.

You can do a send/receive to an existing filesystem, but you need to use the -F 
option, and it overwrites the receiving filesystem, giving it identical 
properties to the source.

Looks like this'll have to be a proper backup / restore job between separate 
filesystems.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] Cannot Mirror RPOOL, Can't Label Disk to SMI

2009-02-04 Thread Cindy . Swearingen
Handojo,

Use the format utility to put the disk space of c4d0 into c4d0s0
and try the zpool attach syntax again, like this:

# zpool attach rpool c3d0s0 c4d0s0

Let the newly added disk resilver by monitoring with zpool status.
Then, install the bootblocks on the newly added disk, like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0
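
As for which slices to create on c4d0: if the two disks really have identical 
geometry, one common shortcut (assuming c4d0 already carries an SMI/VTOC label) 
is to copy c3d0's label across before the attach step above:

# prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c4d0s2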

Cindy

Handojo wrote:
 Dear Candy,
 
 This is the log of zpool status, along with partition of c3d0 and c4d0
 
 1. It appears to me that once I destroy the partition of c4d0 and recreate it 
 again, I get different slices in c4d0. I forgot which fdisk partition I 
 chose, it is either Solaris, Solaris2, or Unix System, and it gives the 
 default location like that
 
 I think I must manually create slice, but If I look at c3d0, which slice 
 should I mirror ? root ? backup ?
 
 2. And, for mirror to works, does it have to reside in the same cylinder in 
 the other disk ?
 
 3. For mirroring an IDE device ( since this SATA drive is translated into 
 IDE, which is visible if we see in format ), Is it correct that it have to 
 be slice that is being mirrored, and can't be the whole disk like what I did 
 in Fedora 10, using software raid at the installation time ?
 
 Thank you for everyone's guidance
 

Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Fajar A. Nugraha
On Wed, Feb 4, 2009 at 6:19 PM, Michael McKnight
michael_mcknigh...@yahoo.com wrote:
 #split -b8100m ./mypictures.zfssnap  mypictures.zfssnap.split.
 But when I compare the checksum of the original snapshot to that of the 
 rejoined snapshot, I get a different result:

 #cksum 2008.12.31-2358--pictures.zfssnap
 308335278   57499302592   mypictures.zfssnap
 #cksum testjoin
 278036498   57499302592   testjoin

 The question I have is, what can I do?  My guess is that there is some 
 ascii/binary conversion issue that the split and cat commands are 
 introducing into the restored file, but I'm at a loss as to exactly what is 
 happening and how to get around it.

Here's a workaround: try using something that can split and merge
reliably. 7z should work, and it's already included on OpenSolaris.

Something like

7z a -v8100m -ttar mypictures.zfssnap.7z mypictures.zfssnap

should be very fast (using no compression), while

7z a -v8100m mypictures.zfssnap.7z mypictures.zfssnap

should save a lot of space using the default 7z compression algorithm.
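
To put the pieces back together later, extracting from the first volume should 
be enough (assuming the .001, .002, ... volumes are all in the same directory):

7z x mypictures.zfssnap.7z.001

and the rejoined file can then be fed to zfs recv as before.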

Regards,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross
Good god.  Talk about non intuitive.  Thanks Darren!

Is it possible for me to suggest a quick change to the zpool error message in 
solaris?  Should I file that as an RFE?  I'm just wondering if the error 
message could be changed to something like:
property 'casesensitivity' is not a valid pool property.  Did you mean to use 
-O?

It's just a simple change, but it makes it obvious that it can be done, instead 
of giving the impression that it's not possible.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Should I report this as a bug?

2009-02-04 Thread Kyle McDonald
I jumpstarted my machine with sNV b106, and installed with ZFS root/boot.
It left me at a shell prompt in the JumpStart environment, with my ZFS 
root on /a.

I wanted to try out some things that I planned on scripting for the 
JumpStart to run; one of these was creating a new ZFS pool from the 
remaining disks. I looked at the zpool create man page, and saw that it 
had a -R altroot option, and the exact same thing had just worked for 
me with 'dladm aggr-create', so I thought I'd give that a try.

If the machine had been booted normally, my ZFS root would have been /, 
and a 'zpool create zdata0 ...' would have defaulted to mounting the new 
pool as /zdata0 right next to my ZFS root pool /zroot0. So I expected 
'zpool create -R /a zdata0 ...' to set the default mountpoint for the 
pool to /zdata0 with a temporary altroot=/a.

I gave it a try, and while it created the pool it failed to mount it at 
all. It reported that /a wasn't empty.

'zpool list' and 'zpool get all' show the altroot=/a. But 'zfs get 
all zdata0' shows the mountpoint=/a also, not the default of /zdata0.

Am I expecting the wrong thing here? or is this a bug?
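
For what it's worth, one workaround I can think of, going by the zpool create 
synopsis ([-m mountpoint] [-R root]), is to give the mountpoint explicitly 
rather than relying on the default (device names below are made up):

# zpool create -R /a -m /zdata0 zdata0 c1t2d0 c1t3d0

or to reset it afterwards with 'zfs set mountpoint=/zdata0 zdata0'. The 
question of what the default should have been still stands.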

 -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding a SDcard as zfs cache (L2ARC?) on a laptop?

2009-02-04 Thread Jean-Paul Rivet
 Anyway, you could try simply creating standard FDISK/Solaris/vtoc
 partitioning on the SD card, with all the free space contained in one
 slice, and give that slice to ZFS.

This is what I've done so far.

fdisk - 

 Total disk size is 1943 cylinders
 Cylinder size is 4096 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
          1                 Solaris2          1  1942    1942    100

partition> p
Volume:  cache
Current partition table (original):
Total disk cylinders available: 1940 + 2 (reserved cylinders)

Part  TagFlag CylindersSizeBlocks
  0 unassignedwm   0   0 (0/0/0)  0
  1 unassignedwm   0   0 (0/0/0)  0
  2 unassignedwu   1 - 19393.79GB(1939/0/0) 7942144
  3 unassignedwm   0   0 (0/0/0)  0
  4 unassignedwm   0   0 (0/0/0)  0
  5 unassignedwm   0   0 (0/0/0)  0
  6 unassignedwm   0   0 (0/0/0)  0
  7 unassignedwm   0   0 (0/0/0)  0
  8   bootwu   0 -02.00MB(1/0/0)   4096
  9 unassignedwm   0   0 (0/0/0)  0

partition 

I then tried numerous variations of the command and kept getting errors like 
this one:

$ pfexec zpool add rpool cache /dev/dsk/c8t0d0s2
cannot add to 'rpool': invalid argument for this pool operation
$

Eventually I tried this one:

$ pfexec zpool add rpool cache /dev/rdsk/c8t0d0

I think it's working but I'm not sure, because it still hasn't finished after 
1hr15. The HD seems to be getting hit fairly hard, and iostat shows:

zpool iostat -v rpool
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       49.1G  14.4G     11     56   747K  3.88M
  c4t0d0s0  49.1G  14.4G     11     56   747K  3.88M
----------  -----  -----  -----  -----  -----  -----

I'll let it run through the night and see what happens.

Cheers, JP
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Send Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Tony Galway
I am trying to keep a file system (actually quite a few) in sync across two 
systems for DR purposes, but I am encountering something that I find strange. 
Maybe it's not strange and I just don't understand, but I will pose it to you 
fine people to help answer my question. This is all scripted, but I have pulled 
out the relevant commands for your reference:

I have a file system on localnode and I create a snapshot called NMINUS1, which 
I then send to my remotenode.

zfs snapshot GT/t...@nminus1
zfs send GT/t...@nminus1 | ssh remotenode zfs receive GT/t...@nminus1

This works properly. I then get the mountpoint property of the filesystem on 
the localnode and, using that, set it to the same on the remote node:

mp=$(zfs get -Ho value mountpoint GT/test)
ssh remotenode zfs set mountpoint=$mp GT/test

Again, this works fine and completes my initial setup.

From that point onwards, I want to send an incremental snapshot on a, say, 
nightly basis. So I create a new snapshot (NSNAP), send that across, and then 
remove the old snap and rename the new to NMINUS1 ... so

zfs snapshot GT/t...@nsnap
zfs send -i NMINUS1 GT/t...@nsnap | ssh remotenode zfs receive GT/test

-- On both nodes
zfs destroy GT/t...@nminus1
zfs rename GT/t...@nsnap GT/t...@nminus1

Now everything works fine unless I perform a simple 'ls' on the filesystem on 
the remote node. On the local node I can modify the contents of GT/test at any 
time, add or remove files, etc. and when I send the incremental snapshot to the 
remote node, it completes properly, and I can do this as many times as I want, 
but as soon as I issue that 

# ls /GT/test

on the remote node, the next time I try to send an incremental snapshot I get 
the following error:

# zfs send -i NMINUS1 GT/t...@nsnap | ssh remotenode zfs receive GT/test

cannot receive incremental stream: destination GT/ahg has been modified
since most recent snapshot

Other than possibly modifying the access time, what has been changed in the 
snapshot that causes this problem? One item of note (or not): one system is 
SPARC, one is AMD based.

Thanks for any ideas.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross
Yeah, I knew that zpool creates a root filesystem (since it's listed in zfs 
list), but I also knew these properties had to be set on creation, not after, 
so I figured zpool -o was the way to do it.

It completely threw me when zpool -o said it wasn't a valid property; I'd never 
have thought to look for a different option from -o.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Send Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Greg Mason
Tony,

I believe you want to use zfs recv -F to force a rollback on the 
receiving side.

I'm wondering if your ls is updating the atime somewhere, which would 
indeed be a change...
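
A sketch of the nightly step with the rollback forced (dataset names here are 
placeholders for the GT/test ones above):

zfs snapshot pool/fs@NSNAP
zfs send -i NMINUS1 pool/fs@NSNAP | ssh remotenode zfs receive -F pool/fs

Setting atime=off on the receiving filesystem (zfs set atime=off pool/fs) 
should also stop a plain ls from dirtying it, but -F is the safer guard.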

-Greg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Brad
If you have an older Solaris release using ZFS and Samba, and you upgrade to a 
version with CIFS support, how do you ensure the file systems/pools have 
casesensitivity=mixed?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Toby Thain

On 4-Feb-09, at 6:19 AM, Michael McKnight wrote:

 Hello everyone,

 I am trying to take ZFS snapshots (ie. zfs send) and burn them to  
 DVD's for offsite storage.  In many cases, the snapshots greatly  
 exceed the 8GB I can stuff onto a single DVD-DL.

 In order to make this work, I have used the split utility ...
 I use the following command to convert them back into a single file:
 #cat mypictures.zfssnap.split.a[a-g]  testjoin

 But when I compare the checksum of the original snapshot to that of  
 the rejoined snapshot, I get a different result:

Tested your RAM lately?

--Toby



 -Michael
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Send Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Andrew Gabriel
Greg Mason wrote:
 Tony,
 
 I believe you want to use zfs recv -F to force a rollback on the 
 receiving side.
 
 I'm wondering if your ls is updating the atime somewhere, which would 
 indeed be a change...

Yes.

If you want to have a look around it, cd into the last snapshot and look 
around in there instead, where you can't modify anything.
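
For example, using the names from earlier in the thread (the .zfs directory is 
reachable by path even when snapdir=hidden keeps it out of listings):

# cd /GT/test/.zfs/snapshot/NMINUS1
# ls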

-- 
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross
You can check whether it's set with:
$ zfs get casesensitivity pool/filesystem

If you're using CIFS, you need that to return mixed or insensitive.  If it 
returns sensitive, it will cause you problems.

Unfortunately there's no way to change this setting on an existing filesystem, 
so if you do have the wrong setting you'll need to create a new filesystem and 
move your files over.  If you have enough disk space it's relatively easy to do 
with CIFS:

- Create a new filesystem on the server using zfs create -o 
casesensitivity=mixed 
- Share it under a new name: zfs set sharesmb=name=<newname> pool/filesystem
- Move all the files (I did a cut/paste with Windows Explorer since this 
preserves permissions, robocopy would probably work too, mv in Solaris might 
work but I've not tried that)
- Delete the original filesystem and change the name the new one is shared 
under with zfs set sharesmb=name=<oldname>
- Rename the new filesystem if needed with zfs rename (the whole sequence is 
sketched below)
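
A rough sketch of that sequence, with made-up names (pool tank, old filesystem 
tank/files shared as files, new one tank/files2):

# zfs create -o casesensitivity=mixed tank/files2
# zfs set sharesmb=name=files_tmp tank/files2
  (copy the data across over CIFS, robocopy, etc.)
# zfs destroy tank/files
# zfs set sharesmb=name=files tank/files2
# zfs rename tank/files2 tank/files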
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Send Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Tony Galway
Thanks ... the -F works perfectly, and provides a further benefit in that the 
client can mess with the file system as much as they want for testing purposes, 
but when it comes time to ensure it is synchronized each night, it will revert 
back to the previous state.

Thanks
-Tony
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Sharing over smb - permissions/access

2009-02-04 Thread Aaron
Hello, 
I have set up a fileserver using zfs and am able to see the share from my Mac.  
I am able to create/write to the share as well as read.  I've ensured that I 
have the same user and uid on both the server (OpenSolaris snv101b) and 
the Mac.  The root folder of the share is owned by root:mygroup.

When I write/create new files on the share, they get created, but the 
permissions are such:

-- 1 aaron staff14453522 2009-01-03 15:41 myFile.txt

aaron:staff is the user on the mac (leopard).  When I mounted the share, I used 
aaron as the user.

What I really would like is for this share to have newly written/created files 
be owned by root:mygroup.

I attempted setting ACLs, but the various options that I tried did not yield the 
correct result (even attempting to give everyone full access).

Is there something simple I'm missing (I hope)?

Thanks!  If there is any other information that I am not providing, please let 
me know.  I do not have an smb.conf on the server that I can see - all sharing 
is done through zfs sharesmb.

Aaron
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Frank Cusack
On February 4, 2009 8:39:13 AM -0800 Ross myxi...@googlemail.com wrote:
 Yeah, I knew that zpool creates a root filesystem (since it's listed in
 zfs list), but I also knew these properties had to be set on creation,
 not after, so I figured zpool -o was the way to do it.

sorry, which properties have to be set on creation and not after?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send over smbfs results in invalid record type on zfs receive

2009-02-04 Thread Bob Friesenhahn
On Wed, 4 Feb 2009, Fajar A. Nugraha wrote:

 On Wed, Feb 4, 2009 at 1:28 AM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 On Tue, 3 Feb 2009, Fajar A. Nugraha wrote:

 Just wondering, why didn't you compress it first? something like

 zfs send | gzip > backup.zfs.gz


 The 'lzop' compressor is much better for this since it is *much* faster.

 Sure, when enabling compression on zfs fs.

Note that in the user's command line, 'gzip' is simply replaced with 'lzop'. 
It has nothing to do with ZFS.  Of course, change the file extension to 
something like .lzo.  If the 'lzop' utility is not available, then it 
is necessary to install it.

In my testing I found that lzop was *much* faster than gzip and the 
result was maybe 15% less compressed.  It makes a huge difference when 
you are talking about a lot of data.
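
In other words, something like this (a sketch with made-up dataset and file 
names, assuming the lzop package is installed):

zfs send tank/fs@snap | lzop -c > backup.zfs.lzo
lzop -dc backup.zfs.lzo | zfs recv tank/fs_restored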

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Darren J Moffat
Frank Cusack wrote:
 On February 4, 2009 8:39:13 AM -0800 Ross myxi...@googlemail.com wrote:
 Yeah, I knew that zpool creates a root filesystem (since it's listed in
 zfs list), but I also knew these properties had to be set on creation,
 not after, so I figured zpool -o was the way to do it.
 
 sorry, which properties have to be set on creation and not after?

Those in the output of 'zfs get' that are marked as not editable but 
are marked as inheritable are the set-once property set.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Miles Nordin
 mm == Michael McKnight michael_mcknigh...@yahoo.com writes:

mm #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
mm #cat mypictures.zfssnap.split.a[a-g] > testjoin

mm But when I compare the checksum of the original snapshot to
mm that of the rejoined snapshot, I get a different result:

sounds fine.  I'm not sure why it's failing.

mm And when I try to restore the filesystem, I get the following
mm failure: #zfs recv pool_01/test < ./testjoin cannot receive
mm new filesystem stream: invalid stream (checksum mismatch)

however, aside from this problem you're immediately having, I think
you should never archive the output of 'zfs send'.  I think the
current warning on the wiki is not sufficiently drastic, but when I
asked for an account to update the wiki I got no answer.  Here are the
problems, again, with archiving 'zfs send' output:

 * no way to test the stream's integrity without receiving it.
   (meaning, to test a stream, you need enough space to store the
   stream being tested, plus that much space again.  not practical.)
   A test could possibly be hacked up, but because the whole ZFS
   software stack is involved in receiving, and is full of assertions
   itself, any test short of actual extraction wouldn't be a thorough
   test, so this is unlikely to change soon.

 * stream format is not guaranteed to be forward compatible with new
   kernels.  and versioning may be pickier than zfs/zpool versions.

 * stream is expanded _by the kernel_, so even if tar had a
   forward-compatibility problem, which it won't, you could
   hypothetically work around it by getting an old 'tar'.  For 'zfs
   send' streams you have to get an entire old kernel, and boot it on
   modern hardware, to get at your old stream.

 * supposed to be endian-independent, but isn't.

 * stream is ``protected'' from corruption in the following way: if a
   single bit is flipped anywhere in the stream, the entire stream and
   all incrementals descended from it become worthless.  It is
   EXTREMELY corruption-sensitive.  'tar' and zpool images both
   detect, report, work around, flipped bits.  The 'zfs send' idea is
   different: if there's corruption, the designers assume you can just
   restart the 'zfs send | zfs recv' until you get a clean go---what
   you most need is ability to atomically roll back the failed recv,
   which you do get.  You are not supposed to be archiving it!

 * unresolved bugs.  ``poisonous streams'' causing kernel panics when
   you receive them, 
http://www.opensolaris.org/jive/thread.jspa?threadID=81613tstart=0

The following things do not have these problems:

 * ZFS filesystems inside file vdev's (except maybe the endian
   problem.  and also the needs-whole-kernel problem, but mitigated by
   better forward-compatibility guarantees.)

 * tar files

In both alternatives you probably shouldn't use gzip on the resulting
file.  If you must gzip, it would be better to make a bunch of tar.gz
files, ex., one per user, and tar the result.  Maybe I'm missing some
magic flag, but I've not gotten gzip to be too bitflip-resilient.

The wiki cop-out is a nebulous ``enterprise backup ``Solution' ''.
Short of that you might make a zpool in a file with zfs compression
turned on and rsync or cpio or zfs send | zfs recv the data into it.
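
Roughly like this (sizes and paths made up; the point is only the shape of it):

# mkfile 8g /backup/vdev0
# zpool create -O compression=on backuppool /backup/vdev0
# zfs send mypool/pictures@snap | zfs recv backuppool/pictures
# zpool export backuppool

after which /backup/vdev0 is an ordinary file you can copy to the backup
medium, and later import again (zpool import -d /backup backuppool) and scrub
to check it.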

Or just use gtar like in the old days.  With some care you may even be
able to convince tar to write directly to the medium.  And when you're
done you can do a 'tar t' directly from medium also, to check it.  I'm
not sure what to do about incrementals.  There is a sort of halfass
incremental feature in gtar, but not like what ZFS gives.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Sharing over smb - permissions/access

2009-02-04 Thread Robert Thurlow
Aaron wrote:

 I have setup a fileserver using zfs and am able to see the share from my mac.
 I am able to create/write to the share as well as read.  I've ensured that I
 have the same user and uid on both the server (opensolaris snv101b) as well
 as the mac.  The root folder of the share is owned by root:mygroup.

 When I write/create new files on the share, they get created, but the
 permissions are such:

 -- 1 aaron staff14453522 2009-01-03 15:41 myFile.txt

 aaron:staff is the user on the mac (leopard).  When I mounted the share, I
 used aaron as the user.

 What I really would like is for this share to have newly written/created
 files be owned by root:mygroup.

This is a limitation on the part of the Mac 'smbfs' client.  We
use the same code base on Solaris.  The code is not currently
able to do I/O on behalf of multiple users; the user who was
authenticated at mount time owns all of the files.  Also, the
Mac client does not try to parse ACLs at all, so permission
mode bits and reported ownership are guesses done at the client.
If you set ACLs on the server, they should be enforced, even if
you can't see that from the Mac.  If the ownership or perm bits
are in your way on the Mac, you can use smbfs mount options to
get it to tell you different things, but there's no way to see
the truth until more code is written.
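
For example, something like this on the Mac side (server, share and modes are 
just placeholders) changes what the client reports:

mac$ mkdir -p ~/share
mac$ mount_smbfs -d 0775 -f 0664 //aaron@fileserver/share ~/share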

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread David Dyer-Bennet

On Wed, February 4, 2009 11:05, Ross wrote:
 You can check whether it's set with:
 $ zfs get casesensitivity pool/filesystem

 If you're using CIFS, you need that to return mixed or insensitive.
 If it returns sensitive, it will cause you problems.

It will?  What symptoms?

 Unfortunately there's no way to change this setting on an existing
 filesystem, so if you do have the wrong setting you'll need to create a
 new filesystem and move your files over.  If you have enough disk space
 it's relatively easy to do with CIFS:

 - Create a new filesystem on the server using zfs create -o
  casesensitivity=mixed 
 - Share it under a new name: zfs set sharesmb=name= pool/filesystem
 - Move all the files (I did a cut/paste with windows explorer since this
 preserves permissions, robocopy would probably work too, mv in solaris
 might but I've not tried that)

Doing it over the network is going to be disastrously slow.

I'd say cp -a is the way to go, but I'm guessing that's not supported
under Solaris (don't have my system handy right now).

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread David Dyer-Bennet

On Wed, February 4, 2009 12:01, Miles Nordin wrote:

  * stream format is not guaranteed to be forward compatible with new
kernels.  and versioning may be pickier than zfs/zpool versions.

Useful points, all of them.  This particular one also points out something
I hadn't previously thought about -- using zfs send piped through ssh (or
in some other way going from one system to another) is also sensitive to
this versioning issue.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross
Well for one, occasional 'file not found' errors when programs (or shortcuts) 
check whether a file exists if the case is wrong.

And you can expect more problems too.  Windows systems are case insensitive, so 
there's nothing stopping a program referring to one of its files as "file", 
"File" and "FILE".  They'll all refer to the same thing.  Under a case 
sensitive system however, those are three separate files, and there's no way to 
know what kinds of problems that could cause.

Suffice to say that while you might get away with running ZFS and CIFS in case 
sensitive mode for a bit, sooner or later it's going to go horribly wrong.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Miles Nordin
 r == Ross  myxi...@googlemail.com writes:

 r Suffice to say that while you might get away with running ZFS
 r and CIFS in case sensitive mode for a bit, sooner or later
 r it's going to go horribly wrong.

It will work okay with Samba, though.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Bob Friesenhahn
On Wed, 4 Feb 2009, Toby Thain wrote:
 In order to make this work, I have used the split utility ...
 I use the following command to convert them back into a single file:
 #cat mypictures.zfssnap.split.a[a-g]  testjoin

 But when I compare the checksum of the original snapshot to that of
 the rejoined snapshot, I get a different result:

 Tested your RAM lately?

Split is originally designed to handle text files.  It may have 
problems with binary files.  Due to these issues, long ago (1993) I 
wrote a 'bustup' utility which works on binary files.  I have not 
looked at it since then.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Sharing over smb - permissions/access

2009-02-04 Thread Aaron
Ok, thanks for the info - I was really pulling my hair out over this.  Would you 
know if sharing over nfs via zfs would fare any better?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Sharing over smb - permissions/access

2009-02-04 Thread Robert Thurlow
Aaron wrote:
 Ok, thanks for the info - I was really puling my hair out over this.  Would 
 you know if sharing over nfs via zfs would fare any better?

I am *quite* happy with the Mac NFS client, and use it against ZFS
files all the time.  It's worth the time to make sure you're using
the same numeric UIDs on both the client and server if you're using
the default authentication settings.
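
A minimal sketch of both sides, assuming a dataset named tank/share and a 
server reachable as fileserver (both placeholders):

server# zfs set sharenfs=rw tank/share
mac$ sudo mkdir -p /Volumes/share
mac$ sudo mount -t nfs fileserver:/tank/share /Volumes/share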

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Toby Thain

On 4-Feb-09, at 2:29 PM, Bob Friesenhahn wrote:

 On Wed, 4 Feb 2009, Toby Thain wrote:
 In order to make this work, I have used the split utility ...
 I use the following command to convert them back into a single file:
 #cat mypictures.zfssnap.split.a[a-g]  testjoin

 But when I compare the checksum of the original snapshot to that of
 the rejoined snapshot, I get a different result:

 Tested your RAM lately?

 Split is originally designed to handle text files.  It may have  
 problems with binary files.

Ouch, OK.

--Toby

   Due to these issues, long ago (1993) I wrote a 'bustup' utility  
 which works on binary files.  I have not looked at it since then.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/ 
 bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Sharing over smb - permissions/access

2009-02-04 Thread Aaron
Just tried sharing over NFS and it's a much improved experience.  I'll wait and 
see if performance becomes an issue, but this looks pretty good now.  The key 
being, as you mentioned, keeping the uids/gids in sync.

Thanks!

Aaron
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding a SDcard as zfs cache (L2ARC?) on a laptop?

2009-02-04 Thread Jean-Paul Rivet
 Put the disk space of c8t0d0 in c8t0d0s0 and try the
 zpool add syntax again. If you need help with the
 format syntax, let me know.
 
 This command syntax should have complained:
 
 pfexec zpool add rpool cache /dev/rdsk/c8t0d0
 
 See the zpool syntax below for pointers.
 
 Cindy

I've tried the following and get:

~$ pfexec zpool add rpool cache c8t0d0s0
cannot open '/dev/dsk/c8t0d0s0': No such device or address 
~$ 

Expected because s0 is defined as 0 bytes in the partition table I presume?

~$ pfexec zpool add rpool cache c8t0d0s2
cannot add to 'rpool': invalid argument for this pool operation  
~$ 

Should this have worked?

Have I partitioned the SD card correctly and am I trying to allocate the 
correct slice to zfs?

Thanks for your responses so far.
 
partition> p
Volume:  cache
Current partition table (original):
Total disk cylinders available: 1940 + 2 (reserved cylinders)

Part  TagFlag CylindersSizeBlocks
  0 unassignedwm   0   0 (0/0/0)  0
  1 unassignedwm   0   0 (0/0/0)  0
  2 unassignedwu   1 - 19393.79GB(1939/0/0) 7942144
  3 unassignedwm   0   0 (0/0/0)  0
  4 unassignedwm   0   0 (0/0/0)  0
  5 unassignedwm   0   0 (0/0/0)  0
  6 unassignedwm   0   0 (0/0/0)  0
  7 unassignedwm   0   0 (0/0/0)  0
  8   bootwu   0 -02.00MB(1/0/0)   4096
  9 unassignedwm   0   0 (0/0/0)  0

partition 

Cheers, JP

 # zpool add rpool cache c0t1d0s0
 # zpool iostat -v rpool
               capacity     operations    bandwidth
 pool         used  avail   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 rpool       11.4G  22.4G      0      0  5.86K     73
   c0t0d0s0  11.4G  22.4G      0      0  5.86K     73
 cache           -      -      -      -      -      -
   c0t1d0s0  47.8M  33.9G      0     53  18.1K  6.38M
 ----------  -----  -----  -----  -----  -----  -----

-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding a SDcard as zfs cache (L2ARC?) on a laptop?

2009-02-04 Thread Cindy . Swearingen
Jean-Paul,

Regarding your comments here:

Expected because s0 is defined as 0 bytes in the partition table I presume?

Yes, you need to put the disk space into s0 by using the format
utility. Using the modify option from format's partition menu is
probably the easiest way. Email me directly if you need the
steps.

 
  ~$ pfexec zpool add rpool cache c8t0d0s2
  cannot add to 'rpool': invalid argument for this pool operation 

  ~$
 
  Should this have worked?

No, because s2 has a special meaning with a VTOC (SMI) label.

After you fix the disk partitions, everything should just work. :-)
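
Roughly, the whole sequence should end up looking like this (the exact
cylinder numbers are whatever format reports for your card):

$ pfexec format c8t0d0
format> partition
partition> modify
  (pick the "All Free Hog" base table, give slice 0 all of the space,
   and label the disk when prompted)
partition> quit
format> quit
$ pfexec zpool add rpool cache c8t0d0s0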

Cindy


Jean-Paul Rivet wrote:
Put the disk space of c8t0d0 in c8t0d0s0 and try the
zpool add syntax again. If you need help with the
format syntax, let me know.

This command syntax should have complained:

pfexec zpool add rpool cache /dev/rdsk/c8t0d0

See the zpool syntax below for pointers.

Cindy
 
 
 I've tried the following and get:
 
 ~$ pfexec zpool add rpool cache c8t0d0s0
 cannot open '/dev/dsk/c8t0d0s0': No such device or address 
 ~$ 
 
 Expected because s0 is defined as 0 bytes in the partition table I presume?
 
 ~$ pfexec zpool add rpool cache c8t0d0s2
 cannot add to 'rpool': invalid argument for this pool operation  
 ~$ 
 
 Should this have worked?
 
 Have I partitioned the SD card correctly and am I trying to allocate the 
 correct slice to zfs?
 
 Thanks for your responses far.
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Miles Nordin
 tt == Toby Thain t...@telegraphics.com.au writes:

tt I know this was discussed a while back, but in what sense does
tt tar do any of those things? I understand that it is unlikely
tt to barf completely on bitflips, but won't tar simply silently
tt de-archive bad data?

yeah, I just tested it, and you're right.  I guess the checksums are
only for headers.  However, cpio does store checksums for files'
contents, so maybe it's better to use cpio than tar.  Just be careful
how you invoke it, because there are different cpio formats just like
there are different tar formats, and some might have no or weaker
checksum.

NetBSD 'pax' invoked as tar:
-8-
castrovalva:~$ dd if=/dev/zero of=t0 bs=1m count=1
1+0 records in
1+0 records out
1048576 bytes transferred in 0.022 secs (47662545 bytes/sec)
castrovalva:~$  tar cf t0.tar t0
castrovalva:~$ md5 t0.tar
MD5 (t0.tar) = 591a39a984f70fe3e44a5e13f0ac74b6
castrovalva:~$ tar tf t0.tar
t0
castrovalva:~$ dd of=t0.tar seek=$(( 512 * 1024 )) bs=1 conv=notrunc
asdfasdfasfs
13+0 records in
13+0 records out
13 bytes transferred in 2.187 secs (5 bytes/sec)
castrovalva:~$ md5 t0.tar
MD5 (t0.tar) = 14b3a9d851579d8331a0466a5ef62693
castrovalva:~$ tar tf t0.tar
t0
castrovalva:~$ tar xvf t0.tar
tar: Removing leading / from absolute path names in the archive
t0
tar: ustar vol 1, 1 files, 1054720 bytes read, 0 bytes written in 1 secs 
(1054720 bytes/sec)
castrovalva:~$ hexdump -C t0
  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
0007fe00  61 73 64 66 61 73 64 66  61 73 66 73 0a 00 00 00  |asdfasdfasfs|
0007fe10  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
0010
castrovalva:~$ 
-8-

GNU tar does the same thing.

NetBSD 'pax' invoked as cpio:
-8-
castrovalva:~$ dd if=/dev/zero of=t0 bs=1m count=1
1+0 records in
1+0 records out
1048576 bytes transferred in 0.018 secs (58254222 bytes/sec)
castrovalva:~$ cpio -H sv4cpio -o > t0.cpio
t0
castrovalva:~$ md5 t0.cpio
MD5 (t0.cpio) = d5128381e72ee514ced8ad10a5a33f16
castrovalva:~$ dd of=t0.cpio seek=$(( 512 * 1024 )) bs=1 conv=notrunc
asdfasdfasdf
13+0 records in
13+0 records out
13 bytes transferred in 1.461 secs (8 bytes/sec)
castrovalva:~$ md5 t0.cpio
MD5 (t0.cpio) = b22458669256da5bcb6c94948d22a155
castrovalva:~$ rm t0
castrovalva:~$ cpio -i < t0.cpio
cpio: Removing leading / from absolute path names in the archive
cpio: Actual crc does not match expected crc t0
-8-


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-04 Thread Richard Elling
Miles Nordin wrote:
 mm == Michael McKnight michael_mcknigh...@yahoo.com writes:
 

 mm #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
 mm #cat mypictures.zfssnap.split.a[a-g]  testjoin

 mm But when I compare the checksum of the original snapshot to
 mm that of the rejoined snapshot, I get a different result:

 sounds fine.  I'm not sure why it's failing.

 mm And when I try to restore the filesystem, I get the following
 mm failure: #zfs recv pool_01/test  ./testjoin cannot receive
 mm new filesystem stream: invalid stream (checksum mismatch)

 however, aside from this problem you're immediately having, I think
 you should never archive the output of 'zfs send'.  I think the
 current warning on the wiki is not sufficiently drastic, but when I
 asked for an account to update the wiki I got no answer.  Here are the
 problems, again, with archiving 'zfs send' output:

  * no way to test the stream's integrity without receiving it.
(meaning, to test a stream, you need enough space to store the
stream being tested, plus that much space again.  not practical.)
A test could possibly be hacked up, but because the whole ZFS
software stack is involved in receiving, and is full of assertions
itself, any test short of actual extraction wouldn't be a thorough
test, so this is unlikely to change soon.

  * stream format is not guaranteed to be forward compatible with new
kernels.  and versioning may be pickier than zfs/zpool versions.
   

Backward compatibility is achieved.

  * stream is expanded _by the kernel_, so even if tar had a
forward-compatibility problem, which it won't, you could
hypothetically work around it by getting an old 'tar'.  For 'zfs
send' streams you have to get an entire old kernel, and boot it on
modern hardware, to get at your old stream.
   

An enterprising community member could easily put together a
utility to do a verification.  All of the necessary code is readily
available.
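
In the meantime, a crude check is possible with nothing more than a
scratch pool on a file vdev; a rough sketch (paths and sizes are
invented, and it needs as much free space as the stream expands to):

# mkfile -n 20g /var/tmp/verify.img             # sparse backing file for a throwaway pool
# zpool create verify /var/tmp/verify.img
# zfs receive verify/check < /backup/mypictures.zfssnap   # a clean receive means the stream checksums held up
# zpool destroy verify; rm /var/tmp/verify.img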

  * supposed to be endian-independent, but isn't.
   

CR 6764193 was fixed in b105
http://bugs.opensolaris.org/view_bug.do?bug_id=6764193
Is there another?

  * stream is ``protected'' from corruption in the following way: if a
single bit is flipped anywhere in the stream, the entire stream and
all incrementals descended from it become worthless.  It is
EXTREMELY corruption-sensitive.  'tar' and zpool images both
detect, report, work around, flipped bits.  The 'zfs send' idea is
different: if there's corruption, the designers assume you can just
restart the 'zfs send | zfs recv' until you get a clean go---what
you most need is ability to atomically roll back the failed recv,
which you do get.  You are not supposed to be archiving it!
   

This is not completely accurate.  Snapshots which have already been
received completely remain intact.

  * unresolved bugs.  ``poisonous streams'' causing kernel panics when
you receive them, 
 http://www.opensolaris.org/jive/thread.jspa?threadID=81613&tstart=0

 The following things do not have these problems:

  * ZFS filesystems inside file vdev's (except maybe the endian
problem.  and also the needs-whole-kernel problem, but mitigated by
better forward-compatibility guarantees.)
   

Indeed, but perhaps you'll find the grace to file an appropriate RFE?

  * tar files

 In both alternatives you probably shouldn't use gzip on the resulting
 file.  If you must gzip, it would be better to make a bunch of tar.gz
 files, ex., one per user, and tar the result.  Maybe I'm missing some
 magic flag, but I've not gotten gzip to be too bitflip-resilient.

 The wiki cop-out is a nebulous ``enterprise backup ``Solution' ''.
   

Perhaps it would satisfy you to enumerate the market's Enterprise
Backup Solutions?  This might be helpful since Solaris does not
include such software, at least by my definition of Solaris.  So, the wiki
section Using ZFS With Enterprise Backup Solutions does in fact
enumerate them, and I don't see any benefit to repeating the enumeration.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Using_ZFS_With_Enterprise_Backup_Solutions

 Short of that you might make a zpool in a file with zfs compression
 turned on and rsync or cpio or zfs send | zfs recv the data into it.
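
e.g., very roughly (sizes, paths and the pool name here are invented):

# mkfile 100g /backup/vault.img                 # file on the backup medium to act as the vdev
# zpool create -O compression=on vault /backup/vault.img
# rsync -a /export/home /vault/                 # or cpio, or zfs send | zfs recv into it
# zpool scrub vault                             # end-to-end checksum pass before shelving it
# zpool export vault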

 Or just use gtar like in the old days.  With some care you may even be
 able to convince tar to write directly to the medium.  And when you're
 done you can do a 'tar t' directly from medium also, to check it.  I'm
 not sure what to do about incrementals.  There is a sort of halfass
 incremental feature in gtar, but not like what ZFS gives.
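
e.g., roughly (the tape device and paths are only illustrative):

# gtar -cvf /dev/rmt/0n /export/home            # write straight to the no-rewind tape device
# gtar -tvf /dev/rmt/0n                         # read it back: a full 'tar t' pass checks readability
# gtar -cvf /backup/home.0.tar --listed-incremental=/backup/home.snar /export/home   # level 0 plus state file
# gtar -cvf /backup/home.1.tar --listed-incremental=/backup/home.snar /export/home   # later runs pick up changes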
   

I suggest you consider an Enterprise Backup Solution.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Removing disk from un-mirrrored pool - Status of development?

2009-02-04 Thread Victor Hooi
heya,

I was wondering what the status is of the ability to remove disks from a ZFS 
pool?  It's mentioned in the ZFS FAQ as coming soon, and the BigAdmin Xperts 
page on ZFS mentions it as coming out in 2007.

I know this is also discussed elsewhere, e.g.:

http://www.opensolaris.org/jive/thread.jspa?messageID=314684#314684

but that focuses mainly on raid-z.

However, what about the simple case, of an unmirrored pool?

(e.g. a pool containing a 100GB and a 250GB disk, for 350GB of storage, with 
usage around 20GB).

Are there any plans to add the ability to remove one of the disks from the 
pool? Is it being developed at all? Or is there some hacky workaround to 
achieve this?

Cheers,
Victor
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-04 Thread Nicholas Lee
Not sure where is best to put something like this.

There are wikis like
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://wiki.genunix.org/wiki/index.php/WhiteBox_ZFSStorageServer

But I haven't seen anything which has an active community like
http://www.thinkwiki.org/wiki/ThinkWiki
Is it worthwhile to set up something like this?

Adding together information like:
http://jkontherun.com/2009/02/04/new-192gb-ssd-from-transcend-runs-at-high-speed/


I think that would be very useful.



On Thu, Feb 5, 2009 at 6:30 AM, Miles Nordin car...@ivy.net wrote:

 In case you want to start one, here are the (incomplete) notes I've
 kept from the list:

 http://www.vmetro.com/category4304.html -- ZFS log device, prestoserv of
 the future.  also Gigabyte iRAM.
 there are some companies (Crucial and STEC come to mind) that sell
 SSDs which fit in disk form factors.  IIRC, the MacBook Air and EMC
 use STEC's SSDs.  -- Richard Elling
  the ones that Mtron sells were testing as the fastest ones on the market.
  -- Brian Hechinger 2008-06-27
  http://www.xbitlabs.com/articles/storage/display/ssd-iram_5.html
  http://www.stec-inc.com/product/zeusiops.php
  http://www.fusionio.com/

 http://forums.storagereview.net/index.php?s=showtopic=27190view=findpostp=253758
  http://www.hyperossystems.co.uk/
 - Ross
  http://www.opensolaris.org/jive/thread.jspa?threadID=65074&tstart=30
 - relling

 http://www.marketwatch.com/news/story/STEC-Support-Suns-Unified-Storage/story.aspx?guid=%7B07043E00-7628-411D-B24A-2FFEC8B8F706%7D
The ZEUS product line makes a fine slog while the MACH8 product line
 works nicely for L2ARC. [from the Sun 7000]
 - Greg Mason
  Intel X25E. It's around $600. It's got about half the performance of STEC
 Zeus drives
 - David Dyer-Bennet
   There's a problem in the MLC SSD drives with particular JMicron
 controller chips where some patterns of small writes trigger ~10 sec.
 hangs, which is most often called stuttering in online discussions of it
 that I've seen.  My particular example is an OCZ Core 2 64GB
 unit.
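
For reference, once a suitable SSD is chosen, attaching it to an
existing pool as a read cache is a one-liner; a sketch with a made-up
pool and device name:

# zpool add tank cache c5t0d0     # the SSD becomes an L2ARC device for pool 'tank'
# zpool iostat -v tank            # the cache device shows up in its own section
                                  # (cache devices, unlike log devices, can later be removed with 'zpool remove')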

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Marion Hakanson
The zilstat tool is very helpful, thanks!

I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
both via an NFS client, and locally on the X4500 itself.  Over NFS,
said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
going through the ZIL.

When run locally on the X4500, the extract took about 1 second, with
zilstat showing all zeroes.  I wonder if this is a case where that
ZIL bypass kicks in for 32K writes, in the local tar extraction.
Does zilstat's underlying dtrace include these bypass-writes in the
totals it displays?

I think if it's possible to get stats on this bypassed data, I'd like
to see it as another column (or set of columns) in the zilstat output.

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman

Interesting, but what does it mean :)


The x4500 for mail (NFS vers=3 on ufs on zpool with quotas):

# ./zilstat.ksh
N-Bytes  N-Bytes/s  N-Max-Bytes/s  B-Bytes  B-Bytes/s  B-Max-Bytes/s
 376720     376720         376720  1286144    1286144        1286144
 419608     419608         419608  1368064    1368064        1368064
 555256     555256         555256  1732608    1732608        1732608
 538808     538808         538808  1679360    1679360        1679360
 626048     626048         626048  1773568    1773568        1773568
 753824     753824         753824  2105344    2105344        2105344
 652632     652632         652632  1716224    1716224        1716224

Fairly constantly between 1-2MB/s. That doesn't sound too bad though. 
It's only got 400 nfsd threads at the moment, but peaks at 1024. 
Incidentally, what is the highest recommended nfsd_threads for a x4500 
anyway?

Lund



Marion Hakanson wrote:
 The zilstat tool is very helpful, thanks!
 
 I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
 both via an NFS client, and locally on the X4500 itself.  Over NFS,
 said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
 going through the ZIL.
 
 When run locally on the X4500, the extract took about 1 second, with
 zilstat showing all zeroes.  I wonder if this is a case where that
 ZIL bypass kicks in for 32K writes, in the local tar extraction.
 Does zilstat's underlying dtrace include these bypass-writes in the
 totals it displays?
 
 I think if it's possible to get stats on this bypassed data, I'd like
 to see it as another column (or set of columns) in the zilstat output.
 
 Regards,
 
 Marion
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

-- 
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Send Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Sanjeev
Tony,

On Wed, Feb 04, 2009 at 09:10:26AM -0800, Tony Galway wrote:
 Thanks ... the -F works perfectly, and provides a further benefit in that the 
 client can mess with the file system as much as they want for testing 
 purposes, but when it comes time to ensure it is synchronized each night, it 
 will revert back to the previous state.

Another option would be to turn off atime if you are sure that you are not
planning to modify anything on the destination box.
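
i.e., something along these lines on the destination (the dataset,
pool and host names below are just placeholders):

# zfs set atime=off destpool/copy      # reads on the destination no longer dirty it between receives
# zfs send -i srcpool/data@mon srcpool/data@tue | ssh backuphost zfs recv -F destpool/copy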

But, like you mentioned above, if you allow users to mess around with the FS,
then -F seems to be the better option.

Regards,
Sanjeev

 
 Thanks
 -Tony
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Richard Elling
Jorgen Lundman wrote:
 Interesting, but what does it mean :)


 The x4500 for mail (NFS vers=3 on ufs on zpool with quotas):

 # ./zilstat.ksh
 N-Bytes  N-Bytes/s  N-Max-Bytes/s  B-Bytes  B-Bytes/s  B-Max-Bytes/s
  376720     376720         376720  1286144    1286144        1286144
  419608     419608         419608  1368064    1368064        1368064
  555256     555256         555256  1732608    1732608        1732608
  538808     538808         538808  1679360    1679360        1679360
  626048     626048         626048  1773568    1773568        1773568
  753824     753824         753824  2105344    2105344        2105344
  652632     652632         652632  1716224    1716224        1716224

 Fairly constantly between 1-2MB/s. That doesn't sound too bad though. 
   

I think your workload would benefit from a fast, separate log device.
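
Something like the following, with a made-up pool and device name (and
note that current bits do not let you remove a log device from a pool
once added, so a mirrored pair is the safer choice):

# zpool add tank log c5t0d0                  # dedicate a fast SSD/NVRAM device to the ZIL
# zpool add tank log mirror c5t0d0 c5t1d0    # or, better, a mirrored pair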

 It's only got 400 nfsd threads at the moment, but peaks at 1024. 
 Incidentally, what is the highest recommended nfsd_threads for a x4500 
 anyway?
   

Highest recommended is what you need to get the job done.
For the most part, the defaults work well.  But you can experiment
with them and see if you can get better results.

I've got some ideas about how to implement some more features
for zilstat, but might not be able to get to it over the next few
days.  So there's still time to accept recommendations :-)
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Richard Elling
Marion Hakanson wrote:
 The zilstat tool is very helpful, thanks!

 I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
 both via an NFS client, and locally on the X4500 itself.  Over NFS,
 said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
 going through the ZIL.

 When run locally on the X4500, the extract took about 1 second, with
 zilstat showing all zeroes.  I wonder if this is a case where that
 ZIL bypass kicks in for 32K writes, in the local tar extraction.
 Does zilstat's underlying dtrace include these bypass-writes in the
 totals it displays?
   

This is what I would expect. What you are seeing is the effect of the
NFS protocol and how the server commits data to disk on behalf of
the client -- by using sync writes.


 I think if it's possible to get stats on this bypassed data, I'd like
 to see it as another column (or set of columns) in the zilstat output.
   

Yes.  I've got a few more columns in mind, too.  Does anyone still use
a VT100? :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman


Richard Elling wrote:

 # ./zilstat.ksh
 N-Bytes  N-Bytes/s  N-Max-Bytes/s  B-Bytes  B-Bytes/s  B-Max-Bytes/s
  376720     376720         376720  1286144    1286144        1286144
  419608     419608         419608  1368064    1368064        1368064
  555256     555256         555256  1732608    1732608        1732608
  538808     538808         538808  1679360    1679360        1679360
  626048     626048         626048  1773568    1773568        1773568
  753824     753824         753824  2105344    2105344        2105344
  652632     652632         652632  1716224    1716224        1716224

 Fairly constantly between 1-2MB/s. That doesn't sound too bad though.   
 
 I think your workload would benefit from a fast, separate log device.

Interesting. Today is the first I've heard about it.  One of the x4500s 
is really, really slow, something like 15 seconds to do an unlink. But I 
assumed it was because the UFS inside the zvol is _really_ bloated. Maybe we 
need to experiment with it on the test x4500.


 
 Highest recommended is what you need to get the job done.
 For the most part, the defaults work well.  But you can experiment
 with them and see if you can get better results.

It came shipped with 16. And I'm sorry but 16 didn't cut it at all :) We 
set it at 1024 as it was the highest number I found via Google.
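
For anyone else tuning this, the knob looks roughly like the following
(property name as per sharectl's nfs group on OpenSolaris; older
releases use NFSD_SERVERS in /etc/default/nfs instead):

# sharectl get -p servers nfs              # current ceiling (ships at 16)
# sharectl set -p servers=1024 nfs
# svcadm restart svc:/network/nfs/server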

Lund


-- 
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Carsten Aulbert
Hi Richard,

Richard Elling wrote:
 
 Yes.  I've got a few more columns in mind, too.  Does anyone still use
 a VT100? :-)

Only when using ILOM ;)

(to anyone using a 72-char/line MUA, sorry, the following lines are longer):

Thanks for the great tool, it showed something very interesting yesterday:

s06: TIME                  N-MBytes  N-MBytes/s  N-Max-Rate  B-MBytes  B-MBytes/s  B-Max-Rate
s06: 2009 Feb  4 14:37:11         5           0           0        10           0           1
s06: 2009 Feb  4 14:37:26         6           0           1        12           0           1
s06: 2009 Feb  4 14:37:41         4           0           0        10           0           1
s06: 2009 Feb  4 14:37:56         5           0           1        11           0           1
s06: 2009 Feb  4 14:38:11         6           0           1        11           0           2
s06: 2009 Feb  4 14:38:26         7           0           1        13           0           2
s06: 2009 Feb  4 14:38:41        10           0           2        17           1           3
s06: 2009 Feb  4 14:38:56         4           0           0         9           0           1
s06: 2009 Feb  4 14:39:11         5           0           1        11           0           1
s06: 2009 Feb  4 14:39:26         7           0           0        13           0           1
s06: 2009 Feb  4 14:39:41         7           0           2        13           0           3
s06: 2009 Feb  4 14:39:56         6           0           1        11           0           2
s06: 2009 Feb  4 14:40:11         6           0           1        12           0           1
s06: 2009 Feb  4 14:40:26         6           0           0        13           0           1
s06: 2009 Feb  4 14:40:41         5           0           0        10           0           1
s06: 2009 Feb  4 14:40:56         6           0           1        12           0           1
s06: 2009 Feb  4 14:41:11         4           0           0         9           0           1
[..]
so far, the box was almost idle, a little bit later:
s06: 2009 Feb  4 14:53:41         2           0           0         5           0           0
s06: 2009 Feb  4 14:53:56         1           0           0         3           0           0
s06: 2009 Feb  4 14:54:11         1           0           0         4           0           0
s06: 2009 Feb  4 14:54:26         1           0           0         3           0           0
s06: 2009 Feb  4 14:54:41         2           0           0         5           0           0
s06: 2009 Feb  4 14:54:56       604          40         171       702          46         198
s06: 2009 Feb  4 14:55:11       816          54         130       939          62         154
s06: 2009 Feb  4 14:55:26         2           0           0         4           0           0
s06: 2009 Feb  4 14:55:41         2           0           0         4           0           0
s06: 2009 Feb  4 14:55:56         1           0           0         3           0           0
s06: 2009 Feb  4 14:56:11         3           0           0         6           0           1
s06: 2009 Feb  4 14:56:26         1           0           0         3           0           0
[...]
s06: 2009 Feb  4 16:13:11         1           0           0         3           0           0
s06: 2009 Feb  4 16:13:26         2           0           0         5           0           0
s06: 2009 Feb  4 16:13:41       389          25          97       477          31         119
s06: 2009 Feb  4 16:13:56       505          33         193       599          39         218
s06: 2009 Feb  4 16:14:11         2           0           0         4           0           0
s06: 2009 Feb  4 16:14:26         3           0           0         5           0           1
s06: 2009 Feb  4 16:14:41         1           0           0         3           0           0
s06: 2009 Feb  4 16:14:56         2           0           0         6           0           1
s06: 2009 Feb  4 16:15:11         4           0           2        10           0           4
s06: 2009 Feb  4 16:15:26         0           0           0         1           0           0
s06: 2009 Feb  4 16:15:41       128           8          94       168          11         123
s06: 2009 Feb  4 16:15:56      1081          72         212      1305          87         279
s06: 2009 Feb  4 16:16:11       262          17          99       317          21         122
s06: 2009 Feb  4 16:16:26         0           0           0         0           0           0

just showing a few bursts...

Given that this is the output of 'zilstat.ksh -M -t 15', I guess we should
really look into a fast device for it, right?

Do you have any hint as to which numbers are reasonable on an X4500 and which
are approaching serious problems?

Cheers

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why does a file on a ZFS change sizes?

2009-02-04 Thread SQA
Anybody have a guess as to the cause of this problem?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss