Re: [zfs-discuss] inodes in snapshots

2010-05-28 Thread Chris Gerhard
Just to close this: it turns out you can't get the crtime over NFS, so without
access to the NFS server only limited checking can be done.

I filed 

CR 6956379  Unable to open extended attributes or get the crtime of files in 
snapshots over NFS.
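
For anyone who does have access to the NFS server, a rough way to read the
crtime of a file in a snapshot locally is to dump its znode with zdb. This is
only a sketch; the pool/filesystem (tank/fs), snapshot (snap1) and file name
are made up:

# ls -i /tank/fs/.zfs/snapshot/snap1/somefile     (prints the object number)
# zdb -dddd tank/fs@snap1 <object-number> | grep crtime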

--chris


Re: [zfs-discuss] creating a fast ZIL device for $200

2010-05-28 Thread Marty Scholes
I have a Sun A5000 with 22x 73GB 15K disks in a split-bus configuration, two
dual-port 2Gb HBAs and four fibre cables from server to array, all for just
under $200.

The array gives 4Gb/s of aggregate throughput in each direction across its
two 11-disk buses.

Right now it is the main array, but when we outgrow its storage it will
provide multiple external ZIL / L2ARC devices for a slow SATA array.

Admittedly, it is rare for all of the pieces to come together at the right
price like this, and since it is unsupported no one would seriously consider
it for production.

At the same time, it makes for blistering main storage today and will provide
amazing IOPS in front of slow storage later.
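
For what it's worth, a hedged sketch of how disks from a fast array can later
be added to a slower pool as separate log (ZIL) and cache (L2ARC) devices; the
pool and device names here are made up:

# zpool add satapool log mirror c5t0d0 c6t0d0     (mirrored slog)
# zpool add satapool cache c5t1d0 c6t1d0          (L2ARC devices, no redundancy needed)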


[zfs-discuss] Can I change copies to 2 *after* I have copied a bunch of files?

2010-05-28 Thread Thanassis Tsiodras
I've read on the web that copies=2 affects only files written *after* I have
changed the setting - does this mean I have to ...

bash$ cat /tmp/stupid.sh
#!/bin/bash
# copy the file away, remove it, then copy it back so its blocks are
# rewritten under the new copies=2 setting
cp "$1" /tmp/ || exit 1
rm "$1" || exit 1
cp "/tmp/$(basename "$1")" "$1" || exit 1

bash$ gfind /path/to/... -type f -exec /tmp/stupid.sh '{}' \;

Is there no better solution?
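
For reference, a quick sketch of how the property itself is set and checked
(the dataset name is made up); as noted above, it only applies to blocks
written after the change:

# zfs set copies=2 tank/data
# zfs get copies tank/data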


[zfs-discuss] zpool iostat question

2010-05-28 Thread melbogia
Following is the output of zpool iostat -v. My question is about the
datapool row and the raidz2 row statistics. The datapool row's write
bandwidth is 381, which I assume takes all of the disks into account,
although it doesn't look like an average. The raidz2 row's write bandwidth
is 36, which is where I am confused. What exactly does that number
represent, and how is it calculated?

Also, what is the difference between the datapool row and the raidz2 row in
general?

# zpool iostat -v
                  capacity     operations    bandwidth
pool            used  avail   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
datapool        201K  5.00T      0      0     33    381
  raidz2        201K  5.00T      0      0      1     36
    c7t2d0         -      -      0      0     47    352
    c7t3d0         -      -      0      0     47    352
    c7t4d0         -      -      0      0     47    352
    c7t5d0         -      -      0      0     47    352
    c8t1d0         -      -      0      0     47    352
    c8t2d0         -      -      0      0     47    352
    c8t3d0         -      -      0      0     47    352
    c8t4d0         -      -      0      0     47    353
    c8t5d0         -      -      0      0     31    352
    c8t6d0         -      -      0      0     47    352
    c8t7d0         -      -      0      0     47    352
  c7t6d0 (slog)     0   149G      0      0     31    345


[zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Gregory J. Benscoter
After looking through the archives I haven't been able to assess the
reliability of a backup procedure that employs zfs send and recv. Currently
I'm attempting to create a script that will let me write a ZFS stream to tape
via tar, like below.

# zfs send -R p...@something | tar -c  /dev/tape

I'm primarily concerned with the possibility of a bit flip. If this occurs,
will the whole stream be lost, or will only the file that the flipped bit
landed in be degraded? Lastly, how does the reliability of this plan compare
to more traditional backup tools like tar, cpio, etc.?

Thank you for any future help
Greg


Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Bob Friesenhahn

On Fri, 28 May 2010, Gregory J. Benscoter wrote:


I'm primarily concerned with the possibility of a bit flip. If this occurs,
will the whole stream be lost, or will only the file that the flipped bit
landed in be degraded? Lastly, how does the reliability of this plan compare
to more traditional backup tools like tar, cpio, etc.?


The whole stream will be rejected if even a single bit is flipped.  Tar and
cpio will happily barge on through the error.
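
As a hedged aside, a stream that is first written to a file can be
checksummed and sanity-checked before it is ever needed; the file name below
is made up, and zstreamdump (on builds that have it) simply parses the
stream's records:

# zfs send -R p...@something > /backup/stream.zfs
# digest -a sha256 /backup/stream.zfs     (record this, compare after reading back from tape)
# zstreamdump < /backup/stream.zfs > /dev/null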


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


[zfs-discuss] ZFS synchronous and asynchronous I/O

2010-05-28 Thread Lutz Schumann
Hello ZFS gurus on the list :)

I started with ZFS a little over a year ago and have been following the
discussions here for some time now. What has confused me all along is the
different parameters and ZFS tunables and how they affect data integrity and
availability.

Now I have taken some time and tried to visualize how synchronous /
asynchronous I/O handling is done in ZFS, taking the different parameters
like zfs_immediate_write_sz into account. I also tried to integrate the
latest ZIL-related PSARCs into the picture (logbias in build 122 and zil
synchronicity in build 140).
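
To make those two knobs concrete, here is a hedged sketch of how they are set
per dataset (the dataset name is made up, and the sync property only exists
on builds that already have the zil synchronicity change):

# zfs set logbias=latency tank/db      (default: push sync writes through the slog)
# zfs set logbias=throughput tank/db   (skip the slog, allocate log blocks in the main pool)
# zfs set sync=standard tank/db        (honor fsync/O_DSYNC as usual)
# zfs set sync=always tank/db          (treat every write as synchronous)
# zfs set sync=disabled tank/db        (ignore synchronous requests; dangerous for apps that rely on fsync)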

The result is the diagram at
http://www.storageconcepts.de/fileadmin/StorageConcept_Data/images/ZFS_IO_Explained.pdf.
The PDF tries to give a high-level overview of how ZFS I/O is handled.

However, as I said, I have only been using ZFS for a little over a year, so
I'm probably missing something.

It would be great if some of the ZFS guys here could comment on the diagram
and provide feedback on whether the things shown in the PDF are correct.

Thank you, 
Robert


[zfs-discuss] expand zfs for OpenSolaris running inside vm

2010-05-28 Thread me
hello, all

I have constrained disk space (only 8GB) while running the OS inside a VM,
and now I want to add more. It is easy to grow the disk on the VM side, but
how can I grow the filesystem inside the OS?

I cannot use autoexpand because it is not implemented in my build:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were build 171 it would be great, right?

I did the following:

o added a new virtual HDD (it becomes /dev/rdsk/c7d1s0)
o ran format and wrote a label

# zpool status
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h10m with 0 errors on Fri May 28 16:47:05
2010
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c7d0s0ONLINE   0 0 0

errors: No known data errors

# zpool add rpool c7d1
cannot label 'c7d1': EFI labeled devices are not supported on root pools.

# prtvtoc /dev/rdsk/c7d0s0 | fmthard -s - /dev/rdsk/c7d1s0
fmthard:  New volume table of contents now in place.

# zpool add rpool c7d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7d1s0 overlaps with /dev/dsk/c7d1s2

# zpool add -f rpool c7d1s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate
logs

o omg, I have tried all the magic commands I could find on the internet and
in the manuals. Now I'm writing to the mailing list :-). Help!

-- 
Dmitry


[zfs-discuss] [zfs/zpool] hang at boot

2010-05-28 Thread schatten
Hi,

whenever I create a new ZFS filesystem, my PC hangs at boot, basically where
the login screen should appear. After booting from the live CD and removing
the filesystem, boot works again.
This also happened when I created a new zpool on the other half of my HDD.
Any idea why? How can I solve it?


Re: [zfs-discuss] expand zfs for OpenSolaris running inside vm

2010-05-28 Thread Cindy Swearingen

Hi--

I can't speak to running a ZFS root pool in a VM, but the problem is
that you can't add another disk to a root pool. All the boot info needs
to be contiguous. This is a boot limitation.

I've not attempted either of these operations in a VM but you might
consider:

1. Replacing the root pool disk with a larger disk
2. Attaching a larger disk to the root pool and then detaching
 the smaller disk

I like #2 best. See this section in the ZFS troubleshooting wiki:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk
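
A hedged sketch of option 2, using the device names from the original post
and assuming the new, larger disk carries an SMI label with slice 0 covering
the whole disk:

# zpool attach rpool c7d0s0 c7d1s0
# zpool status rpool                  (wait for the resilver to finish)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
# zpool detach rpool c7d0s0

The extra space should become available once the smaller disk is detached;
on older builds a reboot may be needed.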

Thanks,

Cindy



[zfs-discuss] zpool vdev's

2010-05-28 Thread Vadim Comanescu
In a striped zpool configuration (no redundancy), is each disk regarded as an
individual vdev, or do all the disks in the stripe represent a single vdev?
In a raidz configuration I'm aware that every group of raidz disks is
regarded as a top-level vdev, but I was wondering how it is in the case I
mentioned earlier. Thanks.

-- 
ing. Vadim Comanescu
S.C. Syneto S.R.L.
str. Vasile Alecsandri nr 2, Timisoara
Timis, Romania


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-05-28 Thread Alan Wright

On 05/27/10 09:49 PM, Haudy Kazemi wrote:

Brandon High wrote:

On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh cp...@pppl.gov wrote:

I was wondering if there is a special option to share out a set of
nested
directories? Currently if I share out a directory with
/pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in
mydir2.
mydir1 and mydir2 are each a zfs filesystem, each shared with the proper
sharenfs permissions.
Did I miss a browse or traverse option somewhere?


What kind of client are you mounting on? Linux clients don't properly
follow nested exports.

-B


This behavior is not limited to Linux clients nor to nfs shares. I've
seen it with Windows (SMB) clients and CIFS shares. The CIFS version is
referenced here:

Nested ZFS Filesystems in a CIFS Share
http://mail.opensolaris.org/pipermail/cifs-discuss/2008-June/000358.html
http://bugs.opensolaris.org/view_bug.do?bug_id=6582165

Is there any commonality besides the observed behaviors?


No, the SMB/CIFS share limitation is that we have not yet added
support for child mounts over SMB; this is completely unrelated
to any configuration problems encountered with NFS.

Alan


Re: [zfs-discuss] zpool vdev's

2010-05-28 Thread Mark Musante

On 28 May, 2010, at 17.21, Vadim Comanescu wrote:

 In a striped zpool configuration (no redundancy), is each disk regarded as
 an individual vdev, or do all the disks in the stripe represent a single
 vdev? In a raidz configuration I'm aware that every group of raidz disks is
 regarded as a top-level vdev, but I was wondering how it is in the case I
 mentioned earlier. Thanks.

In a stripe config, each disk is considered a top-level vdev.
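
To illustrate with made-up pool and device names: a plain stripe ends up with
one top-level vdev per disk, while a raidz group is a single top-level vdev.

# zpool create stripepool c1t0d0 c1t1d0 c1t2d0      (three top-level vdevs)
# zpool create raidpool raidz c2t0d0 c2t1d0 c2t2d0  (one top-level raidz1 vdev)
# zpool status stripepool raidpool

zpool status lists each disk of the striped pool directly under the pool
name, whereas the raidz pool shows a single raidz1 entry with its member
disks indented beneath it.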


Regards,
markm


Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Juergen Nickelsen
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 On Fri, 28 May 2010, Gregory J. Benscoter wrote:
 
 I'm primarily concerned with the possibility of a bit flip. If this
 occurs, will the whole stream be lost, or will only the file that the
 flipped bit landed in be degraded? Lastly, how does the reliability of
 this plan compare to more traditional backup tools like tar, cpio, etc.?

 The whole stream will be rejected if even a single bit is flipped.  Tar and
 cpio will happily barge on through the error.

That is one of the reasons why we at work do send/recv only into
live ZFS file systems -- any error would become apparent
immediately. Not that we have seen that happen yet, and I alone have
been doing hourly sends/recvs for years with a growing number of ZFS
file systems, more than a hundred of them by now.
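
For reference, a hedged sketch of that kind of hourly pipeline (host and
dataset names are made up); because the target is a live filesystem, a
damaged stream makes zfs receive fail immediately instead of lurking on a
tape:

# zfs snapshot tank/data@2010-05-28-17:00
# zfs send -i tank/data@2010-05-28-16:00 tank/data@2010-05-28-17:00 | \
      ssh backuphost zfs receive -F backup/data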

-- 
Herr Rowohlt, you once wrote that Swabian makes your scrotum
contract. Is that the case at this very moment?
 -- Verena Schmidt


Re: [zfs-discuss] Can I change copies to 2 *after* I have copied a bunch of files?

2010-05-28 Thread Brandon High
On Fri, May 28, 2010 at 9:04 AM, Thanassis Tsiodras ttsiod...@gmail.com wrote:
 Is there no better solution?

If you don't care about snapshots, you can also create a new dataset
and move or copy the files to it.

If you do care about snapshots, you can send/recv the dataset, which
will apply the copies property and save all your snapshots. I did this
when I un-dedup'd (redup'd?) some datasets.

Create a target path and create a copy of the dataset in it. I set the
target path's mountpoint to legacy to keep the new copy from trying to
mount or share automatically.

zfs create -o mountpoint=legacy tank/copies-foo
zfs send -R tank/foo/b...@now | zfs recv -e tank/copies-foo

Then do a renaming monte on the datasets:

zfs rename tank/foo/bar tank/copies-foo/bar-old
zfs rename tank/copies-foo/bar tank/foo/bar

If you're brave, you can destroy your source and just rename your new
copy into place instead.
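
A hedged sketch of that braver variant, following the dataset names above and
only after the new copy has been verified:

zfs destroy -r tank/foo/bar
zfs rename tank/copies-foo/bar tank/foo/bar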

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Richard Elling
On May 28, 2010, at 4:28 PM, Juergen Nickelsen wrote:
 Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
 On Fri, 28 May 2010, Gregory J. Benscoter wrote:
 
 I'm primarily concerned with the possibility of a bit flip. If this
 occurs, will the whole stream be lost, or will only the file that the
 flipped bit landed in be degraded? Lastly, how does the reliability of
 this plan compare to more traditional backup tools like tar, cpio, etc.?
 
 The whole stream will be rejected if even a single bit is flipped.  Tar and
 cpio will happily barge on through the error.
 
 That is one of the reasons why we at work do send/recv only into
 live ZFS file systems -- any error would become apparent
 immediately. Not that we have seen that happen yet, and I alone have
 been doing hourly sends/recvs for years with a growing number of ZFS
 file systems, more than a hundred of them by now.

A high-quality tape archive system will work well, too. These have been
in use for almost 60 years. With the new tape technology delivering 50TB
per tape, they may become more relevant to today's data needs.
 -- richard

-- 
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/


Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-28 Thread schatten
Please help. :(
I am using snv_134.