Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote: gm == Gary Mills mi...@cc.umanitoba.ca writes: gm destroys the oldest snapshots and creates new ones, both gm recursively. I'd be curious if you try taking the same snapshots non-recursively instead, does the pause go
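A hedged sketch of the two rotation styles Miles is asking Gary to compare; the pool and snapshot names below are hypothetical, not taken from Gary's setup:

    # recursive: one command covers the whole dataset tree atomically
    zfs snapshot -r tank@rotate-new
    zfs destroy  -r tank@rotate-oldest

    # non-recursive: walk the datasets and snapshot each one individually
    for fs in $(zfs list -H -o name -r tank); do
        zfs snapshot "$fs@rotate-new"
    done

If the pause only appears with the recursive form, that would point at the cost of the single atomic recursive snapshot rather than at snapshot creation as such.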

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
On Mon, Mar 08, 2010 at 01:23:10PM -0800, Bill Sommerfeld wrote: On 03/08/10 12:43, Tomas Ögren wrote: So we tried adding 2x 4GB USB sticks (Kingston Data Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the snapshot times down to about 30 seconds. Out of curiosity, how

[zfs-discuss] rpool devaliases

2010-03-09 Thread Tony MacDoodle
Can I create a devalias to boot the other mirror similar to UFS? Thanks ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 8, 2010, at 11:46 PM, ольга крыжановская olga.kryzhanov...@gmail.com wrote: tmpfs lacks features like quota and NFSv4 ACL support. May not be the best choice if such features are required. True, but if the OP is looking for those features they are more than unlikely looking for an

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-09 Thread R.G. Keen
Yay! Something where I can contribute! I am a hardware guy trying to live in a software world, but I think I know how this one works. The reason is that the vendor (ACER) of the mainboard says it is not supported, and I cannot get into the BIOS any more, but osol boots fine and sees 8GB.

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread D. Pinnock
I redirected the console to the serial port and managed to capture the panic information below: SunOS Release 5.11 Version snv_111b 64-bit Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. panic[cpu0]/thread=ff0007c39c60: mutex_enter: bad

Re: [zfs-discuss] Can you manually trigger spares?

2010-03-09 Thread Mark J Musante
On Mon, 8 Mar 2010, Tim Cook wrote: Is there a way to manually trigger a hot spare to kick in? Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's all the fma service does anyway. If you ever get your drive to come back online, the fma service should recognize that
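A minimal sketch of that manual flow, reusing the pool name, device GUID and spare from Tim's thread (which device you ultimately detach depends on which one you want to keep):

    # see which device has failed and which spares are available
    zpool status fserv
    # attach the spare in place of the failed device (what the fma service would do)
    zpool replace fserv 12589257915302950264 c3t6d0
    # if the original drive later comes back and resilvers, release one side
    zpool detach fserv 12589257915302950264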

Re: [zfs-discuss] rpool devaliases

2010-03-09 Thread Robert Milkowski
On 09/03/2010 13:18, Tony MacDoodle wrote: Can I create a devalias to boot the other mirror similar to UFS? yes ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
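On SPARC this is an OBP alias rather than anything ZFS-specific; a sketch, where the device path for the second half of the mirror is purely hypothetical:

    ok nvalias rootmirror /pci@1f,0/pci@1/scsi@8/disk@1,0:a
    ok boot rootmirror

nvalias stores the alias in NVRAM so it survives a power cycle; a plain devalias would be lost at the next reset.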

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-09 Thread Richard PALO
I'm curious to know whether the following output: bash-4.0# echo memscrub_scans_done/U | mdb -k memscrub_scans_done: memscrub_scans_done:1985 means that Solaris considers ECC memory to be effectively installed (given that the counter is non-zero)? I have installed unbuffered

Re: [zfs-discuss] Should ZFS write data out when disk are idle

2010-03-09 Thread Damon Atkins
I am talking about having a write queue which points to ready-to-write full stripes. Ready-to-write full stripes would be: * the last byte of the full stripe has been updated; * the file has been closed for writing (exception to the above rule). I believe there is now a scheduler for ZFS, to

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread D. Pinnock
When I boot from a snv133 live cd and attempt to import the rpool it panics with this output: Sun Microsystems Inc. SunOS 5.11 snv_133 February 2010 j...@opensolaris:~$ pfexec su Mar 9 03:11:37 opensolaris su: 'su root' succeeded for jack on /dev/console j...@opensolaris:~# zpool import

[zfs-discuss] new video: George Wilson on ZFS Dedup

2010-03-09 Thread Deirdre Straughan
Brand new video! George Wilson on ZFS Dedup - Oracle Solaris Video http://bit.ly/b5MMpn -- best regards, Deirdré Straughan Solaris Technical Content blog: Un Posto al Sole http://blogs.sun.com/deirdre/ ___ zfs-discuss mailing list

[zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Harry Putnam
[I hope this isn't a repost double whammy. I posted this message under `Message-ID: 87fx4ai5sp@newsguy.com' over 15 hrs ago but it never appeared on my nntp server (gmane) as far as I can see] I'm a little at a loss here as to what to do about these two errors that turned up during a scrub.

[zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Harry Putnam
I'm a little at a loss here as to what to do about these two errors that turned up during a scrub. The discs involved are a matched pair in mirror mode. zpool status -v z3 (wrapped for mail): scrub: scrub completed after

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
Ross is correct - advanced OS features are not required here - just the ability to store a file - don’t even need unix style permissions -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ross Walker Sent: Tuesday,

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread D. Pinnock
Found a site that recommended setting the following system file entries: set zfs:zfs_recover=1 set aok=1 and running this command: zdb -e -bcsvL rpool but I get the following error: Traversing all blocks to verify checksums ... out of memory -- generating core dump Abort The laptop has 4GB of

[zfs-discuss] and another video: ZFS Dynamic LUN Expansion

2010-03-09 Thread Deirdre Straughan
And another brand-new video: ZFS Dynamic LUN Expansion - Oracle Solaris Video http://bit.ly/cwwCZl -- best regards, Deirdré Straughan Solaris Technical Content blog: Un Posto al Sole http://blogs.sun.com/deirdre/ ___ zfs-discuss mailing list

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread Cindy Swearingen
Hi D, Is this a 32-bit system? We were looking at your panic messages and they seem to indicate a problem with memory and not necessarily a problem with the pool or the disk. Your previous zpool status output also indicates that the disk is okay. Maybe someone with similar recent memory

Re: [zfs-discuss] Using zfs-auto-snapshot for automatic backups

2010-03-09 Thread Brandon High
On Mon, Mar 8, 2010 at 1:47 PM, Tim Foster tim.fos...@sun.com wrote: Looking at the errors, it looks like SMF isn't exporting the values for action_authorization or value_authorization in the SMF manifest it produces, resulting the service not being allowed to set values in svccfg when it runs
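If the generated manifest really is missing those properties, they can be added to the service instance by hand; in this sketch both the instance FMRI and the authorization string are assumptions on my part, not taken from the thread:

    svccfg -s svc:/system/filesystem/zfs/auto-snapshot:daily \
        setprop general/action_authorization = astring: solaris.smf.manage.zfs-auto-snapshot
    svccfg -s svc:/system/filesystem/zfs/auto-snapshot:daily \
        setprop general/value_authorization = astring: solaris.smf.manage.zfs-auto-snapshot
    svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:daily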

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Richard Elling
On Mar 9, 2010, at 9:40 AM, Matt Cowger wrote: Ross is correct - advanced OS features are not required here - just the ability to store a file - don’t even need unix style permissions KISS. Just use tmpfs, though you might also consider limiting its size. -- richard ZFS storage and
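A sketch of a size-capped tmpfs mount; the mount point and the 2 GB cap are hypothetical:

    # one-off mount with a size limit
    mount -F tmpfs -o size=2048m swap /export/scratch

    # or persistently, as an /etc/vfstab entry
    swap  -  /export/scratch  tmpfs  -  yes  size=2048m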

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread Tim Haley
On 03/ 9/10 10:53 AM, Cindy Swearingen wrote: Hi D, Is this a 32-bit system? We were looking at your panic messages and they seem to indicate a problem with memory and not necessarily a problem with the pool or the disk. Your previous zpool status output also indicates that the disk is okay.

[zfs-discuss] about zfs exported on nfs

2010-03-09 Thread Harry Putnam
[First, a brief apology. I inadvertently posted this message to the `general' group when it should have been to the `zfs' group. In the last few days I seem to be all thumbs when posting... and have created several bumbling posts to opensolaris lists. ] summary: A zfs fs set with smb and nfs

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
That's a very good point - in this particular case, there is no option to change the blocksize for the application. On 3/9/10 10:42 AM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: I think This is highlighting that there is extra CPU requirement to manage small blocks in ZFS. The table

[zfs-discuss] backup zpool to tape

2010-03-09 Thread Gregory Durham
Hello all, I need to backup some zpools to tape. I currently have two servers, for the purpose of this conversation we will call them server1 and server2 respectively. Server1, has several zpools which are replicated to a single zpool on server2 through a zfs send/recv script. This part works
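One approach that comes up for this is writing the zfs send stream straight to the tape device instead of receiving it into a pool first; a sketch, with pool, snapshot and tape device names all hypothetical:

    # on server2: dump the replicated pool to tape
    zfs snapshot -r backup@totape
    zfs send -R backup@totape | dd of=/dev/rmt/0n obs=1048576

    # restore by reading the stream back
    dd if=/dev/rmt/0n ibs=1048576 | zfs receive -d -F backup

The usual caveat applies: zfs receive rejects a stream with any corruption in it, so a damaged tape means losing the whole stream rather than a few files, which is why many people prefer receiving into a pool (as is already done here) or using a file-level backup tool for tape.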

Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-09 Thread Thomas W
Okay... I found the solution to my problem. And it has nothing to do with my hard drives... It was the Realtek NIC drivers. I read about problems and added a new driver (I got that from the forum thread). And now I have about 30MB/s read and 25MB/s write performance. That's enough (for the

Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Cindy Swearingen
Hi Harry, Reviewing other postings where permanent errors were found on redundant ZFS configs, one was resolved by re-running the zpool scrub and one resolved itself because the files with the permanent errors were most likely temporary files. One of the files with permanent errors below is
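The sequence being suggested, against Harry's pool (a sketch; the affected file paths from his status output are omitted here):

    # re-run the scrub and re-check the error list
    zpool scrub z3
    zpool status -v z3

    # once the listed files have been deleted or restored from backup,
    # reset the per-device error counters; the permanent-error list itself
    # disappears after a later scrub no longer finds the damaged blocks
    zpool clear z3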

Re: [zfs-discuss] backup zpool to tape

2010-03-09 Thread Miles Nordin
gd == Gregory Durham gregory.dur...@gmail.com writes: gd it to mount on boot I do not understand why you have a different at-boot-mounting problem with and without lofiadm: either way it's your script doing the importing explicitly, right? so just add lofiadm to your script. I guess you

Re: [zfs-discuss] Recover rpool

2010-03-09 Thread D. Pinnock
My Laptop is a 64bit system Dell Latitude D630 Intel Core2 Duo Processor T7100 4GB RAM -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Harry Putnam
Cindy Swearingen cindy.swearin...@sun.com writes: Hi Harry, Reviewing other postings where permanent errors were found on redundant ZFS configs, one was resolved by re-running the zpool scrub and one resolved itself because the files with the permanent errors were most likely temporary

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais roch.bourbonn...@sun.com wrote: I think this is highlighting that there is extra CPU requirement to manage small blocks in ZFS. The table would probably turn over if you go to 16K zfs records and 16K reads/writes from the application. Next

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread ольга крыжановская
Could you retest it with mmap() used? Olga 2010/3/9 Matt Cowger mcow...@salesforce.com: It can, but doesn't in the command line shown below. M On Mar 8, 2010, at 6:04 PM, ольга крыжановская olga.kryzhanov...@gmail.com wrote: Does iozone use mmap() for IO? Olga On Tue, Mar 9, 2010

Re: [zfs-discuss] backup zpool to tape

2010-03-09 Thread Greg
Thank you for such a thorough look into my issue. As you said, I guess I am down to trying to backup to a zvol and then backing that up to tape. Has anyone tried this solution? I would be very interested to find out. Anyone else with any other solutions? Thanks! Greg -- This message posted

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
This is a good point, and something that I tried. I limited the ARC to 1GB and 4GB (both well within the memory footprint of the system even with the ramdisk). Equally poor results... this doesn't feel like the ARC fighting with locked memory pages. --M -Original Message- From: Ross
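For reference, the cap Matt describes is normally set with the zfs_arc_max tunable; a sketch with a hypothetical 1 GB limit:

    # /etc/system entry, value in bytes, takes effect at the next boot
    set zfs:zfs_arc_max = 0x40000000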

Re: [zfs-discuss] Should ZFS write data out when disk are idle

2010-03-09 Thread Damon Atkins
Sorry, a full stripe on a RAID-Z is the recordsize, i.e. if the record size is 128k on a RAID-Z made up of 5 disks, then 128k is spread across 4 disks with the calculated parity on the 5th disk, which means the writes are 32k to each disk. For a RAID-Z, when data is written to a disk, are individual
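Damon's arithmetic, spelled out with his own numbers:

    recordsize              = 128 KB
    disks in the raidz vdev = 5 (4 data + 1 parity)
    data written per disk   = 128 KB / 4 = 32 KB
    parity written          = 32 KB, on the remaining disk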

[zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-09 Thread mingli
Hi All, I created a ZFS filesystem test and shared it with zfs set sharenfs=root=host1 test, and I checked the sharenfs option and it had already updated to root=host1: bash-3.00# zfs get sharenfs test

Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread David Dyer-Bennet
On 3/9/2010 4:57 PM, Harry Putnam wrote: Also - it appears `zpool scrub -s z3' doesn't really do anything. The status report above is taken immediately after a scrub command. The `scrub -s' command just returns the prompt... no output and apparently no scrub either. The -s switch is
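For reference, -s does not report anything because it cancels a scrub rather than starting or querying one; a quick sketch against Harry's pool:

    zpool scrub z3        # start a scrub
    zpool status z3       # shows scrub progress and results
    zpool scrub -s z3     # stop a scrub already in progress; prints nothing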

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-09 Thread Dennis Clarke
Hi All, I created a ZFS filesystem test and shared it with zfs set sharenfs=root=host1 test, and I checked the sharenfs option and it had already updated to root=host1: Try to use a backslash to escape those special chars like so: zfs set
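A sketch of the quoting Dennis is pointing at, using mingli's dataset name (single quotes work as well as backslash escapes here; whether the original failure was shell quoting or the missing rw is not clear from the thread):

    zfs set sharenfs='rw,root=host1' test
    zfs get sharenfs test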

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-09 Thread rwalists
On Mar 8, 2010, at 7:55 AM, Erik Trimble wrote: Assume your machine has died the True Death, and you are starting with new disks (and, at least a similar hardware setup). I'm going to assume that you named the original snapshot 'rpool/ROOT/whate...@today' (1) Boot off the OpenSolaris

Re: [zfs-discuss] Should ZFS write data out when disk are idle

2010-03-09 Thread Richard Elling
On Mar 9, 2010, at 6:13 PM, Damon Atkins wrote: Sorry, Full Stripe on a RaidZ is the recordsize ie if the record size is 128k on a RaidZ and its made up of 5 disks, then 128k is spread across 4 disks with the calc parity on the 5 disk, which means the writes are 32k to each disk.

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-09 Thread mingli
And I updated the sharenfs option to rw,ro...@100.198.100.0/24; it works fine, and the NFS client can write without error. Thanks. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-09 Thread Richard PALO
Hi, thanks for the reply... I guess I'm that far as well, but my question is targeted at understanding the real-world implication of the kernel software memory scrubber. That is, in looking through the code a bit I notice that if hardware ECC is active the software scrubber is disabled. It is

Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-09 Thread Victor Latushkin
Christian Hessmann wrote: Victor, Btw, they affect some files referenced by snapshots as 'zpool status -v' suggests: tank/DVD:0x9cd tank/d...@2010025100:/Memento.m4v tank/d...@2010025100:/Payback.m4v tank/d...@2010025100:/TheManWhoWasntThere.m4v In case of OpenSolaris it is