Re: [zfs-discuss] Checksums

2009-10-26 Thread Ross
Thanks for the update Adam, that's good to hear. Do you have a bug ID number for this, or happen to know which build it's fixed in? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Albert Chin
On Sun, Oct 25, 2009 at 01:45:05AM -0700, Orvar Korvar wrote: I am trying to back up a large zfs file system to two different identical hard drives. I have therefore started two commands to back up myfs, and when they have finished, I will back up nextfs: zfs send mypool/m...@now | zfs receive
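The approach described can be sketched as two independent send/receive pipelines reading the same snapshot (pool and dataset names here are illustrative, not the poster's actual ones):

```shell
# Back up one dataset to two separate backup pools. A snapshot is
# immutable, so two send streams can read it in parallel.
zfs snapshot mypool/myfs@now
zfs send mypool/myfs@now | zfs receive -F backup1/myfs &
zfs send mypool/myfs@now | zfs receive -F backup2/myfs &
wait   # block until both receive pipelines have finished
```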

[zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Stathis Kamperis
Greetings to everyone. I'm trying to retrieve the checksumming algorithm on a per-block basis with zdb(1M). I know it's supposed to be run by Sun's support engineers only; I take full responsibility for whatever damage I cause to my machine by using it. Now. I created a tank/test filesystem,
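Under those caveats, a hypothetical zdb invocation of the kind being attempted (the dataset and object number are illustrative, and zdb's output format varies by build):

```shell
# Dump object 8 of tank/test at maximum verbosity. Each block-pointer
# line names the checksum algorithm that was in effect when that block
# was written (e.g. fletcher4), along with compression and DVAs.
zdb -ddddd tank/test 8
```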

Re: [zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Victor Latushkin
On 26.10.09 14:25, Stathis Kamperis wrote: Greetings to everyone. I'm trying to retrieve the checksumming algorithm on a per-block basis with zdb(1M). I know it's supposed to be run by Sun's support engineers only; I take full responsibility for whatever damage I cause to my machine by using

Re: [zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Stathis Kamperis
2009/10/26 Victor Latushkin victor.latush...@sun.com: On 26.10.09 14:25, Stathis Kamperis wrote: Greetings to everyone. I'm trying to retrieve the checksumming algorithm on a per-block basis with zdb(1M). I know it's supposed to be run by Sun's support engineers only; I take full

Re: [zfs-discuss] Dumb idea?

2009-10-26 Thread erik.ableson
Or in OS X with smart folders, where you define a set of search terms and, as write operations occur on the known filesystems, the folder contents are updated to reflect the current state of the attached filesystems. The structures you defined seemed to be designed around the idea of

Re: [zfs-discuss] Checksums

2009-10-26 Thread Cindy Swearingen
Hi Ross, The CR ID is 6740597: zfs fletcher-2 is losing its carries. Integrated in Nevada build 114 and the Solaris 10 10/09 release. This CR didn't get a companion man page bug to update the docs, so I'm working on that now. The opensolaris.org site seems to be in the middle of its migration

Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Richard Elling
On Oct 25, 2009, at 1:45 AM, Orvar Korvar wrote: I am trying to back up a large zfs file system to two different identical hard drives. I have therefore started two commands to back up myfs, and when they have finished, I will back up nextfs: zfs send mypool/m...@now | zfs receive

Re: [zfs-discuss] zfs recv complains about destroyed filesystem

2009-10-26 Thread Robert Milkowski
I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249 -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Brian
Why does resilvering an entire disk yield different amounts of resilvered data each time? I have read that ZFS only resilvers what it needs to, but in the case of replacing an entire disk with another formatted clean disk, you would think the amount of data would be the same each time

Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Bill Sommerfeld
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote: Why does resilvering an entire disk yield different amounts of resilvered data each time? I have read that ZFS only resilvers what it needs to, but in the case of replacing an entire disk with another formatted clean disk, you would

Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread A Darren Dunham
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote: Why does resilvering an entire disk yield different amounts of resilvered data each time? I have read that ZFS only resilvers what it needs to, but in the case of replacing an entire disk with another formatted clean disk, you

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Jeremy Kitchen
Jeremy Kitchen wrote: Hey folks! We're using zfs-based file servers for our backups and we've been having some issues as of late with certain situations causing zfs/zpool commands to hang. Anyone? This is happening right now, and because we're doing a restore I can't reboot the machine, so

[zfs-discuss] default child filesystem quota

2009-10-26 Thread Tommy McNeely
I may be searching for the wrong thing, but I am trying to figure out a way to set the default quota for child file systems. I tried setting the quota on the top level, but that is not the desired effect. I'd like to limit, by default, newly created filesystems under a certain dataset to 10G
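ZFS has no inheritable default-quota property (quota is not inherited by child filesystems), so one common workaround is a small wrapper that creates the child and sets the quota in one step. A sketch, with the 10G figure taken from the post and all names illustrative:

```shell
#!/bin/sh
# Hypothetical wrapper: apply a "default" quota at creation time,
# since ZFS will not inherit quota from the parent dataset.
parent=$1
child=$2
zfs create "${parent}/${child}" && zfs set quota=10G "${parent}/${child}"
```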

Re: [zfs-discuss] Change physical path to a zpool.

2009-10-26 Thread Jon Aimone
Hi, Simple solution. I did, and it did, and things worked swell! Thanx for the assist. I only wish the failure mode were a little easier to interpret... perhaps I'll try to file an RFE about that... Jürgen Keil spake thusly, on or about 10/24/09 06:53: I have a functional OpenSolaris x64

Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Marion Hakanson
knatte_fnatte_tja...@yahoo.com said: Is rsync faster? As I have understood it, zfs send.. gives me an exact replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not replicated, etc. Is this correct about rsync vs zfs send? It is true that rsync (as of 3.0.5, anyway) does not

Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-26 Thread Marion Hakanson
opensolaris-zfs-disc...@mlists.thewrittenword.com said: Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given the choice between insurance and performance, I'll take insurance, though it depends on your use case. We're using 5-disk RAIDZ2 vdevs. . . . Would love to

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Mertol Ozyoney
7.x FW on the 2500 and 6000 series does not operate the same way as 6.x FW does, so on some/most loads the ignore-cache-sync option may not improve performance as expected. Best regards Mertol Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Mertol Ozyoney
Hi Bob; In all 2500 and 6000 series arrays you can assign RAID sets to a controller, and that controller becomes the owner of the set. Generally drives are not forced to switch between controllers: one controller always owns a disk while the other waits in standby. Some disks use ALUA and re-route traffic coming

Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Richard Elling
On Oct 26, 2009, at 11:51 AM, Marion Hakanson wrote: knatte_fnatte_tja...@yahoo.com said: Is rsync faster? As I have understood it, zfs send.. gives me an exact replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not replicated, etc. Is this correct about rsync vs zfs send?

Re: [zfs-discuss] fishworks on x4275?

2009-10-26 Thread Mertol Ozyoney
Hi Trevor; As can be seen from my email address and signature below, my answer will be quite biased :) To be honest, while converting every X series server with millions of alternative configurations to a FishWorks appliance may not be extremely difficult, it would be impossible to support

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Albert Chin
On Mon, Oct 26, 2009 at 09:58:05PM +0200, Mertol Ozyoney wrote: In all 2500 and 6000 series you can assign RAID sets to a controller and that controller becomes the owner of the set. When I configured all 32 drives on a 6140 array and the expansion chassis, CAM automatically split the drives

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Trevor Pretty
Paul, Being a script hacker like you, the only kludge I can think of is a script that does something like: ls /tmp/foo; sleep; ls /tmp/foo.new; diff /tmp/foo /tmp/foo.new > /tmp/files_that_have_changed; mv /tmp/foo.new /tmp/foo. Or you might be able to knock something up with bart and zfs snapshots.
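A cleaned-up, runnable version of the polling kludge sketched above: take a listing of the watched directory, wait, take a second listing, and log the difference. The paths, the one-second interval, and the simulated activity are all illustrative choices, not from the thread.

```shell
#!/bin/sh
# Poll a directory for changes by diffing successive listings.
dir=/tmp/watch_demo
mkdir -p "$dir"
ls "$dir" > /tmp/foo
touch "$dir/newfile"     # stands in for activity that happens mid-poll
sleep 1
ls "$dir" > /tmp/foo.new
# Anything added or removed between the two listings lands in the log:
diff /tmp/foo /tmp/foo.new > /tmp/files_that_have_changed
mv /tmp/foo.new /tmp/foo
```

In a real watcher the middle of the script would just be the sleep, run in a loop; inotify-style event delivery this is not, but it needs nothing beyond POSIX tools.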

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/25/09 5:38 PM, Paul Archer wrote: 5:12pm, Cyril Plisko wrote: while there is no inotify for Solaris, there are similar technologies available. Check port_create(3C) and gam_server(1) I can't find much on gam_server on Solaris (couldn't find too much on it at all, really), and

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Richard Elling
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-) -- richard On Oct 26, 2009, at 3:17 PM, Carson Gaspar wrote: On 10/25/09 5:38 PM, Paul Archer wrote: 5:12pm, Cyril Plisko wrote: while there is no inotify for Solaris, there are similar technologies available.

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/26/09 3:31 PM, Richard Elling wrote: How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-) The docs are... ummm... skimpy is being rather polite. The docs I can find via Google say that they will launch some random unspecified daemons via d-bus (I assume gvfsd and

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Cindy Swearingen
Hi Jeremy, Can you use the command below and send me the output, please? Thanks, Cindy # mdb -k ::stacks -m zfs On 10/26/09 11:58, Jeremy Kitchen wrote: Jeremy Kitchen wrote: Hey folks! We're using zfs-based file servers for our backups and we've been having some issues as of late with

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Richard Elling
On Oct 26, 2009, at 3:56 PM, Carson Gaspar wrote: On 10/26/09 3:31 PM, Richard Elling wrote: How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-) The docs are... ummm... skimpy is being rather polite. The docs I can find via Google say that they will launch some

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Jeremy Kitchen
Cindy Swearingen wrote: Hi Jeremy, Can you use the command below and send me the output, please? Thanks, Cindy # mdb -k ::stacks -m zfs ack! it *just* fully died. I've had our noc folks reset the machine and I will get this info to you as soon as it happens again (I'm fairly

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread paul
I can't find much on gam_server on Solaris (couldn't find too much on it at all, really), and port_create is apparently a system call. (I'm not a developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.) I appreciate the suggestions, but I need something a little more

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/26/09 5:33 PM, p...@paularcher.org wrote: I can't find much on gam_server on Solaris (couldn't find too much on it at all, really), and port_create is apparently a system call. (I'm not a developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.) I appreciate the

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-26 Thread Adam Leventhal
With that said I'm concerned that there appears to be a fork between the opensource version of ZFS and ZFS that is part of the Sun/Oracle FishWorks 7nnn series appliances. I understand (implicitly) that Sun (/Oracle) as a commercial concern, is free to choose their own priorities in terms

[zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Mike Watkins
Anyone have any creative solutions for near-synchronous replication between 2 ZFS hosts? Near-synchronous, meaning RPO X---0 I realize performance will take a hit. Thanks, Mike
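One hedged sketch of such a setup (not proposed in the thread): a tight loop of incremental snapshot sends, which gives an RPO of roughly the loop interval plus transfer time. Host, pool, dataset names, and the interval are all illustrative.

```shell
#!/bin/sh
# Near-synchronous replication via a loop of incremental zfs sends.
SRC=mypool/data
DST=host2
while :; do
    now=$(date +%s)
    zfs snapshot "$SRC@repl-$now"
    if [ -n "$prev" ]; then
        # Incremental stream: only blocks changed since the last pass.
        zfs send -i "$SRC@repl-$prev" "$SRC@repl-$now"
    else
        # First pass: a full stream to seed the remote side.
        zfs send "$SRC@repl-$now"
    fi | ssh "$DST" zfs receive -F backup/data
    prev=$now
    sleep 10
done
```

Old `repl-*` snapshots would need pruning on both sides; true synchronous replication still requires a block-level product such as AVS, as mentioned in the follow-up.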

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread David Magda
On Oct 26, 2009, at 20:42, Carson Gaspar wrote: Unfortunately, I'm trying for a Solaris solution. I already had a Linux solution (the 'inotify' I started out with). And we're on a Solaris mailing list, trying to give you solutions that work on Solaris. Don't believe everything you read on

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-26 Thread David Turnbull
I'm having similar issues, with two AOC-USAS-L8i Supermicro 1068e cards mpt2 and mpt3, running 1.26.00.00IT It seems to only affect a specific revision of disk. (???) sd67 Soft Errors: 0 Hard Errors: 127 Transport Errors: 3416 Vendor: ATA Product: WDC WD10EACS-00D Revision: 1A01

Re: [zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Richard Elling
On Oct 26, 2009, at 7:36 PM, Mike Watkins wrote: Anyone have any creative solutions for near-synchronous replication between 2 ZFS hosts? Near-synchronous, meaning RPO X---0 Many Solaris solutions are using AVS for this. But you could use block-level replication from a number of vendors.

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Anil
I haven't tried this, but this must be very easy with dtrace. How come no one mentioned it yet? :) You would have to monitor some specific syscalls...
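A hypothetical DTrace one-liner in that spirit, using the syscall provider to print which process opens which file. It requires root on Solaris, watches the whole system (so in practice you would filter on the path of interest), and, as the follow-up notes, DTrace may drop events under load.

```shell
# Print the process name and path for every open(2)-family call.
dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'
```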

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Nicolas Williams
On Mon, Oct 26, 2009 at 08:53:50PM -0700, Anil wrote: I haven't tried this, but this must be very easy with dtrace. How come no one mentioned it yet? :) You would have to monitor some specific syscalls... DTrace is not reliable in this sense: it will drop events rather than overburden the