[zfs-discuss] zfs acl support in rsync?

2006-07-20 Thread Peter Eriksson
Has anyone looked into adding support for ZFS ACLs to rsync? It would be really convenient if it supported transparent conversion from old-style Posix ACLs to ZFS ACLs on the fly. One-way Posix-to-ZFS is probably good enough. I've tried Googling, but haven't come up with much. There
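
A minimal sketch, not from the thread, with made-up paths: rsync's -A/--acls flag preserves POSIX-draft ACLs when rsync is built with ACL support, but it has no notion of ZFS/NFSv4 ACLs, so no conversion of the kind asked about takes place:

  # rsync -aA /export/old/ /tank/new/    # -A carries POSIX ACLs across; ZFS ACLs are not translated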

[zfs-discuss] Re: zfs acl support in rsync?

2006-07-20 Thread Peter Eriksson
... and in a related question - since rsync uses the ACL code from the Samba project - has there been some progress in that direction too?

[zfs-discuss] How to backup/clone all filesystems *and* snapshots in a zpool?

2006-11-16 Thread Peter Eriksson
Suppose I have a server that is used as a backup system for many other (live) servers. It uses ZFS snapshots to enable people to recover files from any date a year back (or so). Now, I want to back up this backup server to some kind of external stable storage in case disaster happens and this
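
As a hedged sketch of one approach that later Solaris 10 updates made possible (recursive replication streams; pool, host and snapshot names here are hypothetical):

  # zfs snapshot -r backup@offsite-1
  # zfs send -R backup@offsite-1 | ssh vault zfs receive -d -F tank/backupcopy

Subsequent runs can use zfs send -R -i to ship only the changes since the previous snapshot.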

[zfs-discuss] Re: ZFS goes catatonic when drives go dead?

2006-11-22 Thread Peter Eriksson
There is nothing in the ZFS FAQ about this. I also fail to see how FMA could make any difference since it seems that ZFS is deadlocking somewhere in the kernel when this happens... It works if you wrap all the physical devices inside SVM metadevices and use those for your ZFS/zpool instead.
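
A minimal sketch of the SVM wrapping being described, with invented slice names; the metadevices then stand in for the raw disks in the pool:

  # metadb -a -f -c3 c1t0d0s7                      # SVM state database replicas
  # metainit d10 1 1 c1t1d0s0                      # one-way stripe over each physical slice
  # metainit d11 1 1 c1t2d0s0
  # zpool create tank mirror /dev/md/dsk/d10 /dev/md/dsk/d11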

[zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread Peter Eriksson
If you take a look at these messages, the somewhat unusual condition that may lead to unexpected behaviour (i.e. fast give-up) is that, whilst this is a SAN connection, it is achieved through a non-Leadville config; note the fibre-channel and sd references. In a Leadville-compliant

[zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Peter Eriksson
So ZFS should be more resilient against write errors, and the SCSI disk or FC drivers should be more resilient against LIPs (the most likely cause of your problem) or other transient errors. (Alternatively, the ifp driver should be updated to support the maximum number of targets on a

[zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Peter Eriksson
Hmm... I just noticed this qla2100.conf option: # During link down conditions enable/disable the reporting of # errors. #0 = disabled, 1 = enable hba0-link-down-error=1; hba1-link-down-error=1; I _wonder_ what might possibly happen if I change that 1 to a 0 (zero)... :-)

[zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Peter Eriksson
too much of our future roadmap, suffice it to say that one should expect much, much more from Sun in this vein: innovative software and innovative hardware working together to deliver world-beating systems with undeniable economics. Yes please. Now give me a fairly cheap (but still quality)

[zfs-discuss] Re: Re: Thumper Origins Q

2007-01-24 Thread Peter Eriksson
#1 is speed. You can aggregate 4x1Gbit ethernet and still not touch 4Gb/sec FC. #2 is drop-in compatibility. I'm sure people would love to drop this into an existing SAN. #2 is the key for me. And I also have a #3: FC has been around a long time now. The HBAs and switches are (more or less

[zfs-discuss] Re: Re: Re: Thumper Origins Q

2007-01-25 Thread Peter Eriksson
ZFS - HBA - FC Switch - JBOD - Simple FC-SATA converter - SATA disk. Why bother with a switch here? Think multiple JBODs. With a single JBOD a switch is not needed, and then FC is probably also overkill - plain SCSI could work. - Peter

[zfs-discuss] Re: multihosted ZFS

2007-01-26 Thread Peter Eriksson
If you _boot_ the original machine then it should see that the pool is now owned by the other host and ignore it (you'd have to do a zpool import -f again, I think). Not tested though, so don't take my word for it... However, if you simply type 'go' and let it continue from where it was, then things
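
For reference, a sketch of the import step mentioned above (the pool name is hypothetical):

  # zpool import            # list pools visible from this host
  # zpool import -f tank    # force the import even though the pool looks active on another host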

[zfs-discuss] Large ZFS-bug...

2007-03-20 Thread Peter Eriksson
A coworker of mine ran into a large ZFS-related bug the other day. He was trying to install Sun Studio 11 on a ZFS filesystem and it just kept on failing. Then he tried to install on a UFS filesystem on the same machine and it worked just fine... After much headscratching and testing and

[zfs-discuss] Re: Large ZFS-bug...

2007-03-21 Thread Peter Eriksson
Ah :-) Btw, that bug note is a bit misleading - our use case had nothing to do with ZFS root filesystems - he was trying to install into a completely separate filesystem - a very large one. And yes, he found out that setting a quota was a good workaround :-)
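
A hedged sketch of the quota workaround being referred to, with made-up dataset name and size (the failure seemed tied to very large filesystems, so the idea is to cap the apparent free space during the install):

  # zfs set quota=500g tank/build
  (run the Sun Studio installer)
  # zfs set quota=none tank/build    # lift the cap again afterwards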

[zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Peter Eriksson
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9 server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now... What's the best way to move all these files? Should one use Solaris tar, Solaris cpio, ufsdump/ufsrestore, rsync or what? I currently use
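
A sketch of just one of the options listed - ufsdump/ufsrestore over ssh into a ZFS dataset; host and device names are invented:

  thumper# zfs create tank/home
  sol9# ufsdump 0f - /dev/rdsk/c0t0d0s7 | ssh thumper 'cd /tank/home && ufsrestore rf -'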

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-15 Thread Peter Eriksson
Speaking of error recovery due to bad blocks - does anyone know if the SATA disks that are delivered with the Thumper have enterprise or desktop firmware/settings by default? If I'm not mistaken, one of the differences is that the enterprise variant more quickly gives up on bad blocks and reports

[zfs-discuss] Thumper with many NFS-export ZFS filesystems

2007-12-12 Thread Peter Eriksson
[0] andromeda:/2common/sge# wc /etc/dfs/sharetab 1853 7412 157646 /etc/dfs/sharetab This machine (Thumper) currently runs Solaris 10 Update 3 (with some patches) and things work just fine. Now, I'm a bit worried about reboot times due to the number of exported filesystems and I'm thinking
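
For illustration only (assuming a pool called tank), the share count and where sharenfs is actually set can be checked with:

  # wc -l /etc/dfs/sharetab               # one line per active NFS share
  # zfs get -r -s local sharenfs tank     # datasets where sharenfs is set explicitly; children inherit it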

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-12-29 Thread Peter Eriksson
Still no news on when a real patch will be released for this issue?

Re: [zfs-discuss] [perf-discuss] help diagnosing system hang

2009-07-07 Thread Peter Eriksson
Interesting... I wonder what differs between your system and mine. With my dirt-simple stress-test: server1# zpool create X25E c1t15d0 server1# zfs set sharenfs=rw X25E server1# chmod a+w /X25E server2# cd /net/server1/X25E server2# gtar zxf /var/tmp/emacs-22.3.tar.gz and a fully patched

Re: [zfs-discuss] surprisingly poor performance

2009-07-08 Thread Peter Eriksson
You might want to try one thing I just noticed - wrapping the log device inside an SVM (DiskSuite) metadevice works wonders for the performance on my test server (Sun Fire X4240)... I do wonder what the downsides might be (except for having to fiddle with DiskSuite again). I.e.: # zpool create TEST

Re: [zfs-discuss] surprisingly poor performance

2009-07-08 Thread Peter Eriksson
Oh, and for completeness: if I wrap 'c1t12d0s0' inside an SVM metadevice and use that to create the TEST zpool (without a log), I run the same test command in 36.3 seconds... I.e.: # metadb -f -a -c3 c1t13d0s0 # metainit d0 1 1 c1t13d0s0 # metainit d2 1 1 c1t12d0s0 # zpool create TEST
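
A hedged guess at how the truncated command sequence continues - the description above implies the pool is built directly on the d2 metadevice, with no separate log device:

  # zpool create TEST /dev/md/dsk/d2

followed by the same gtar stress test as before.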

[zfs-discuss] Issues with ZFS and SVM?

2009-07-09 Thread Peter Eriksson
I wonder exactly what's going on. Perhaps it is the cache flushes that are causing the SCSI errors when trying to use the SSD (Intel X25-E and X25-M) disks? Btw, I'm seeing the same behaviour on both an X4500 (SATA/Marvell controller) and the X4240 (SAS/LSI controller). Well, almost. On the
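
One hedged way to test the cache-flush hypothesis (a diagnostic only - running without cache flushes risks data loss on power failure):

  # echo 'zfs_nocacheflush/W0t1' | mdb -kw     # disable ZFS cache-flush requests on the live kernel
  or, persistently, in /etc/system:
  set zfs:zfs_nocacheflush = 1

If the SCSI errors disappear with flushes disabled, the flush path is the likely culprit.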

[zfs-discuss] LVM and ZFS

2009-07-29 Thread Peter Eriksson
I'm curious whether there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where letting ZFS use a device directly causes bus-related errors on the console and lots of stalled I/O. But as soon as I wrap that device inside an LVM metadevice

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-24 Thread Peter Eriksson
You can zpool replace a bad slog device now. From which kernel release is this implemented/working?
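
For reference, a minimal sketch of the operation being asked about, with invented device names (tank's slog c2t0d0 has failed and c2t1d0 is the replacement):

  # zpool status tank                    # shows the faulted log device
  # zpool replace tank c2t0d0 c2t1d0     # swap in the new slog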

Re: [zfs-discuss] [fm-discuss] self test failure on Intel X25-E SSD

2009-09-09 Thread Peter Eriksson
I've done some more testing, and I think my X4240/mpt/X25 problems must be something else. Attempting to read (with smartctl) the self-test log on the 8850-firmware X25-E gives better results than with the old firmware: X25-E running firmware 8850 on an X4240 with mpt controller: # smartctl -d

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-14 Thread Peter Eriksson
I can confirm that on an X4240 with the LSI (mpt) controller: X25-M G1 with 8820 still returns invalid selftest data; X25-E G1 with 8850 now returns correct selftest data (I haven't got any X25-M G2). Going to replace an X25-E with the old firmware in one of our X4500s soon and we'll see if things

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-14 Thread Peter Eriksson
I've now tested a firmware 8850 X25-E in one of our X4500s and things look better: # /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0 smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ No self-tests have been
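
A sketch of how that empty self-test log could be populated, assuming the same -d scsi device handling works for starting tests on this controller:

  # /ifm/bin/smartctl -d scsi -t short /dev/rdsk/c5t7d0s0     # kick off a short self-test
  # /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0  # read the log once it completes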

Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Peter Eriksson
Have you tried wrapping your disks inside LVM metadevices and then using those for your ZFS pool?

Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Peter Eriksson
What type of disks are you using?

[zfs-discuss] ZFS confused about disks?

2010-02-01 Thread Peter Eriksson
I'm trying to put an older Fibre Channel RAID (Fujitsu Siemens S80) box into use again with ZFS on a Solaris 10 (Update 8) system, but it seems ZFS gets confused about which disk (LUN) is which... Back in the old days when we used these disk systems on another server we had problems with

Re: [zfs-discuss] ZFS confused about disks?

2010-02-18 Thread Peter Eriksson
I figured I'd post the solution to this problem here as well. Anyway, I solved the problem the old-fashioned way: tell Solaris to fake the disk device IDs... I added the following to /kernel/drv/ssd.conf: ssd-config-list="EUROLOGC","unsupported-hack"; unsupported-hack=1,0x8,0,0,0,0,0;

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Peter Eriksson
What kind of overhead do we get from this kind of thing? Overheadache... (Thanks to Kronberg for the answer.)

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2010-06-10 Thread Peter Eriksson
Just a quick follow-up: the same issue still seems to be there on our X4500s with the latest Solaris 10 and all the latest patches, with the following SSD disks: Intel X25-M G1, firmware 8820 (80GB MLC); Intel X25-M G2, firmware 02HD (160GB MLC). However, things seem to work smoothly with:

[zfs-discuss] Sharing root and cache on same SSD?

2010-06-10 Thread Peter Eriksson
Are there any potential problems one should be aware of when dual-purposing a pair of MLC SSDs, using part of them as mirrored (ZFS) boot disks and the rest as ZFS L2ARC cache devices (for another zpool)? The one thing I can think of is potential
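
A minimal sketch of the layout being considered, with invented device and slice numbers (s0 for the boot pool, s4 for cache on each SSD):

  # zpool create rpool mirror c0t0d0s0 c0t1d0s0    # mirrored ZFS boot pool (normally set up at install time)
  # zpool add tank cache c0t0d0s4 c0t1d0s4         # leftover slices as L2ARC for the data pool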