Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Dick Davies
On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote: Client A - import pool make couple-o-changes Client B - import pool -f (heh) Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80: Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice] assertion failed: dmu_read(os,
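
A minimal sketch of the sequence that triggers the panic, assuming a pool named tank visible to both hosts (pool and file names are illustrative):

    # Client A: import the pool and make a few changes
    zpool import tank
    touch /tank/a-file

    # Client B: force-import the same pool while A still has it (the 'heh')
    zpool import -f tank

    # Both hosts now write independently; after exporting, a later
    # 'zpool import tank' hits the corruption and panics the box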

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Louwtjie Burger
Would it be easier to ... 1) Change ZFS code to enable a sort of directIO emulation and then run various tests... or 2) Use Sun's performance team, which has all the experience in the world when it comes to performing benchmarks on Solaris and Oracle .. + a DTrace master to drill down and see

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Ben Rockwood
Dick Davies wrote: On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote: Client A - import pool make couple-o-changes Client B - import pool -f (heh) Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80: Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice]

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Victor Engle
Wouldn't this be the known feature where a write error to zfs forces a panic? Vic On 10/4/07, Ben Rockwood [EMAIL PROTECTED] wrote: Dick Davies wrote: On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote: Client A - import pool make couple-o-changes Client B - import

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Nathan Kroenert
I think it's a little more sinister than that... I'm only just trying to import the pool. Not even yet doing any I/O to it... Perhaps it's the same cause, I don't know... But I'm certainly not convinced that I'd be happy with a 25K, for example, panicking just because I tried to import a dud

[zfs-discuss] zones on zfs

2007-10-04 Thread Neal Miskin
Hi I have a Netra T1 with 2 int disks. I want to install Sol 10 8/07 and build 2 zones (one as an ftp server and one as an scp server) and would like the system mirrored. My thoughts are to use SVM to mirror the / partitions, then build a mirrored zfs pool using slice 5 on both disks (I know
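
In outline, the proposed layout might look like this, assuming the two internal disks are c0t0d0 and c0t1d0 with root on slice 0 and the pool on slice 5 (device names and slices are assumptions):

    # SVM: mirror the root slice
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7   # state database replicas
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10
    metaroot d0                           # updates vfstab/system; reboot next
    metattach d0 d20                      # attach second half after the reboot

    # ZFS: mirrored pool for the zones on slice 5 of both disks
    zpool create zones mirror c0t0d0s5 c0t1d0s5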

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Victor Engle
Perhaps it's the same cause, I don't know... But I'm certainly not convinced that I'd be happy with a 25K, for example, panicking just because I tried to import a dud pool... I'm ok(ish) with the panic on a failed write to non-redundant storage. I expect it by now... I agree, forcing a

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jim Mauro
Where does the win come from with direct I/O? Is it 1), 2), or some combination? If it's a combination, what's the percentage of each towards the win? That will vary based on workload (I know, you already knew that ... :^). Decomposing the performance win between what is gained as a

Re: [zfs-discuss] About bug 6486493 (ZFS boot incompatible with

2007-10-04 Thread Ivan Wang
This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We now store physical device paths with the vnodes, so even though the SATA framework doesn't correctly support open by devid in early boot, we But if I read it right, there is still a problem in the SATA framework (failing

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Roch - PAE
Jim Mauro writes: Where does the win come from with direct I/O? Is it 1), 2), or some combination? If it's a combination, what's the percentage of each towards the win? That will vary based on workload (I know, you already knew that ... :^). Decomposing the performance

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread eric kustarz
Client A - import pool make couple-o-changes Client B - import pool -f (heh) Client A + B - With both mounting the same pool, touched a couple of files, and removed a couple of files from each client Client A + B - zpool export Client A - Attempted import and dropped the panic.

[zfs-discuss] ZFS Crypto Alpha Release

2007-10-04 Thread Darren J Moffat
I'm pleased to announce that the ZFS Crypto project now has Alpha release binaries that you can download and try. Currently we only have x86/x64 binaries available, SPARC will be available shortly. Information on the Alpha release of ZFS Crypto and links for downloading the binaries is here:
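
For the curious, dataset-level encryption in the project is driven through ZFS properties; a hypothetical invocation, assuming the Alpha follows the project's proposed property syntax (check the release notes shipped with the binaries for the exact name and key-handling options):

    # Hypothetical syntax for the Alpha: create an encrypted dataset
    zfs create -o encryption=on tank/secure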

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread A Darren Dunham
On Thu, Oct 04, 2007 at 08:36:10AM -0600, eric kustarz wrote: Client A - import pool make couple-o-changes Client B - import pool -f (heh) Client A + B - With both mounting the same pool, touched a couple of files, and removed a couple of files from each client Client A +

[zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-04 Thread Kugutsumen
Lori Alt told me that mountroot was a temporary hack until grub could boot zfs natively. Since build 62, mountroot support was dropped and I am not convinced that this is a mistake. Let's compare the two: Mountroot: Pros: * can have root partition on raid-z: YES * can have root

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Roch - PAE
eric kustarz writes: Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm surprised that this is being met with skepticism considering that Oracle highly recommends direct IO be used, and, IIRC, Oracle performance was the main motivation for adding DIO to UFS back

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Nicolas Williams
On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote: It does, which leads to the core problem. Why do we have to store the exact same data twice in memory (i.e., once in the ARC, and once in the shared memory segment that Oracle uses)? We do not retain 2 copies of the same

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Nicolas Williams
On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote: ...memory utilisation... OK so we should implement the 'lost cause' rfe. In all cases, ZFS must not steal pages from other memory consumers: 6488341 ZFS should avoid growing the ARC into trouble So the DB memory pages
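
Until that RFE lands, the common workaround is to cap the ARC explicitly; a sketch for /etc/system, with the 1 GB limit as an arbitrary example value:

    * Cap the ZFS ARC at 1 GB (value in bytes); requires a reboot
    set zfs:zfs_arc_max = 0x40000000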

Re: [zfs-discuss] About bug 6486493 (ZFS boot incompatible with

2007-10-04 Thread Eric Schrock
On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote: This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We now store physical device paths with the vnodes, so even though the SATA framework doesn't correctly support open by devid in early boot, we But if I

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Roch - PAE
Nicolas Williams writes: On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote: ...memory utilisation... OK so we should implement the 'lost cause' rfe. In all cases, ZFS must not steal pages from other memory consumers: 6488341 ZFS should avoid growing the ARC

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Nicolas Williams
On Thu, Oct 04, 2007 at 06:59:56PM +0200, Roch - PAE wrote: Nicolas Williams writes: On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote: So the DB memory pages should not be _contented_ for. What if your executable text, and pretty much everything lives on ZFS? You don't

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Eric Hamilton
I'd like to second a couple of comments made recently: * If they don't regularly do so, I too encourage the ZFS, Solaris performance, and Sun Oracle support teams to sit down and talk about the utility of Direct I/O for databases. * I too suspect that absent Direct I/O (or some ringing

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-04 Thread Eric Schrock
Remember that you have to maintain an entirely separate slice with yet another boot environment. This causes huge amounts of complexity in terms of live upgrade, multiple BE management, etc. The old mountroot solution was useful for mounting ZFS root, but completely unmaintainable from an

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Roch - PAE
Nicolas Williams writes: On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote: It does, which leads to the core problem. Why do we have to store the exact same data twice in memory (i.e., once in the ARC, and once in the shared memory segment that Oracle uses)? We

Re: [zfs-discuss] Do we have a successful installation method for patch 120011-14?

2007-10-04 Thread Brian H. Nelson
Manually installing the obsolete patch 122660-10 has worked fine for me. Until Sun fixes the patch dependencies, I think that is the easiest way. -Brian Bruce Shaw wrote: It fails on my machine because it requires a patch that's deprecated.
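
The workaround in outline, assuming the patches have been downloaded and unpacked under /var/tmp (the path is illustrative):

    # Install the obsolete prerequisite by hand, then the patch that needs it
    cd /var/tmp
    patchadd 122660-10
    patchadd 120011-14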

Re: [zfs-discuss] Convert Raid-Z to Mirror

2007-10-04 Thread Brian King
Update to this. Before destroying the original pool the first time, offline the disk you plan on re-using in the new pool. Otherwise when you destroy the original pool for the second time it causes issues with the new pool. In fact, if you attempt to destroy the new pool immediately after
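
A sketch of the ordering being described, with pool and disk names as assumptions:

    # Offline the disk you intend to reuse BEFORE destroying the original pool
    zpool offline oldpool c1t3d0
    zpool destroy oldpool

    # The reused disk can now go into the new mirror cleanly
    zpool create newpool mirror c1t3d0 c1t4d0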

Re: [zfs-discuss] Do we have a successful installation method for patch 120011-14?

2007-10-04 Thread Rob Windsor
Yeah, the only thing wrong with that patch is that it eats /etc/sma/snmp/snmpd.conf. All is not lost, your original is copied to /etc/sma/snmp/snmpd.conf.save in the process. Rob++ Brian H. Nelson wrote: Manually installing the obsolete patch 122660-10 has worked fine for me. Until Sun

Re: [zfs-discuss] Do we have a successful installation method for patch 120011-14?

2007-10-04 Thread Brian H. Nelson
It was 120272-12 that caused the snmpd.conf problem and was withdrawn. 120272-13 has replaced it and has that bug fixed. 122660-10 does not have any issues that I am aware of. It is only obsolete, not withdrawn. Additionally, it appears that the circular patch dependency is by design if you

Re: [zfs-discuss] chgrp -R hangs all writes to pool

2007-10-04 Thread Stuart Anderson
On Mon, Jul 16, 2007 at 09:36:06PM -0700, Stuart Anderson wrote: Running Solaris 10 Update 3 on an X4500 I have found that it is possible to reproducibly block all writes to a ZFS pool by running chgrp -R on any large filesystem in that pool. As can be seen below in the zpool iostat output
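
A rough reproduction, with pool, filesystem, and group names as assumptions:

    # Terminal 1: watch pool throughput
    zpool iostat tank 5

    # Terminal 2: recursive group change over a large filesystem
    chgrp -R staff /tank/bigfs
    # while this runs, writes to the rest of the pool stall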

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Nathan Kroenert
Eric - Thanks for that, but I know the pool is corrupted - That was kind of the point of the exercise. The bug (at least to me) is ZFS panicking Solaris just trying to import the dud pool. But, maybe I'm missing your point? Nathan. eric kustarz wrote: Client A - import pool make

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Eric Schrock
On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote: Eric - Thanks for that, but I know the pool is corrupted - That was kind of the point of the exercise. The bug (at least to me) is ZFS panicking Solaris just trying to import the dud pool. But, maybe I'm missing your

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-04 Thread Andre Wenas
Hi, Using bootroot I can do a separate /usr filesystem since b64. I can also do snapshot, clone and compression. Rgds, Andre W. Kugutsumen wrote: Lori Alt told me that mountroot was a temporary hack until grub could boot zfs natively. Since build 62, mountroot support was dropped and I am

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Nathan Kroenert
Awesome. Thanks, Eric. :) This type of feature / fix is quite important to a number of the guys in our local OSUG. In particular, they are adamant that they cannot use ZFS in production until it stops panicking the whole box for isolated filesystem / zpool failures. This will be a big

Re: [zfs-discuss] O.T. patches for OpenSolaris

2007-10-04 Thread Dick Davies
On 30/09/2007, William Papolis [EMAIL PROTECTED] wrote: Henk, By upgrading do you mean, rebooting and installing Open Solaris from DVD or Network? Like, no Patch Manager, install some quick patches and updates and a quick reboot, right? You can live upgrade and then do a quick reboot:
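
The Live Upgrade sequence alluded to, in outline (the BE name and image path are assumptions):

    # Create an alternate boot environment, upgrade it, activate, reboot
    lucreate -n newBE
    luupgrade -u -n newBE -s /net/install/os_image
    luactivate newBE
    init 6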

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Anton B. Rang
5) DMA straight from user buffer to disk avoiding a copy. This is what the 'direct' in direct I/O has historically meant. :-) line has been that 5) won't help latency much and latency is where I think the game is currently played. Now the disconnect might be because people might feel that the

[zfs-discuss] ZFS for OSX - it'll be in there.

2007-10-04 Thread Dale Ghent
...and eventually in a read-write capacity: http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-developer-preview-1-1-for-leopard/ Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac OS X to Developers this week. The preview updates a previous build released on

Re: [zfs-discuss] ZFS for OSX - it'll be in there.

2007-10-04 Thread Ben Rockwood
Dale Ghent wrote: ...and eventually in a read-write capacity: http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-developer-preview-1-1-for-leopard/ Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac OS X to Developers this week. The preview updates a

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jonathan Loran
I've been thinking about this for a while, but Anton's analysis makes me think about it even more: We all love ZFS, right. It's futuristic in a bold new way, with many virtues; I won't preach to the choir. But to make it all glue together has some necessary CPU/Memory intensive