Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Jonathan Loran
Bob Friesenhahn wrote: So for Linux, I think that you will also need to figure out an indirect-map incantation which works for its own broken automounter. Make sure that you read all available documentation for the Linux automounter so you know which parts don't actually work. Oh
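
For reference, a minimal sketch of the kind of indirect-map incantation under discussion, assuming Linux autofs and a hypothetical server named fileserver exporting under /zfs01 (map file names and mount point are invented for illustration):

    # /etc/auto.master -- attach the indirect map at /nfs/zfs01
    /nfs/zfs01  /etc/auto.zfs01

    # /etc/auto.zfs01 -- one key per ZFS filesystem
    rep1  -fstype=nfs,rw  fileserver:/zfs01/rep1
    rep2  -fstype=nfs,rw  fileserver:/zfs01/rep2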

Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread Simon Breden
Hi Rick, OK, thanks for clarifying. As it seems there are different devices with (1) mixed-speed NICs and (2) mixed-category cabling in your setup, I will simplify things by saying that if you want to get much faster speeds then I think you'll need to ensure you (1) use at least
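
A quick way to verify what each link actually negotiated, sketched for both ends (Solaris server and a Linux client; the interface name is an example):

    # Solaris 10: negotiated speed/duplex per NIC
    dladm show-dev

    # Linux client (assumes ethtool is installed)
    ethtool eth0 | grep -E 'Speed|Duplex'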

Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread dh
Hello eschrock, I'm a newbie on Solaris; would you tell me how I can get/install build 89 of Nevada? Fabrice.

Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Spencer Shepler
On Apr 29, 2008, at 9:35 PM, Tim Wood wrote: Hi, I have a pool /zfs01 with two sub-filesystems /zfs01/rep1 and /zfs01/rep2. I used 'zfs share' to make all of these mountable over NFS, but clients have to mount either rep1 or rep2 individually. If I try to mount /zfs01 it shows
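
A sketch of the two sides being discussed, assuming a client whose NFSv4 implementation supports mirror mounts (without that, rep1 and rep2 must each be mounted explicitly):

    # Server: share the parent filesystem and its children
    zfs set sharenfs=on zfs01

    # Solaris client: child filesystems appear under /mnt as traversed
    mount -F nfs -o vers=4 server:/zfs01 /mnt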

Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Bob Friesenhahn
On Tue, 29 Apr 2008, Jonathan Loran wrote: Au contraire, Bob. I'm not going to boost Linux, but in this department they've tried to do it right. If you use Linux autofs V4 or higher, you can use Sun-style maps (except there are no direct maps in V4; you need V5 for direct maps). For our home
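
A minimal sketch of the Sun-style direct map he means, assuming autofs V5 and the same hypothetical fileserver (paths invented for illustration):

    # /etc/auto.master -- /- attaches a direct map
    /-  /etc/auto.direct

    # /etc/auto.direct -- Sun-style direct map entries
    /zfs01/rep1  -fstype=nfs,rw  fileserver:/zfs01/rep1
    /zfs01/rep2  -fstype=nfs,rw  fileserver:/zfs01/rep2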

Re: [zfs-discuss] Thumper / X4500 marvell driver issues

2008-04-30 Thread Doug
When we installed the Marvell driver patch 125205-07 on our X4500 a few months ago and it started crashing, Sun support just told us to back out that patch. The system has been stable since then. We are still running Solaris 10 11/06 on that system. Is there an advantage to using 125205-07
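
For the archive, backing out a patch on Solaris 10 is a one-liner (a sketch; confirm the patch is actually installed first):

    # List installed patches, then remove the Marvell driver patch
    showrev -p | grep 125205-07
    patchrm 125205-07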

[zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Chris Siebenmann
I have a test system with 132 (small) ZFS pools[*], as part of our work to validate a new ZFS-based fileserver environment. In testing, it appears that we can produce situations that will run the kernel out of memory, or at least out of some resource such that things start complaining 'bash:
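
A sketch of the sort of simultaneous-IO load that reproduces this, assuming pools mounted at /pool1 .. /pool132 (names are hypothetical):

    # One background writer per pool, all at once
    i=1
    while [ $i -le 132 ]; do
        dd if=/dev/zero of=/pool$i/testfile bs=1024k count=1000 &
        i=`expr $i + 1`
    done
    wait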

Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Bill Moore
A silly question: Why are you using 132 ZFS pools as opposed to a single ZFS pool with 132 ZFS filesystems? --Bill On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote: I have a test system with 132 (small) ZFS pools[*], as part of our work to validate a new ZFS-based fileserver
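
The alternative Bill is suggesting, sketched with a hypothetical device and per-filesystem quotas standing in for per-pool size limits:

    # One pool, 132 filesystems; quotas bound each group's share
    zpool create tank c1t0d0
    i=1
    while [ $i -le 132 ]; do
        zfs create tank/fs$i
        zfs set quota=10g tank/fs$i
        i=`expr $i + 1`
    done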

Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Jeff Bonwick
Indeed, things should be simpler with fewer (generally one) pool. That said, I suspect I know the reason for the particular problem you're seeing: we currently do a bit too much vdev-level caching. Each vdev can have up to 10MB of cache. With 132 pools, even if each pool is just a single iSCSI
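
The arithmetic: 132 pools x 10MB of per-vdev cache is on the order of 1.3GB of kernel memory before any file data is cached at all. In builds of this era the vdev cache could be shrunk or disabled via /etc/system (tunable name as given in the ZFS tuning guides of the time; treat it as an assumption for your build):

    * /etc/system -- disable the per-vdev read-ahead cache
    set zfs:zfs_vdev_cache_size = 0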

Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Chris Siebenmann
| Still, I'm curious -- why lots of pools? Administration would be | simpler with a single pool containing many filesystems. The short answer is that it is politically and administratively easier to use (at least) one pool per storage-buying group in our environment. This got discussed in more

Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-30 Thread Albert Lee
On Tue, 2008-04-29 at 15:02 +0200, Ulrich Graef wrote: Hi, ZFS won't boot on my machine. I discovered that the lu manpages are there, but not the new binaries. So I tried to set up ZFS boot manually: zpool create -f Root c0t1d0s0; lucreate -n nv88_zfs -A nv88; finally on ZFS
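
A sketch of the manual sequence being attempted, assuming the pool really is named Root as above (-p names the target root pool; exact flags vary by build, so treat this as illustrative rather than definitive):

    zpool create -f Root c0t1d0s0
    lucreate -n nv88_zfs -A nv88 -p Root
    luactivate nv88_zfs
    init 6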