Re: [zfs-discuss] fchmod(2) returns ENOSPC on ZFS

2007-06-15 Thread Manoj Joseph
Matthew Ahrens wrote: In a COW filesystem such as ZFS, it will sometimes be necessary to return ENOSPC in cases such as chmod(2) which previously did not. This is because there could be a snapshot, so overwriting some information actually requires a net increase in space used. That said, we
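(As a rough illustration of that point, a sketch only, with made-up pool and file names; on a throwaway file-backed pool the snapshot interaction is easy to see:)

    # a tiny pool backed by a file
    mkfile 64m /var/tmp/tinypool
    zpool create tiny /var/tmp/tinypool
    # fill it, then snapshot so the existing blocks stay referenced
    dd if=/dev/zero of=/tiny/fill bs=1024k   # runs until the pool is full
    zfs snapshot tiny@before
    # overwriting the file's metadata is now a net increase in space,
    # so even a chmod may come back with ENOSPC
    chmod 600 /tiny/fill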

Re: [zfs-discuss] fchmod(2) returns ENOSPC on ZFS

2007-06-15 Thread Matthew Ahrens
Manoj Joseph wrote: Matthew Ahrens wrote: In a COW filesystem such as ZFS, it will sometimes be necessary to return ENOSPC in cases such as chmod(2) which previously did not. This is because there could be a snapshot, so overwriting some information actually requires a net increase in space

Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Alec Muffett
As I understand matters, from my notes to design the perfect home NAS server :-) 1) you want to give ZFS entire spindles if at all possible; that will mean it can enable and utilise the drive's hardware write cache properly, leading to a performance boost. You want to do this if you can.
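(A hedged sketch of what "entire spindles" means in practice; the device names are placeholders:)

    # whole-disk vdevs (no sN slice suffix) let ZFS manage the drive's write cache
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    # with slice vdevs, ZFS leaves the write cache alone:
    # zpool create tank raidz c1t0d0s6 c1t1d0s6 c1t2d0s6 c1t3d0s6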

[zfs-discuss] Re: ZFS wastes disk space?

2007-06-15 Thread Samuel Borgman
Tsk, turns out MySQL was holding on to some old files. Thanks, Daniel!

Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-15 Thread Joerg Schilling
Ed Ravin [EMAIL PROTECTED] wrote: 15 years ago, Novell Netware started to return a fixed size of 512 for all directories via NFS. If there is still unfixed code, there is no help. The Novell behavior, commendable as it is, did not break the BSD scandir() code, because BSD scandir()
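(For anyone following along, the behavior under discussion shows up in a plain directory listing; the numbers below are illustrative only, since ZFS reports a directory's st_size as roughly the number of entries rather than a byte count:)

    ls -ld /tank/somedir     # on ZFS: size might be 5 for a directory with a few files
    ls -ld /ufs/somedir      # on UFS: a multiple of 512 bytes, e.g. 512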

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Mario Goebbels
I definitely *don't* want to use flash for swap... You could use a ZVOL on the RAID-Z. OK, not the most efficient thing, but there's no sort of flag to disable parity on a specific object. I wish there was, exactly for this reason. -mg
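(A sketch of the ZVOL-for-swap idea, with an invented size and pool name:)

    # carve a 2 GB volume out of the raidz pool and add it as swap
    zfs create -V 2g tank/swapvol
    swap -a /dev/zvol/dsk/tank/swapvol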

Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-15 Thread Tomas Ögren
On 14 June, 2007 - Bill Sommerfeld sent me these 0,6K bytes: On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote: The implication of which, of course, is that any app built for Solaris 9 or before which uses scandir may have picked up a broken one. Or any app which includes its own

[zfs-discuss] Re: ZFS Boot manual setup in b65

2007-06-15 Thread Douglas Atique
No. There is nothing else the OS can do when it cannot mount the root filesystem. I have the impression (though I didn't check) that the pool is made available just by setting some information in its main superblock or something similar (sorry for the imprecision in ZFS jargon). I understand

Re: Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Ian Collins
Alec Muffett wrote: As I understand matters, from my notes to design the perfect home NAS server :-) 1) you want to give ZFS entire spindles if at all possible; that will mean it can enable and utilise the drive's hardware write cache properly, leading to a performance boost. You want to do

[zfs-discuss] zfs and EMC

2007-06-15 Thread Dominik Saar
Hi there, I see strange behavior when I create a zfs pool on an EMC PowerPath pseudo device. I can create a pool on emcpower0a but not on emcpower2a; zpool core dumps with invalid argument. That's my second machine with powerpath and zfs; the first one works fine, even zfs/powerpath

[zfs-discuss] Virtual IP Integration

2007-06-15 Thread Vic Engle
Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI with the sharenfs and shareiscsi properties. Since these are both dependent on an IP, it would be pretty cool if there was also a virtual IP that
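(For context, the existing per-dataset share properties look like this; the last line is purely hypothetical and only shows the kind of property being wished for, not anything that exists:)

    zfs set sharenfs=on tank/home
    zfs set shareiscsi=on tank/vol1
    # hypothetical, does not exist today:
    # zfs set shareip=192.168.1.50 tank/home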

[zfs-discuss] ZFS zpool created with MPxIO devices question

2007-06-15 Thread James Lefebvre
Customer asks: Will SunCluster 3.2 support a ZFS zpool created with MPxIO devices instead of the corresponding DID devices? Will it cause any support issues? Thank you, James Lefebvre

Re: [zfs-discuss] zfs and EMC

2007-06-15 Thread Torrey McMahon
This sounds familiar... like something about the powerpath device not responding to the SCSI inquiry strings. Are you using the same version of powerpath on both systems? Same type of array on both? Dominik Saar wrote: Hi there, I see strange behavior when I create a zfs pool on an EMC
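(One hedged way to compare what the two hosts see, assuming the PowerPath pseudo devices show up under /dev/rdsk as usual:)

    # vendor/product/revision from the SCSI inquiry data, run on both machines
    iostat -En | grep -i emc
    # and check that the failing pseudo device carries a sane label
    prtvtoc /dev/rdsk/emcpower2a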

Re: [zfs-discuss] zfs and EMC

2007-06-15 Thread Dominik Saar
Same version on both systems. On Monday I'll pull the facts together; a few things stuck out to me ... there are some points that are very strange. On Friday, 15.06.2007 at 10:52 -0400, Torrey McMahon wrote: This sounds familiar... like something about the powerpath device not responding to the SCSI

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Ruben Wisniewski
Hi Rick, Hmm. Not sure I can do RAID5 (and boot from it). Presumably, though, this would continue to function if a drive went bad. It also prevents ZFS from managing the devices itself, which I think is undesirable (according to the ZFS Admin Guide). I'm also not sure if I have RAID5

Re: Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Rob Windsor
Ian Collins wrote: Alec Muffett wrote: As I understand matters, from my notes to design the perfect home NAS server :-) 1) you want to give ZFS entire spindles if at all possible; that will mean it can enable and utilise the drive's hardware write cache properly, leading to a performance

Re: Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Richard Elling
comments from the peanut gallery... Rob Windsor wrote: Ian Collins wrote: Alec Muffett wrote: As I understand matters, from my notes to design the perfect home NAS server :-) 1) you want to give ZFS entire spindles if at all possible; that will mean it can enable and utilise the drive's

Re: [zfs-discuss] Re: ZFS Boot manual setup in b65

2007-06-15 Thread Eric Schrock
On Fri, Jun 15, 2007 at 04:37:06AM -0700, Douglas Atique wrote: I have the impression (though I didn't check) that the pool is made available just by setting some information in its main superblock or something similar (sorry for the imprecision in ZFS jargon). I understand the OS knows which

Re: [zfs-discuss] Virtual IP Integration

2007-06-15 Thread Richard Elling
Vic Engle wrote: Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI with the sharenfs and shareiscsi properties. Since these are both dependent on an IP, it would be pretty cool if there was also a

Re: Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Will Murnane
On 6/15/07, Ian Collins [EMAIL PROTECTED] wrote: Alec Muffett wrote: 2) I've considered pivot-root solutions based around a USB stick or drive; cute, but I want a single tower box and no dongles. You could buy a laptop disk, or mount one of these on the motherboard:

Re: [zfs-discuss] Virtual IP Integration

2007-06-15 Thread Victor Engle
Well I suppose complexity is relative. Still, to use Sun Cluster at all I have to install the cluster framework on each node, correct? And even before that I have to install an interconnect with 2 switches unless I direct connect a simple 2 node cluster. My thinking was that ZFS seems to try and

Re: [zfs-discuss] Virtual IP Integration

2007-06-15 Thread Richard Elling
Victor Engle wrote: Well I suppose complexity is relative. Still, to use Sun Cluster at all I have to install the cluster framework on each node, correct? And even before that I have to install an interconnect with 2 switches unless I direct connect a simple 2 node cluster. Yes, rolling your

[zfs-discuss] Re: Karma Re: Re: Best use of 4 drives?

2007-06-15 Thread Tom Kimes
Here's a start for a suggested equipment list: Lian Li case with 17 drive bays (12 x 3.5", 5 x 5.25") http://www.newegg.com/Product/Product.aspx?Item=N82E1682064 Asus M2N32-WS motherboard has PCI-X and PCI-E slots. I'm using Nevada b64 for iSCSI targets:

Re: [zfs-discuss] Re: Karma Re: Re: Best use of 4 drives?

2007-06-15 Thread Neal Pollack
Tom Kimes wrote: Here's a start for a suggested equipment list: Lian Li case with 17 drive bays (12 x 3.5", 5 x 5.25") http://www.newegg.com/Product/Product.aspx?Item=N82E1682064 So it only has room for one power supply. How many disk drives will you be installing? It's not the steady

Re: Karma Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Ian Collins
Rob Windsor wrote: What 8-port-SATA motherboard models are Solaris-friendly? I've hunted and hunted and have finally resigned myself to getting a generic motherboard with PCIe-x16 and dropping in an Areca PCIe-x8 RAID card (in JBOD config, of course). I don't know about 8 port SATA, but I

Re: [zfs-discuss] Re: Karma Re: Re: Best use of 4 drives?

2007-06-15 Thread mike
On 6/15/07, Brian Hechinger [EMAIL PROTECTED] wrote: Hmmm, that's an interesting point. I remember the old days of having to stagger startup for large drives (physically large, not capacity large). Can that be done with SATA? I had to link 2 600w power supplies together to be able to power

[zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Rick Mann
I'm having a heckuva time posting to individual replies (I keep getting exceptions). I have a 1U rackmount server with 4 bays. I don't think there's any way to squeeze in a small IDE drive, and I don't want to reduce the swap transfer rate if I can avoid it. The machine has four 500 GB SATA

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Richard Elling
Rick Mann wrote: I'm having a heckuva time posting to individual replies (keep getting exceptions). I have a 1U rackmount server with 4 bays. I don't think there's any way to squeeze in a small IDE drive, and I don't want to reduce the swap transfer rate if I can avoid it. The machine has 4

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Ian Collins
Rick Mann wrote: Richard Elling wrote: For the time being, these SATA disks will operate in IDE compatibility mode, so don't worry about the write cache. There is some debate about whether the write cache is a win at all, but that is another rat hole. Go ahead and split off some

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Richard Elling
Rick Mann wrote: Richard Elling wrote: For the time being, these SATA disks will operate in IDE compatibility mode, so don't worry about the write cache. There is some debate about whether the write cache is a win at all, but that is another rat hole. Go ahead and split off some space for

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread David Dyer-Bennet
Richard Elling wrote: What I would do: 2 disks: slices 0 and 3 root (BE and ABE), slice 1 swap/dump, slice 6 ZFS mirror; 2 disks: whole disk mirrors. I don't understand slice 6 zfs mirror. A mirror takes *two* things of the same size. -- David Dyer-Bennet, [EMAIL PROTECTED];

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Marion Hakanson
[EMAIL PROTECTED] said: Richard Elling wrote: For the time being, these SATA disks will operate in IDE compatibility mode, so don't worry about the write cache. There is some debate about whether the write cache is a win at all, but that is another rat hole. Go ahead and split off some

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Ian Collins
David Dyer-Bennet wrote: Richard Elling wrote: What I would do: 2 disks: slices 0 and 3 root (BE and ABE), slice 1 swap/dump, slice 6 ZFS mirror; 2 disks: whole disk mirrors. I don't understand slice 6 zfs mirror. A mirror takes *two* things of the same size. Note the "2 disks:". Ian

Re: [zfs-discuss] Re: Karma Re: Re: Best use of 4 drives?

2007-06-15 Thread Will Murnane
On 6/15/07, Brian Hechinger [EMAIL PROTECTED] wrote: On Fri, Jun 15, 2007 at 02:27:18PM -0700, Neal Pollack wrote: So it only has room for one power supply. How many disk drives will you be installing? It's not the steady state current that matters, as much as it is the ability to handle

Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-15 Thread Richard Elling
Ian Collins wrote: David Dyer-Bennet wrote: Richard Elling wrote: What I would do: 2 disks: slices 0 and 3 root (BE and ABE), slice 1 swap/dump, slice 6 ZFS mirror; 2 disks: whole disk mirrors. I don't understand slice 6 zfs mirror. A mirror takes *two* things of the same size. Note the
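(Spelled out as commands, the layout above would look roughly like this; controller and target numbers are placeholders:)

    # first pair of disks: root/swap on other slices, slice 6 of each in a ZFS mirror
    zpool create tank mirror c0t0d0s6 c0t1d0s6
    # second pair: whole-disk mirror added to the same pool
    zpool add tank mirror c0t2d0 c0t3d0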

Re: [zfs-discuss] Re: Re: Mac OS X Leopard to use ZFS

2007-06-15 Thread George
I'm curious about something. Wouldn't ZFS `send` and `recv` be a perfect fit for Apple Time Machine in Leopard if glued together by some scripts? In this scenario you could have an external volume and simply send snapshots to it and reciprocate as needed with recv. Also, it would seem that
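(The glue George describes might look roughly like this; pool and snapshot names are invented for the example:)

    # initial full copy to the external pool
    zfs snapshot tank/home@mon
    zfs send tank/home@mon | zfs recv backup/home
    # later runs send only the delta since the previous snapshot
    zfs snapshot tank/home@tue
    zfs send -i tank/home@mon tank/home@tue | zfs recv backup/home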

Re: [zfs-discuss] Re: Re: Mac OS X Leopard to use ZFS

2007-06-15 Thread Richard Elling
George wrote: I'm curious about something. Wouldn't ZFS `send` and `recv` be a perfect fit for Apple Time Machine in Leopard if glued together by some scripts? In this scenario you could have an external volume and simply send snapshots to it and reciprocate as needed with recv. Also, it