Re: [zfs-discuss] HAMMER

2007-10-17 Thread Robert Milkowski
Hello Dave, Tuesday, October 16, 2007, 9:17:30 PM, you wrote: DJ you mean c9n ? ;) DJ does anyone actually *use* compression ? i'd like to see a poll on how many DJ people are using (or would use) compression on production systems that are DJ larger than your little department catch-all
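For anyone curious about trying it, enabling compression is a one-line property change. A minimal sketch, with placeholder pool/dataset names; note that compression only applies to blocks written after the property is set:

```shell
# Enable lzjb compression on a dataset (names are examples)
zfs set compression=lzjb tank/data

# Check the setting and the achieved ratio on already-written data
zfs get compression,compressratio tank/data
```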

Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-17 Thread Robert Milkowski
Hello Matthew, Wednesday, October 17, 2007, 1:46:02 AM, you wrote: MA Richard Elling wrote: Paul B. Henson wrote: On Fri, 12 Oct 2007, Paul B. Henson wrote: I've read a number of threads and blog posts discussing zfs send/receive and its applicability in such an implementation, but I'm
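The basic replication pattern under discussion can be sketched as follows; host and dataset names are examples, and the `-i` incremental form is what makes repeated runs cheap enough for failover use:

```shell
# Initial full replication to a standby host
zfs snapshot tank/fs@rep1
zfs send tank/fs@rep1 | ssh standby zfs recv -F backup/fs

# Subsequent runs send only the delta between snapshots
zfs snapshot tank/fs@rep2
zfs send -i tank/fs@rep1 tank/fs@rep2 | ssh standby zfs recv backup/fs
```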

Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
Is the mount using NFSv4? If so, there is likely a misguided mapping of the users/groups between the client and server. While not including BSD info, there is a little bit on NFSv4 user/group mappings at this blog: http://blogs.sun.com/nfsv4 It defaults to nfs ver. 3. As a sidenote samba is
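For reference, NFSv4 identifies owners as user@domain strings rather than numeric IDs, so a domain mismatch between client and server makes files show up as nobody. Two things worth checking on a Solaris server (the mount example uses Solaris client syntax; a BSD client would use `mount -t nfs` instead):

```shell
# The NFSv4 ID-mapping domain must match on client and server
grep NFSMAPID_DOMAIN /etc/default/nfs

# Or sidestep ID mapping entirely by forcing a v3 mount from the client
mount -F nfs -o vers=3 server:/export/home /mnt
```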

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Dave Johnson
From: Robert Milkowski [EMAIL PROTECTED] LDAP servers with several dozen millions accounts? Why? First you get about 2:1 compression ratio with lzjb, and you also get better performance. a busy ldap server certainly seems a good fit for compression but when i said large i meant, as in bytes

[zfs-discuss] Upgrade from B62 ZFS Boot/Root to B70b

2007-10-17 Thread Brian Hechinger
How painful is this going to be? Completely? -brian -- Perl can be fast and elegant as much as J2EE can be fast and elegant. In the hands of a skilled artisan, it can and does happen; it's just that most of the shit out there is built by people who'd be better suited to making sure that my

Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Paul Kraus
On 10/16/07, Claus Guttesen [EMAIL PROTECTED] wrote: I have created some zfs-partitions. First I create the home/user-partitions. Beneath that I create additional partitions. Then I have do a chown -R for that user. These partitions are shared using the sharenfs=on. The owner- and group-id is

Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
I have created some zfs-partitions. First I create the home/user-partitions. Beneath that I create additional partitions. Then I have do a chown -R for that user. These partitions are shared using the sharenfs=on. The owner- and group-id is 1009. These partitions are visible as the

[zfs-discuss] Home fileserver with solaris 10 and zfs

2007-10-17 Thread Sandro
hi I am currently running a linux box as my fileserver at home. It's got eight 250 gig sata2 drives connected to two sata pci controllers and configured as one big raid5 with linux software raid. Linux is (and solaris will be) installed on two separate mirrored disks. I've been playing around
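For an eight-disk setup like this, one common ZFS layout is a single raidz2 vdev, which survives two disk failures (versus one for the Linux raid5 it would replace). A sketch with placeholder device names:

```shell
# Eight data disks in one double-parity vdev (device names are examples)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                         c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Verify the layout and health
zpool status tank
```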

Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Paul Kraus
On 10/17/07, Claus Guttesen [EMAIL PROTECTED] wrote: Did you mount both the parent and all the children on the client ? No, I just assumed that the sub-partitions would inherit the same uid/gid as the parent. I have done a chown -R. Ahhh, the issue is not permissions, but how the

Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
Did you mount both the parent and all the children on the client ? No, I just assumed that the sub-partitions would inherit the same uid/gid as the parent. I have done a chown -R. Ahhh, the issue is not permissions, but how the NFS server sees the various directories to
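The underlying issue in this thread: each child ZFS filesystem is a separate NFS export, and mounting only the parent shows empty stub directories with the server-side ownership of the mountpoint, not the child's contents. Each child must be mounted explicitly (paths are examples; Solaris client syntax shown, BSD clients use `mount -t nfs`):

```shell
# Mounting only the parent is not enough
mount -F nfs server:/export/home/user     /home/user

# Each child filesystem needs its own mount
mount -F nfs server:/export/home/user/src /home/user/src
```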

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Carisdad
Dave Johnson wrote: From: Robert Milkowski [EMAIL PROTECTED] LDAP servers with several dozen millions accounts? Why? First you get about 2:1 compression ratio with lzjb, and you also get better performance. a busy ldap server certainly seems a good fit for compression but when i

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
We are using zfs compression across 5 zpools, about 45TB of data on iSCSI storage. I/O is very fast, with small fractional CPU usage (seat of the pants metrics here, sorry). We have one other large 10TB volume for nearline Networker backups, and that one isn't compressed. We already

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Richard Elling
Jonathan Loran wrote: We are using zfs compression across 5 zpools, about 45TB of data on iSCSI storage. I/O is very fast, with small fractional CPU usage (seat of the pants metrics here, sorry). We have one other large 10TB volume for nearline Networker backups, and that one isn't

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
Richard Elling wrote: Jonathan Loran wrote: snip... Do not assume that a compressed file system will send compressed. IIRC, it does not. Let's say, if it were possible to detect the remote compression support, couldn't we send it compressed? With higher compression rates, wouldn't that
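Since the send stream carries uncompressed records, compression on the wire has to be added externally today. A common workaround (dataset and host names are examples):

```shell
# Compress the stream in transit; it is stored per the receiving
# dataset's own compression property, not the sender's
zfs send tank/fs@snap | gzip | ssh remote 'gunzip | zfs recv backup/fs'
```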

Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-17 Thread Paul B. Henson
On Tue, 16 Oct 2007, Matthew Ahrens wrote: I know of customers who are using send|ssh|recv to replicate entire thumpers across the country, in production. I'm sure they'll speak up here if/when they find this thread... Ah, that's who I'd like to hear from :)... Thanks for the secondhand

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Tim Spriggs
Jonathan Loran wrote: Richard Elling wrote: Jonathan Loran wrote: snip... Do not assume that a compressed file system will send compressed. IIRC, it does not. Let's say, if it were possible to detect the remote compression support, couldn't we send it compressed?

[zfs-discuss] Lack of physical memory evidences

2007-10-17 Thread Dmitry Degrave
In the pre-ZFS era, we had observable parameters like scan rate and anonymous page-in/-out counters to discover situations when a system experiences a lack of physical memory. With ZFS, it's difficult to use the mentioned parameters to figure out situations like that. Does anyone have an idea what we can
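One starting point, assuming the `arcstats` kstat is available on the build in question: the ARC gives memory back under pressure before the page scanner works hard, so watching ARC size against its targets alongside vmstat gives a rough picture:

```shell
# Current ARC size versus its target and limits
kstat -n arcstats | egrep 'size|c_max|c_min'

# Scan rate (sr column) still signals genuine shortfall once the ARC
# has already shrunk
vmstat 5
```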

Re: [zfs-discuss] Adding my own compression to zfs

2007-10-17 Thread roland
being at $300 now - a friend of mine just adding another $100

[zfs-discuss] df command in ZFS?

2007-10-17 Thread David Runyon
I was presenting to a customer at the EBC yesterday, and one of the people at the meeting said using df in ZFS really drives him crazy (no, that's all the detail I have). Any ideas/suggestions? -- David Runyon Disk Sales Specialist Sun Microsystems, Inc. 4040 Palm Drive Santa Clara, CA 95054

Re: [zfs-discuss] df command in ZFS?

2007-10-17 Thread MC
I asked this recently, but haven't done anything else about it: http://www.opensolaris.org/jive/thread.jspa?messageID=155583#155583

Re: [zfs-discuss] df command in ZFS?

2007-10-17 Thread Mike Gerdts
On 10/17/07, David Runyon [EMAIL PROTECTED] wrote: I was presenting to a customer at the EBC yesterday, and one of the people at the meeting said using df in ZFS really drives him crazy (no, that's all the detail I have). Any ideas/suggestions? I suspect that this is related to the notion
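The likely complaint: every filesystem in a pool shares the same free space, so df's per-filesystem numbers don't add up the way they do for fixed partitions. The pool-aware view comes from zfs list (pool name is an example):

```shell
# Per-dataset used/available drawing on shared pool space
zfs list -o name,used,avail,refer -r tank

# Compare with df's view of the same filesystems
df -h /tank
```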

[zfs-discuss] GRUB + zpool version mismatches

2007-10-17 Thread Jason King
Apparently with zfs boot, if the zpool is a version grub doesn't recognize, it merely ignores any zfs entries in menu.lst, and apparently instead boots the first entry it thinks it can boot. I ran into this myself due to some boneheaded mistakes while doing a very manual zfs / install at the
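One way to spot the version skew before rebooting into a GRUB that can't read the pool (pool name is an example; the `version` pool property is only exposed on newer builds):

```shell
# Versions this zfs build (and its installed GRUB stage) understands
zpool upgrade -v

# The pool's on-disk version, where the property is available
zpool get version tank
```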

[zfs-discuss] Fracture Clone Into FS

2007-10-17 Thread Jason J. W. Williams
Hey Guys, It's not possible yet to fracture a snapshot or clone into a self-standing filesystem, is it? Basically, I'd like to fracture a snapshot/clone into its own FS so I can roll back past that snapshot in the original filesystem and still keep that data. Thank you in advance. Best Regards,
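The closest existing tool may be `zfs promote`, which reverses the clone/origin dependency: the clone becomes the parent and the original filesystem becomes dependent on it, so the original can then be rolled back or destroyed without losing the clone's data. A sketch with example names:

```shell
# Clone the snapshot you want to preserve
zfs clone tank/fs@keep tank/fs-keep

# Swap the dependency: tank/fs-keep becomes the origin
zfs promote tank/fs-keep
```

Note this swaps the dependency rather than making the clone fully independent of shared blocks.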

Re: [zfs-discuss] Home fileserver with solaris 10 and zfs

2007-10-17 Thread Ian Collins
Sandro wrote: hi I am currently running a linux box as my fileserver at home. It's got eight 250 gig sata2 drives connected to two sata pci controllers and configured as one big raid5 with linux software raid. Linux is (and solaris will be) installed on two separate mirrored disks. I've

Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-10-17 Thread Joel Miller
Ok...got a break from the 25xx release... Trying to catch up so...sorry for the late response... The 6120 firmware does not support the Cache Sync command at all... You could try using a smaller blocksize setting on the array to attempt to reduce the number of read/modify/writes that you will

[zfs-discuss] characterizing I/O on a per zvol basis.

2007-10-17 Thread Nathan Kroenert
Hey all - Time for my silly question of the day, and before I bust out vi and dtrace... Is there a simple, existing way I can observe the read / write / IOPS on a per-zvol basis? If not, is there interest in having one? Cheers! Nathan.
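As far as I know the built-in view stops at the vdev level; below is the pool-side view plus a DTrace sketch for the per-zvol question. The fbt probe names assume the kernel's zvol entry points are `zvol_read`/`zvol_write`, which may vary by build:

```shell
# Per-vdev (not per-zvol) bandwidth and IOPS, 5-second intervals
zpool iostat -v tank 5

# Rough per-operation counts at the zvol layer (probe names assumed)
dtrace -n 'fbt::zvol_read:entry,fbt::zvol_write:entry { @[probefunc] = count(); }'
```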