Re: [zfs-discuss] MySQL, Lustre and ZFS

2008-02-07 Thread Atul Vidwansa
Not sure why you would want these three together, but Lustre and ZFS will work together in the Lustre 1.8 release: ZFS will be the backend filesystem for Lustre servers. See http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU Cheers, -Atul On Feb 7, 2008 8:39 AM, kilamanjaro [EMAIL

[zfs-discuss] ZFS on Solaris and Mac Leopard

2008-02-07 Thread Klas Heggemann
For some time now, I have had a zfs pool, created (if I remember this correctly) on my x86 OpenSolaris box, with zfs version 6, and have it accessible on my Leopard Mac. I ran the ZFS beta on the Leopard beta with no problems at all. I've now installed the latest ZFS RW build on my Leopard and it work

[zfs-discuss] Is swap still needed on c0d0s1 to get crash dumps?

2008-02-07 Thread Roman Morokutti
Lori Alt writes in the netinstall README that a slice should be available for crash dumps. In order to get this done, the following line should be defined within the profile: filesys c0[t0]d0s1 auto swap So my question is: is this still needed, and how do I access a crash dump if one happened?
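For context, a minimal sketch of how a dedicated dump slice is usually inspected and read back on Solaris; the device name and hostname directory are placeholders, not taken from the thread:

    # show the current dump device and savecore directory
    dumpadm
    # point the dump device at the swap slice (device name is hypothetical)
    dumpadm -d /dev/dsk/c0t0d0s1
    # after a panic and reboot, extract the dump
    savecore -v /var/crash/myhost
    # examine it with the modular debugger
    mdb -k /var/crash/myhost/unix.0 /var/crash/myhost/vmcore.0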

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
I just installed nv82, so we'll see how that goes. I'm going to try the recordsize idea above as well. A note about UFS: I was told by our local admin guru that ZFS turns on write caching for disks, which is something a UFS file system should not have turned on, so that if I convert the

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
Unfortunately, I don't know the record size of the writes. Is it as simple as looking at the size of a file, before and after a client request, and noting the difference in size? This is binary data, so I don't know if that makes a difference, but the average write size is a lot smaller than

Re: [zfs-discuss] zpool destroy core dumps with unavailable iscsi device

2008-02-07 Thread Tim Foster
Hi Ross, On Thu, 2008-02-07 at 08:30 -0800, Ross wrote: While playing around with ZFS and iSCSI devices I've managed to remove an iscsi target before removing the zpool. Now any attempt to delete the pool (with or without -f) core dumps zpool. Any ideas how I get rid of this pool? Yep,

[zfs-discuss] zpool destroy core dumps with unavailable iscsi device

2008-02-07 Thread Ross
While playing around with ZFS and iSCSI devices I've managed to remove an iscsi target before removing the zpool. Now any attempt to delete the pool (with or without -f) core dumps zpool. Any ideas how I get rid of this pool?
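A hypothetical reproduction of the state Ross describes, assuming the Solaris iSCSI initiator; the target IQN and device name are made up:

    # create a pool on an iSCSI LUN (device name is made up)
    zpool create itank c2t01000003BA0ECA2Ad0
    # remove the target out from under the pool
    iscsiadm remove static-config iqn.1986-03.com.sun:target0,192.168.1.10
    # with the device gone, this is the command that core dumps
    zpool destroy -f itank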

[zfs-discuss] NFS device IDs for snapshot filesystems

2008-02-07 Thread A Darren Dunham
I notice that files within a snapshot show a different deviceID to stat than the parent file does. But this is not true when mounted via NFS. Is this a limitation of the NFS client, or just what the ZFS fileserver is doing? Will this change in the future? With NFS4 mirror mounts? -- Darren

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
To avoid making multiple posts, I'll just write everything here: -Moving to nv_82 did not seem to do anything, so it doesn't look like fsync was the issue. -Disabling the ZIL didn't do anything either. -Still playing with 'recsize' values but it doesn't seem to be doing much...I don't think I have a

[zfs-discuss] Lost intermediate snapshot; incremental backup still possible?

2008-02-07 Thread Ian
I keep my system synchronized to a USB disk from time to time. The script works by sending incremental snapshots to a pool on the USB disk, then deleting those snapshots from the source machine. A botched script ended up deleting a snapshot that was not successfully received on the USB disk.
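For readers following along, a sketch of the incremental scheme in question, with hypothetical pool and snapshot names. Once the intermediate snapshot is gone from the source, the usual fallback is a full send from the newest snapshot still common to both sides, or from scratch:

    # normal cycle: send the delta between the last common snapshot and a new one
    zfs snapshot tank/data@tuesday
    zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
    # if no common snapshot survives, start over with a full stream
    # (-F overwrites the existing destination dataset)
    zfs send tank/data@tuesday | zfs receive -F backup/data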

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
Slight correction. 'recsize' must be a power of 2 so it would be 8192.
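For completeness, a sketch of tuning the property (the dataset name is hypothetical); it only affects files written after the change:

    # recordsize must be a power of two, 512 bytes to 128K in builds of this era
    zfs set recordsize=8192 tank/rrd
    zfs get recordsize tank/rrd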

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
One thing I just observed is that the initial file size is 65796 bytes. When it gets an update, the file size remains at 65796. Is there a minimum file size?

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-02-07 Thread Jesus Cea
John-Paul Drawneek wrote: | I guess a USB pendrive would be slower than a | harddisk. Bad performance | for the ZIL. A decent pendrive of mine writes at 3-5MB/s. Sure there are faster ones, but any desktop harddisk can write at 50MB/s. If you are
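As a point of reference, a sketch of putting the ZIL on a dedicated log device, assuming a build with separate intent log support and a hypothetical device name:

    # add a dedicated log (slog) device to an existing pool
    zpool add tank log c3t0d0
    zpool status tank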

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-02-07 Thread Andy Lubel
With my (COTS) LSI 1068 and 1078 based controllers I get consistently better performance when I export all disks as jbod (MegaCli - CfgEachDskRaid0). I even went through all the loops and hoops with 6120's, 6130's and even some SGI storage and the result was always the same; better
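A sketch of what "export every disk and let ZFS do the RAID" looks like on the ZFS side of such a controller; the device names are made up:

    # each exported disk shows up as its own target; build the redundancy in ZFS
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0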

Re: [zfs-discuss] zfs send / receive between different opensolaris versions?

2008-02-07 Thread Albert Lee
On Wed, 2008-02-06 at 13:42 -0600, Michael Hale wrote: Hello everybody, I'm thinking of building out a second machine as a backup for our mail spool where I push out regular filesystem snapshots, something like a warm/hot spare situation. Our mail spool is currently running snv_67
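A sketch of the warm-spare scheme Michael describes, with hypothetical host and dataset names. One caveat: a stream generated on a newer build may not be receivable on an older one, so the spare should run the same or a newer release than the primary:

    # take a snapshot on the primary and push it to the spare
    zfs snapshot spool/mail@2008-02-07
    zfs send spool/mail@2008-02-07 | ssh spare zfs receive -d sparepool
    # subsequent runs send only the delta since the previous snapshot
    zfs send -i spool/mail@2008-02-06 spool/mail@2008-02-07 | ssh spare zfs receive -d sparepool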

Re: [zfs-discuss] nfs exporting nested zfs

2008-02-07 Thread Nicolas Williams
On Thu, Feb 07, 2008 at 01:54:58PM -0800, Andrew Tefft wrote: Let's say I have a zfs called pool/backups and it contains two zfs'es, pool/backups/server1 and pool/backups/server2 I have sharenfs=on for pool/backups and it's inherited by the sub-zfs'es. I can then nfs mount

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread William Fretts-Saxton
RRD4J isn't a DB, per se, so it doesn't really have a record size. In fact, I don't even know whether, when data is written to the binary file, it is contiguous or not, so the amount written may not directly correlate to a proper record size. I did run your command and found the size patterns

[zfs-discuss] nfs exporting nested zfs

2008-02-07 Thread Andrew Tefft
Let's say I have a zfs called pool/backups and it contains two zfs'es, pool/backups/server1 and pool/backups/server2 I have sharenfs=on for pool/backups and it's inherited by the sub-zfs'es. I can then nfs mount pool/backups/server1 or pool/backups/server2, no problem. If I mount pool/backups
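The setup in question, sketched with the names from the post (the client mount command assumes a Solaris client):

    # sharenfs is inherited by the child filesystems
    zfs set sharenfs=on pool/backups
    zfs list -o name,sharenfs -r pool/backups
    # on a client: mounting only the parent shows empty directories where the
    # children live, unless the client can cross into them (see the replies)
    mount -F nfs server:/pool/backups /mnt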

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Sanjeev Bagewadi
William, It should be fairly easy to find the record size using DTrace. Take an aggregation of the writes happening (aggregate on size for all the write(2) system calls). This would give a fair idea of the IO size pattern. Does RRD4J have a record size mentioned? Usually if it is a
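A sketch of the aggregation Sanjeev describes; the execname predicate is a guess at the application's process name:

    # distribution of write(2) sizes issued by the app, sampled for 30 seconds
    dtrace -n 'syscall::write:entry /execname == "java"/ { @sizes = quantize(arg2); } tick-30s { exit(0); }'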

[zfs-discuss] UFS on zvol Cache Questions...

2008-02-07 Thread Brad Diggs
Hello, I have a unique deployment scenario where the marriage of a ZFS zvol and UFS seems like a perfect match. Here is the list of feature requirements for my use case: * snapshots * rollback * copy-on-write * ZFS level redundancy (mirroring, raidz, ...) * compression * filesystem cache control
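The basic marriage Brad describes, sketched with hypothetical names and sizes:

    # carve a zvol out of the pool and put UFS on it
    zfs create -V 10g tank/ufsvol
    newfs /dev/zvol/rdsk/tank/ufsvol
    mount /dev/zvol/dsk/tank/ufsvol /export/ufs
    # ZFS-level features still apply underneath the UFS layer
    zfs snapshot tank/ufsvol@before-upgrade
    zfs set compression=on tank/ufsvol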

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-02-07 Thread Joel Miller
Much of the complexity in hardware RAID is in the fault detection, isolation, and management. The fun part is trying to architect a fault-tolerant system when the suppliers of the components cannot come close to enumerating most of the possible failure modes. What happens when a drive's

Re: [zfs-discuss] nfs exporting nested zfs

2008-02-07 Thread Cindy . Swearingen
Because of the mirror mount feature that was integrated into Solaris Express, build 77. You can read about it on page 20 of the ZFS Admin Guide: http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf Cindy Andrew Tefft wrote: Let's say I have a zfs called pool/backups and it contains

Re: [zfs-discuss] Is swap still needed on c0d0s1 to get crash dumps?

2008-02-07 Thread Richard Elling
Roman Morokutti wrote: Lori Alt writes in the netinstall README that a slice should be available for crash dumps. In order to get this done the following line should be defined within the profile: filesys c0[t0]d0s1 auto swap So my question is, is this still needed and how to access a

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread johansen
-Still playing with 'recsize' values but it doesn't seem to be doing much...I don't think I have a good understanding of what exactly is being written...I think the whole file might be overwritten each time because it's in binary format. The other thing to keep in mind is that the tunables like
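One practical consequence worth noting here (a sketch; the file names are hypothetical): recordsize applies only to blocks written after the property is set, so an existing file must be rewritten to pick up the new value:

    zfs set recordsize=8192 tank/rrd
    # existing files keep their old block size; rewrite a file to apply the new one
    cp /tank/rrd/app.rrd /tank/rrd/app.rrd.new && mv /tank/rrd/app.rrd.new /tank/rrd/app.rrd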

Re: [zfs-discuss] ? Removing a disk from a ZFS Storage Pool

2008-02-07 Thread James Andrewartha
Dave Lowenstein wrote: Couldn't we move fixing panic the system if it can't find a lun up to the front of the line? that one really sucks. That's controlled by the failmode property of the zpool, added in PSARC 2007/567 which was integrated in b77. -- James Andrewartha
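For reference, a sketch of the property James mentions (the pool name is hypothetical):

    # wait (the default) blocks I/O until the device returns; continue returns
    # EIO to new writes; panic preserves the old panic-on-failure behavior
    zpool set failmode=continue tank
    zpool get failmode tank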

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Vincent Fox
-Setting zfs_nocacheflush, though, got me drastically increased throughput: client requests took, on average, less than 2 seconds each! So, in order to use this, I should have a storage array, w/battery backup, instead of using the internal drives, correct? I have the option of using a
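The tunable in question is set system-wide in /etc/system (a sketch; this is only safe when every pool in the system sits behind battery-backed, nonvolatile cache):

    * add to /etc/system, then reboot
    set zfs:zfs_nocacheflush = 1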

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-02-07 Thread Kyle McDonald
Andy Lubel wrote: With my (COTS) LSI 1068 and 1078 based controllers I get consistently better performance when I export all disks as jbod (MegaCli - CfgEachDskRaid0). Is that really 'all disks as JBOD', or is it 'each disk as a single-drive RAID0'? It may not sound different on the

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread Daniel Cheng
William Fretts-Saxton wrote: Unfortunately, I don't know the record size of the writes. Is it as simple as looking @ the size of a file, before and after a client request, and noting the difference in size? This is binary data, so I don't know if that makes a difference, but the average